---
abstract: 'The LIGO detectors are sensitive to a variety of noise transients of non-astrophysical origin. Instrumental glitches and environmental disturbances increase the false alarm rate in the searches for gravitational waves. Using times already identified when the interferometers produced data of questionable quality, or when the channels that monitor the interferometer indicated non-stationarity, we have developed techniques to safely and effectively veto false triggers from the compact binary coalescences (CBCs) search pipeline.'
author:
- 'J. Slutsky, L. Blackburn, D. A. Brown, L. Cadonati, J. Cain, M. Cavaglià, S. Chatterji, N. Christensen, M. Coughlin, S. Desai, G. González, T. Isogai, E. Katsavounidis, B. Rankins, T. Reed, K. Riles, P. Shawhan, J. R. Smith, N. Zotov, J. Zweizig'
title: Methods for Reducing False Alarms in Searches for Compact Binary Coalescences in LIGO Data
---
Introduction
============
In October 2007, the Laser Interferometer Gravitational-wave Observatory (LIGO) [@Abbott:2007kv] detectors completed their fifth science run, denoted S5: a two-year period during which one year of triple-coincidence data was collected at design sensitivity. During S5, LIGO consisted of two interferometers in Hanford, Washington and one in Livingston, Louisiana. The Hanford interferometers had arm lengths of 4 km and 2 km, referred to as H1 and H2, respectively. In Livingston, there was a single 4 km interferometer, referred to as L1. The LIGO detectors were sensitive to coalescences of compact binaries out to 30 Mpc for binary neutron stars [@Collaboration:2009tt; @Abbott:2009qj], and even farther for binary black holes. Ongoing “Enhanced” LIGO upgrades, as well as the future “Advanced” LIGO upgrades, are planned to increase the sensitivity significantly.
The LIGO Scientific Collaboration (LSC) performs astrophysical searches for gravitational waves from compact binary coalescences (CBCs), including the inspiral, merger, and ringdown of compact binary systems of neutron stars and black holes. These searches use matched filtering with template banks that include a variety of durations and frequency ranges for inspirals or ringdowns. When a LIGO interferometer is locked and operational, data are recorded in what is called “science mode.” In order to confidently make statements on astrophysical upper limits and detections from science mode data, characterization of the LIGO detectors and their data quality is vital. The LIGO detectors are sensitive to a variety of noise transients of non-astrophysical origin, including disturbances within the instrument and environmental noise sources. Triggers generated by these disturbances may occur at different times and with different amplitudes for each template, increasing the false alarm rate in the searches for gravitational waves.
In this paper we discuss techniques for vetoing non-astrophysical transient noises effectively, and thereby reducing their effect on searches for gravitational waves. These methods were developed on searches for low mass CBCs in the first of the two years of S5 as described in [@Collaboration:2009tt; @Abbott:2009qj], though they are applicable to future searches as well.
In Section 2 we present the broad range of data quality issues that were present in LIGO data. In Section 3 we briefly review the search methods for which these techniques were developed. The techniques that we have so far developed and implemented to evaluate vetoes are explained in Section 4. The categorization and application of these vetoes is described in Section 5. In Section 6 we describe proposed methods to extend and automate our vetoes for use in future CBC searches. We present our conclusions in Section 7.
Data Quality Studies
====================
There are two broad categories of spurious transients in LIGO data: instrumental and environmental noises. Within these two classes of noise sources, there are dozens of identified phenomena that require vetoing. LIGO records hundreds of channels of data on the state of internal degrees of freedom of the interferometers, as well as the output from environmental sensors located nearby.
When a set of transients affects the stability or sensitivity of the gravitational wave data, members of the Detector Characterization and Glitch groups within the LIGO Scientific Collaboration (LSC) work to determine the source of these transients, using the auxiliary channel data [@Blackburn:2008ah]. When a given noise source is identified, it is documented with sets of time intervals referred to as data quality flags, which begin and end on integer GPS seconds. These data quality flags are discussed in Sections 2.1 and 2.2.
Alternatively, one can begin with the hundreds of data channels, and search for correlations between the outputs of some algorithm on these channels and the gravitational wave data channel. The “auxiliary channel” vetoes used by the CBC search are described in Section 2.3.
Data quality flags from Instrumental noise transients
-----------------------------------------------------
Each LIGO interferometer makes an extremely sensitive comparison of the lengths of its arms. This necessitates the ability to sense and control minute changes in displacement and alignment of the suspended optics, as well as intensity and phase in the laser [@Abbott:2007kv]. The control systems have both digital and analog components, as well as a variety of complex filtering schemes. Instrumental noise transients, or [*glitches*]{}, sometimes correspond to fluctuations of large amplitude and short duration in the control systems. While these may be prompted at times by environmental effects, they can be identified and vetoed by using the control channels alone, as they are well known failure modes of the control systems. We describe some examples in the following paragraphs.
- [Overflows.]{} The feedback control signals used to control the interferometer arm lengths and mirror alignments are processed and recorded in digital channels. When the amplitude of such a signal exceeds the maximum amplitude the channel can accommodate, it “overflows”, and the signal abruptly flattens to read as this maximum value, until the quantity falls back below this threshold. This discontinuity in the control signal usually introduces transients at the time of the overflow.
- [Calibration line dropouts.]{} Signals of single frequency are continuously injected into the feedback control system to provide calibration. Occasionally, these signals “drop out” for short periods of time, usually one second. This discontinuous jump in the control signal produces artifacts in the data both when the calibration line drops out and when it resumes.
- [Light scattering.]{} The two interferometers at the Hanford site share the same vacuum enclosure. During times when one of the interferometers was locked and in science mode, and the other was not locked, the swinging of the mirrors of the unlocked interferometer scattered light into the locked interferometer. This produced strong, short duration transients, though not necessarily overflows.
- [Arm cavity light dips.]{} Brief mirror misalignments caused drops in the power in the arm cavities, and thus transients in the data.
Data quality flags from environmental noise transients
------------------------------------------------------
Environmental noise transients correspond to the coupling of mechanical vibrations and electromagnetic glitches into the interferometer. Seismic motion, human activity near the LIGO sites, and weather are the most common sources of mechanical vibrations. As with instrumental transients, we describe in the following paragraphs some examples that have been identified by the LIGO detector characterization group.
- [Electromagnetic disturbances.]{} The electronic systems of the interferometers are susceptible to electromagnetic interference, due to glitches in the power lines and to electronics noise at both sites. Magnetometers arrayed around the detector are used to diagnose these signals.
- [Weather-related transients.]{} The Hanford site is arid, with little to block the wind from pushing against the buildings housing the interferometers. Times at Hanford with wind speeds above 30 MPH are problematic. Weather and ocean waves also contribute to ground motion in the frequency range 0.1 Hz to 0.35 Hz at both sites, particularly Livingston.
- [Seismic disturbances.]{} Seismic activity from different noise sources has different characteristic frequencies. Earthquakes around the globe introduce transient noise in the frequency range 0.03 Hz to 0.1 Hz. Nearby human activities such as trucks, logging, and trains produce disturbances with frequencies greater than 1 Hz, and can be so extreme that the interferometers often cannot stay locked. Even when they remain locked, significant noise transients frequently occur.
The ground motion described in the latter two bullets above occurs at frequencies below a few Hz, while the interferometers are most sensitive between 100 Hz and 1000 Hz. There is non-linear coupling from the ground motion into the interferometers, which results in increased glitching at higher frequencies during times of high ground motion; for this reason, these environmental effects are especially important.
KleineWelle triggers from auxiliary channels
--------------------------------------------
Data quality flags identify times affected by specific issues and use auxiliary channels as appropriate to locate the problems. Rather than starting with a known or plausible noise coupling, we search over hundreds of the auxiliary channels for transients that are coincident between any of these channels and the gravitational wave channel. This information can be used both for producing veto intervals, and for finding clues about the problems left unflagged.
A wavelet based algorithm, KleineWelle (KW) [@Chatterji:2004qg], is used to analyze interferometer control and environmental data. It is a valuable source of triggers for detector characterization because its low computational cost allows it to be applied to many data channels. The algorithm produces trigger lists containing the peak time and significance of each trigger. During S5, KW analyzed the gravitational wave channel and a variety of important auxiliary channels for the three LIGO detectors, including interferometer channels used in the feedback systems of the detectors and channels containing data from the environmental monitors.
Searches for Gravitational Waves from CBCs
==========================================
Matched filtering is the optimal method of finding known signals in data with stationary Gaussian noise [@wainstein:1962]. The searches for CBCs in S5 data used a matched filter method to compare theoretically predicted waveforms with the LIGO gravitational wave channel data [@Allen:2005fk]. Because the masses of the components of the binary determine both the duration and the frequency profile of the gravitational radiation, template banks consisting of many different waveforms are used [@Owen:1998dk].
![Matched filtering an impulse in Gaussian noise, colored based on the LIGO design sensitivity, using the CBC search with a single template. The result for two templates, top consisting of a binary of 1 solar mass objects, and bottom a binary of 8 solar mass objects. The red dashed line is the search threshold $\rho_{*}$ of 5.5, and the green circles indicate the positions of the resulting triggers. []{data-label="fig:twoTemplates"}](figure1.eps){width="80.00000%"}
When the signal to noise ratio $\rho$ of a template rises above a pre-set threshold $\rho_*$, a trigger corresponding to the time of coalescence for the binary system is recorded [@Allen:2005fk]. Single interferometer triggers are compared for coincidence in time and component mass parameters between the different interferometers, to produce coincident triggers. These coincident triggers are further subjected to signal consistency checks, for example a $\chi^2$ test in time/frequency bins between the template and the data [@Allen:2004]. In order to estimate the accidental rate of the coincident triggers, the searches perform “time-slide” analysis, in which the data from the interferometers are offset by time shifts large compared to the light travel time between the detectors, resulting in the production of coincident triggers that can only be of instrumental origin. Knowing the accidental rate, we can estimate the significance of unshifted coincidences. Those with the highest significance are examined as candidate events and, in the absence of a detection, used to calculate astrophysical upper limits [@LIGOS3S4all; @Collaboration:2009tt; @Abbott:2009qj].
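The matched-filter step described above can be sketched in a few lines. The following is a minimal illustration with a flat (white) noise spectrum, not the production pipeline; the function names, the normalization convention, and the use of FFT masking are our own simplifications.

```python
import numpy as np

def matched_filter_snr(data, template, psd):
    """Frequency-domain matched filter (minimal sketch, not the LIGO
    pipeline).  `psd` is the noise power spectral density sampled at
    the rfft frequencies; the normalization here is illustrative."""
    n = len(data)
    data_f = np.fft.rfft(data)
    temp_f = np.fft.rfft(template, n)
    # Noise-weighted cross-correlation of the data with the template,
    # evaluated at every possible arrival time via the inverse FFT.
    corr = np.fft.irfft(data_f * np.conj(temp_f) / psd, n)
    # Template normalization <h|h>, so the output is in SNR-like units.
    sigma_sq = np.sum(np.abs(temp_f) ** 2 / psd) / n
    return np.abs(corr) / np.sqrt(sigma_sq)

def triggers_above_threshold(snr, rho_star=5.5):
    """Sample indices where rho crosses the search threshold rho_*."""
    return np.flatnonzero(snr >= rho_star)
```

Filtering a data stream containing a loud embedded waveform with its own template produces a sharp peak in the returned $\rho$ time series at the signal's location, with every sample above `rho_star` a candidate trigger before clustering.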
Transients of non-astrophysical origin, as described in Section 2, often produce triggers of large $\rho$, as there is significant power in these transients [@LIGOS3S4Tuning]. Even with the signal consistency checks, disturbances of non-astrophysical origin increase the false alarm rate in the time-slides, as well as producing accidental coincidences in the unshifted data. This reduces the significance of the loudest events that are not caused by these transients: the rate of coincidences in the time-slides is increased, and thus the measured false alarm rate of the events in unshifted time is elevated. Good gravitational wave candidates are thereby “buried,” since coincidences due to transient detector noise can produce significant outliers. In order to reduce these effects, we have learned to define time intervals within which triggers should not be trusted. These are called [*vetoes*]{}.
The CBC searches use banks of templates, each starting from a low frequency cutoff of 40 Hz (defined by the detector sensitivity) and increasing in amplitude and frequency until the coalescence time of the represented system. These templates have durations of up to 44 seconds for the lowest mass templates (binary of 1 solar mass objects).
Figure \[fig:twoTemplates\] depicts the filtering of an impulse in Gaussian noise, colored to match the LIGO noise spectrum, using the CBC search for a single template. We show the result for two different templates, the top panel consisting of a binary of 1 solar mass objects, and the bottom panel a binary of 8 solar mass objects. The dashed line is the search threshold $\rho_{*}$ of 5.5, and the circles indicate the positions of the resulting triggers. The response of the matched filter search to loud impulsive transients in the data is complicated, with multiple simultaneous effects arising from both the data and the search code. While the $\rho$ time series of both templates have a clear peak near the time of the impulse, the peaks do not perfectly overlap. The lower mass template has a long tail down from the peak, and there is a plateau of high $\rho$ extending 8 seconds before and after the time of the impulse. Additionally, while filtering with both templates results in a trigger with a large $\rho$ value, the higher mass template results in many more triggers. While many of the triggers shown would fail the $\chi^2$ test mentioned above, any that even marginally pass this test will increase the rate of accidental coincidences in the time-slides, adversely affecting the significance calculation of unshifted coincidences.
The difference in time and $\rho$ of the peak occurs because $\rho$ is recorded at the coalescence time of the matching template, and frequencies in each template are weighted by the frequency dependent noise spectrum of the detector. The time of the top panel trigger is 0.66 seconds after that of the bottom panel, whereas their $\rho$ values are 5000 and 34000, respectively. For a broadband transient, the time between when the transient occurs and when the coalescence time is recorded is determined by the time remaining in the waveform after its frequency content matches the sensitive frequency band of the detector (40 Hz to 1 kHz). No transient is a perfect delta function, and there are many transient types that have different timescales and frequency content. The $\rho$ tail visible in the top panel of Figure \[fig:twoTemplates\] occurs due to the aforementioned 44 second template duration. When any part of the template is matched against the impulse, the $\rho$ is significantly above threshold. Rather than record a trigger for all times with $\rho \geq \rho_{*}$, a trigger is only recorded if there is no larger value of $\rho$ within one template duration of its coalescence time. The higher mass template therefore results in more triggers because its duration is of order 3 seconds, while the plateau is 16 seconds long. The plateau is not caused by any attribute of the waveform, nor by the data adjacent to the impulse, but rather is a phenomenon intrinsic to the method used to estimate the power spectral density of the data. This occurs in the presence of impulsive transients in the data, as described in sections 4.6 and 4.7 of Ref. [@findchirp]. Vetoes of impulsive transients for CBC searches must include this time.
In cases of extraordinarily powerful transients, we observe a second trigger slightly more than one template duration from the trigger at the peak of the $\rho$ timeseries, as is the case in the top panel of Figure \[fig:twoTemplates\]. Because the actual search is performed using a bank of thousands of templates of different durations, a significant number of triggers multiple seconds away from the impulse are recorded, as can be observed for the triggers with $\rho \sim100$ to the right of the peak $\rho$ triggers in Figure \[fig:H2trig\].
For all these reasons, intervals containing transients of non-astrophysical origin often must be padded with extra duration to make them into effective vetoes for the CBC searches. This is done by examining the falloff of the triggers in $\rho$ before and after the transients in vetoed times, in order to include those triggers associated with the transient while working to minimize the deadtime by not padding more than necessary. For the S5 searches, this was determined by examining plots of the maximum and median amplitude transients, as measured by the peak trigger $\rho$. Efforts to automate this decision process are ongoing, as discussed in Section 5.2.
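The padding step can be sketched as follows. The `pad_and_merge` helper and the 1 s / 4 s default values are illustrative only; in practice each veto's padding is tuned from the observed falloff of trigger $\rho$ around the flagged intervals.

```python
def pad_and_merge(intervals, pad_before=1.0, pad_after=4.0):
    """Pad each (start, end) veto interval and merge any overlaps.
    The 1 s / 4 s defaults are placeholders; the actual padding is
    chosen per veto from the trigger falloff in rho."""
    padded = sorted((s - pad_before, e + pad_after) for s, e in intervals)
    merged = [padded[0]]
    for s, e in padded[1:]:
        last_s, last_e = merged[-1]
        if s <= last_e:
            # Overlapping or touching intervals: extend the last one.
            merged[-1] = (last_s, max(last_e, e))
        else:
            merged.append((s, e))
    return merged
```

Merging after padding matters: two nearby flagged transients whose padded windows overlap should be counted as a single stretch of deadtime, not double-counted.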
Prior to S5, all CBC searches used a single set of veto definitions [@Chatterji:2004qg; @Vetoes]. During S5, CBC searches extended over a broad range of component masses. The wide distribution in the template durations of the waveforms caused the triggers associated with transients in the data to appear at different times, and be sensitive to different frequency ranges. Thus veto window paddings must be defined based on template waveforms included in each search, which are determined by the possible component masses of the binaries.
Established Veto Techniques
===========================
In this section we discuss techniques for using the aforementioned detector characterization work to create vetoes for matched filter CBC searches. We illustrate our methods with examples from CBC searches in the first year of LIGO’s fifth science run, on which they were developed and implemented [@Collaboration:2009tt; @Abbott:2009qj]. This work was done simultaneously and in consultation with the veto efforts for the searches for unmodeled gravitational wave bursts [@Abbott:2009zi]. The veto techniques discussed in this section are implemented by creating lists of times during which triggers from a search are suspected of originating not from gravitational radiation, but from instrumental or environmental disturbances.
When deciding to use sets of time intervals as vetoes, we need to include all of the bad times of the interferometers, and as little of the surrounding science mode as possible. Since not all vetoes are well understood, we need to create figures of merit, or metrics, to evaluate the effectiveness of the vetoes. These veto intervals are derived both from data quality flags (Section 4.3) and disturbances in auxiliary channels (Section 4.4). We then classify these vetoes into categories (Section 5).
The [*safety*]{} of the veto intervals must also be ensured. A veto is unsafe if it could be triggered by a true gravitational wave. In order to ensure that our instrumental vetoes are safe, we investigate their correlation with [*hardware injections*]{}, intentional transients introduced into the interferometer in order to properly tune the various searches in LIGO for gravitational wave signals. The signals are injected directly into the gravitational wave channel itself, the differential arm length servo, to simulate the effect that a gravitational wave would have on the detector. Since these hardware injections are intentional and controlled, they exist entirely within known time intervals.
Veto metrics
------------
Veto metrics were developed on single-interferometer triggers, from CBC searches in the first year of LIGO’s S5 science run, as well as previous runs. For this purpose, the triggers were clustered by keeping a trigger only if there was no trigger with a larger value of $\rho$ within 10 seconds. With this clustering, all the triggers from a single loud transient occurred in one, or at most two, clusters. This had the effect of making the figures of merit independent of the number of waveforms in the template bank of the search. Different searches may still use different clustering times, based on the different waveform durations. A minimum $\rho$ of 8 for the clustered triggers was chosen in order to be sensitive to glitches that produce loud triggers from the template bank. This threshold applies to the clusters used to measure the metrics, but the resulting vetoes are applied to all triggers falling in vetoed times.
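The clustering rule above can be sketched directly. This `cluster_triggers` helper is our own illustration (a quadratic loop kept simple for clarity), not the pipeline implementation.

```python
def cluster_triggers(triggers, window=10.0):
    """Keep a trigger only if no trigger with a larger rho lies
    within `window` seconds of it -- the 10 s clustering described
    in the text.  `triggers` is a list of (time, rho) pairs."""
    clustered = []
    for t, rho in triggers:
        louder_nearby = any(abs(t - t2) <= window and rho2 > rho
                            for t2, rho2 in triggers)
        if not louder_nearby:
            clustered.append((t, rho))
    return clustered
```

A burst of raw triggers spanning a few seconds thus collapses to the single loudest one, so the metrics below count transients rather than template-bank multiplicity.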
The percentage of triggers vetoed defines the [*efficiency*]{} of the veto $E = \frac{N_{vt}}{N_t}\cdot100\%$, where $N_{vt}$ is the number of clustered triggers vetoed and $N_t$ is the total number of clustered triggers. If all outliers came from a single source, then the ideal veto would have $100\%$ efficiency, especially at large $\rho$. In reality, there are many different sources of transient noise, as detailed in Section 2, and each may be responsible for only a few percent of the clustered triggers, and only in some specific range of $\rho$ values. The efficiency quantifies the effectiveness of the veto for removing clusters, but more information is required to determine whether this removal is warranted. Because the clustering is by loudest $\rho$, veto efficiency as a function of $\rho$ can be used to learn more about what population of clusters correspond to the transient event being vetoed.
To determine the statistical significance of the efficiency, we need to compare it with the [*deadtime*]{}. The percentage of science mode contained in a set of veto intervals defines the deadtime $D = \frac{T_v}{T}\cdot100\%$, where $T_v$ is the time vetoed and $T$ is the total science mode time, including the vetoed time. If the veto only includes truly bad times, then by vetoing triggers within these times we are not reducing our chance of detection, as the noise transients already polluted this data. In practice, the integer second duration of data quality flags, as well as the need for padded veto windows due to the nature of the CBC search (described in Section 4.2), limits how small the deadtime can be for veto intervals that remove common transient noises. It also adds to the probability that a true gravitational wave event, occurring when the detectors are in science mode, will be missed if these times are vetoed. Most vetoes have deadtimes of at least several tenths of a percent of the science mode time, although less understood or longer duration disturbances may lead to vetoes with larger deadtime percentages.
The ratio of the efficiency to the deadtime is important in determining which vetoes are effective. Effective vetoes have a deadtime small compared to the efficiency, indicating that many more clusters are vetoed than one would expect by random chance. This ratio is near unity for ineffective vetoes, and large for effective ones, and can be expressed as $$\label{eqn:EffDtRatio}
R_{ED} = \frac{E}{D} = \frac{N_{vt} \cdot T}{N_t \cdot T_v}.$$
The percentage of veto intervals that contain at least one clustered trigger defines the [*used percentage*]{} such that $U = \frac{N_{wt}}{N_w}\cdot100\%$ where $N_{wt}$ is the number of veto windows that contain at least one cluster and $N_w$ is the total number of windows. For an ideal veto, every vetoed interval should contain at least one cluster, corresponding to one or more loud transient noises. For vetoes with short time spans compared to the clustering time, it is more common to obtain values of the used percentage of less than 100%, even for effective vetoes. The statistical significance of this metric is assessed by comparing the used percentage for a veto to that expected if its intervals are uncorrelated with the triggers. This expected used percentage is obtained by multiplying the length of a given veto interval $T_w$ by the average trigger rate, given by dividing the number of triggers by the available science mode time. The ratio behaves similarly to $R_{ED}$, and can be expressed as $$\label{eqn:UsedPerRatio}
R_U = \frac{U}{T_w\frac{N_t}{T}\cdot 100\%} = \frac{N_{wt} \cdot T}{N_w \cdot T_w \cdot N_t} = \frac{N_{wt} \cdot T}{N_t \cdot T_v}.$$ In the S5 run, there was a clustered trigger on average every 6 to 17 minutes, depending on interferometer. Effective vetoes have an expected used percentage small compared to the actual used percentage, indicating more intervals contain clusters than one would expect by random chance.
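The figures of merit defined above can be collected into one small routine. This `veto_metrics` function is a sketch under the simplifying assumption of equal-length veto intervals (so that $T_v = N_w T_w$, as in Eqn. \[eqn:UsedPerRatio\]).

```python
def veto_metrics(n_vetoed, n_total, t_vetoed, t_science,
                 n_windows_used, n_windows, t_window):
    """Veto figures of merit from the text (sketch).  Times are in
    seconds; `t_window` is the duration of one veto interval, so
    equal-length intervals are assumed."""
    E = 100.0 * n_vetoed / n_total            # efficiency (%)
    D = 100.0 * t_vetoed / t_science          # deadtime (%)
    U = 100.0 * n_windows_used / n_windows    # used percentage (%)
    # Used percentage expected if the intervals were uncorrelated with
    # the triggers: interval length times the mean trigger rate.
    U_expected = 100.0 * t_window * n_total / t_science
    return {"E": E, "D": D, "U": U,
            "R_ED": E / D, "R_U": U / U_expected}
```

For example, a veto with 10 of 1000 clusters removed in 100 s of a $10^6$ s run, using 8 of its 10 ten-second windows, gives $E = 1\%$, $D = 0.01\%$, $R_{ED} = 100$, $U = 80\%$, and $R_U = 80$.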
To evaluate the safety of each veto, the percentage of hardware injections that are vetoed is compared to the veto deadtime. If veto intervals are correlated with the hardware injections, as indicated by an $R_{ED}$ for injections significantly greater than 1, the veto could be generated by an actual gravitational wave signal. Such a veto is therefore “unsafe” and is not used. In S5, none of the data quality flags considered, and only one auxiliary channel, was found to be unsafe.
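The safety check amounts to computing $R_{ED}$ with injections in place of triggers. The sketch below is our own; in particular `max_ratio` is an illustrative cutoff for “significantly greater than 1”, not the criterion used in S5.

```python
def injection_safety(n_inj_vetoed, n_inj, deadtime_pct, max_ratio=2.0):
    """Compare the percentage of hardware injections vetoed with the
    veto deadtime (a sketch).  `max_ratio` is a hypothetical cutoff
    for 'significantly greater than 1'.  Returns (is_safe, ratio)."""
    inj_pct = 100.0 * n_inj_vetoed / n_inj       # injections vetoed (%)
    ratio = inj_pct / deadtime_pct               # R_ED for injections
    return ratio <= max_ratio, ratio
```

A veto with 0.5% deadtime that removes 10% of injections has an injection $R_{ED}$ of 20 and would be discarded as unsafe.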
Vetoes from data quality flags
------------------------------
![A pair of overflows in the length sensing and control system of H2. At left, a time-frequency representation [@Chatterji:2004qg]. At right, the effect of the transient on the production of unclustered triggers for the CBC search with total mass between 2 and 35 solar masses (dots), the 10 second clusters of these raw triggers (circle), the original Data Quality flags (dashed lines), and the expanded data quality veto after duration paddings are applied (solid lines). []{data-label="fig:H2trig"}](figure2a.eps "fig:"){width="45.00000%"}![A pair of overflows in the length sensing and control system of H2. At left, a time-frequency representation [@Chatterji:2004qg]. At right, the effect of the transient on the production of unclustered triggers for the CBC search with total mass between 2 and 35 solar masses (dots), the 10 second clusters of these raw triggers (circle), the original Data Quality flags (dashed lines), and the expanded data quality veto after duration paddings are applied (solid lines). []{data-label="fig:H2trig"}](figure2b.eps "fig:"){width="45.00000%"}
We apply our veto metrics and window paddings to data quality flags created by the Detector Characterization and Glitch groups within the LSC, as mentioned in Section 2. A concrete example of a veto based on a data quality flag marking instrumental transients can be seen by examining the case of identified intervals containing an overflow in the length sensing and control loops for the H2 interferometer. These overflows cause severe glitches, and are identified within a second of their occurrence. The overflows themselves are caused by other disturbances to the control systems such as seismic motion, but irrespective of the physical origin, the overflow itself produces glitches in the gravitational wave channel. Figure \[fig:H2trig\] shows a time-frequency representation [@Chatterji:2004qg] of the gravitational wave channel, as well as a plot of the unclustered triggers as a function of time, for the CBC search with total mass between 2 and 35 solar masses, around two typical transients caused by a type of overflow. The data quality flag intervals start and stop on GPS seconds, and have a minimum duration of two seconds, centered around the times of the overflows, to ensure the glitches are not too close to the edges of the intervals. The loudest raw triggers occur near the transients, corresponding to the clustered trigger. Raw triggers subsequently fall off in $\rho$ over the next several seconds after the data quality interval.
As shown in Figure \[fig:H2eff\], this flag has a used percentage of 62% for a typical month during the first year of S5, indicating that these veto segments are well suited to vetoing triggers from transients. These data quality veto segments have efficiency on all triggers of 1.4%, and deadtime of 0.0037%. The ratio of the efficiency to the deadtime is more than 300. This indicates a veto with a statistically significant correlation to the triggers, as the expectation for random chance would be a ratio of 1. The efficiency is strongly dependent on the $\rho$ of the clustered triggers; for clusters with $\rho \geq 50$ it is 14%, while for clusters with $\rho \geq 1000$, the efficiency is 64%. Two thirds of the loudest triggers found from the H2 interferometer in this month were due to overflows.
To attempt to account for as many triggers as possible associated with this transient, the veto interval is given 1 second of padding prior to the data quality interval, and 4 seconds after. This alters the metrics, leading to a deadtime of 0.013%, an efficiency for all clusters of 1.7%, a used percentage of 78%, and an efficiency to deadtime ratio of 130. The expected used percentage from the trigger rate was only 0.58%, giving $R_U$ of slightly over 130. For veto intervals with duration equal to or less than the clustering time, only one cluster can be vetoed per interval, thus the number of veto windows used $N_{wt}$ approaches the number of triggers vetoed $N_{vt}$. Comparing Eqns \[eqn:EffDtRatio\] and \[eqn:UsedPerRatio\], this means we expect $R_{ED} \approx R_U$, as we see in this example with a value of 130.
For such loud transients, we expect all veto intervals to be used, yet even for the loudest transient classes tens of percent of the intervals contain no cluster, as is the case with the aforementioned overflow flags. Of the 22% of the overflow veto intervals that are unused, 20% are within a clustering time of a clustered trigger. These intervals, therefore, may well have many raw triggers, as the overflow on the lefthand side of Figure \[fig:H2trig\] did, but only the overflow with the loudest raw trigger was marked by a clustered trigger. Future efforts to automatically classify the effectiveness of vetoes (Section 5.1) would likely be sensitive to the anomalously low used percentages mentioned above, but for future searches the problem is significantly mitigated by employing a clustering window of 4 seconds rather than 10 seconds.
The efficiency versus $\rho$ of the veto interval is shown in Figure \[fig:H2eff\]. This rises rapidly with the minimum $\rho$ of the clustered triggers, and the efficiency to deadtime ratio reaches 1300 for clusters with $\rho$ above 50 and over 5000 for clusters with $\rho$ above 500. Efficiency and used percentage would be independent of $\rho$ if the times vetoed were random and uncorrelated with transient noises.
![The left plot shows the efficiency and used percentage as a function of a minimum threshold single-interferometer cluster $\rho$ for the H2 Length Sensing and Control Overflow veto. The dashed lines are the values for the data quality flag before window paddings are added. The solid lines are the values after windows are added to veto the associated triggers. The right plot shows a log-log histogram for the same veto. All clusters found in science mode are shown in solid lines and vetoed clusters are shown in dashed lines. []{data-label="fig:H2eff"}](figure3a.eps "fig:"){width="50.00000%"}![The left plot shows the efficiency and used percentage as a function of a minimum threshold single-interferometer cluster $\rho$ for the H2 Length Sensing and Control Overflow veto. The dashed lines are the values for the data quality flag before window paddings are added. The solid lines are the values after windows are added to veto the associated triggers. The right plot shows a log-log histogram for the same veto. All clusters found in science mode are shown in solid lines and vetoed clusters are shown in dashed lines. []{data-label="fig:H2eff"}](figure3b.eps "fig:"){width="50.00000%"}
An example of a data quality flag for an environmental transient that we used to make a veto was the flag marking times of elevated seismic motion due to trains passing through Livingston. Early in S5, investigations of loud noise transients in L1 indicated that many such transients occurred in the minutes preceding loss of interferometer lock due to the passage of trains near the detector. At each LIGO site, seismometers are located in each major building. Since the trains pass closest to the end of the “Y” arm, the seismometer located there is most sensitive to the trains. Specifically, the train-induced seismic motion was most pronounced along the direction of the arm in the 1-3 Hz frequency band. Seismic disturbances due to trains were visible upon examining the one-minute 1-3 Hz band limited root mean square (BLRMS) values of the aforementioned seismometer channel. By setting a minimum threshold of $0.75\,\mu m/s$ on the 1-3 Hz BLRMS of that channel, the two to three trains per day that passed the interferometer were identified. Studies of the seismic motion induced by these trains compared with single interferometer online glitch monitoring codes [@Blackburn:2008ah] showed correlation for up to a minute before and after the minute of peak seismic amplitude for each train. Data quality flag vetoes were defined to mark these times.
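A minimal sketch of the BLRMS computation behind this flag is shown below; the FFT-based band selection, the sample rate used in the test, and the function name are our own illustrative choices, not the actual LIGO data monitoring tool:

```python
import numpy as np

def blrms_flags(velocity, fs, f_lo=1.0, f_hi=3.0,
                chunk_s=60, threshold=0.75):
    """Flag minutes whose 1-3 Hz band-limited RMS exceeds `threshold`.

    velocity -- seismometer velocity time series, in um/s
    fs       -- sample rate in Hz
    Returns one boolean per one-minute chunk.
    """
    n = int(chunk_s * fs)
    flags = []
    for i in range(0, len(velocity) - n + 1, n):
        chunk = velocity[i:i + n]
        spec = np.fft.rfft(chunk - chunk.mean())
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        # Parseval's theorem: band-limited RMS from the power
        # in the selected frequency bins (DC and Nyquist excluded)
        blrms = np.sqrt(2.0 * np.sum(np.abs(spec[band]) ** 2) / n ** 2)
        flags.append(blrms > threshold)
    return np.array(flags)
```

In practice the flagged minutes would then be widened by about a minute on each side, following the correlation study described above.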
For this example, our metrics then yielded a deadtime of 0.69 %, an efficiency of 2 % for all clusters and 20 % for clusters with $\rho_{thresh} \geq$ 100, and used percentages of 60 % and 32 % for each threshold respectively. $R_{ED}$ therefore increases from 3 to 30 with increasing $\rho_{thresh}$. $R_U$ for the same values of $\rho_{thresh}$ increases from 0.78 to 16. This veto is effective at eliminating a population of significant glitches, though not as loud or common as those from the overflows mentioned earlier. Techniques for combining the information from these data quality vetoes are discussed in Section 4.5.
Auxiliary channel used percentage veto
--------------------------------------
Even after taking all DQ flags into account, the number of triggers left unflagged is still greatly in excess of the number expected from random noise. A complementary approach is possible, using the KW triggers generated on important auxiliary channels.
The KW based veto method developed in S5 is similar to other KW based vetoes implemented in LIGO’s S1, S2, S3 and S4 CBC searches [@Vetoes; @vetoGWDAW03]. Comparisons were made between KW triggers in interferometer and environmental channels and the clustered CBC triggers. The KW significance can be used in the same way that $\rho$ is for CBC searches.
For each channel, the threshold on KW significance was initially chosen to be above the exponentially distributed background triggers expected from Gaussian noise, but low enough to catch noise transients. This threshold was then incremented until a used percentage for the veto of at least 50% was achieved, ensuring that the times identified were likely to contain transients. Veto intervals were then generated by taking intervals $\pm 1\,s$ from the KW trigger times, rounding away from the trigger to create intervals of $3\,s$ total duration. Only channels that achieved a used percentage of 50% were considered for veto use.
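The tuning loop described here can be sketched as follows; the starting threshold, step size, and function names are illustrative assumptions, since the actual tuning used search-specific values:

```python
import math

def interval_from_kw(t):
    """±1 s around the KW trigger time, rounded away from the trigger
    onto integer-second boundaries, giving a 3 s total interval."""
    return (math.floor(t - 1.0), math.floor(t + 1.0) + 1)

def used_percentage(intervals, cluster_times):
    """Fraction of intervals containing at least one clustered trigger."""
    used = sum(any(s <= c <= e for c in cluster_times) for s, e in intervals)
    return used / len(intervals) if intervals else 0.0

def tune_threshold(kw_triggers, cluster_times,
                   start=15.0, step=5.0, target=0.5, max_sig=1e4):
    """Raise the KW significance threshold until at least half of the
    resulting veto intervals contain a clustered CBC trigger.

    kw_triggers -- list of (time, significance) pairs for one channel
    Returns (threshold, intervals), or None if no threshold qualifies.
    """
    sig = start
    while sig < max_sig:
        ivs = [interval_from_kw(t) for t, s in kw_triggers if s >= sig]
        if not ivs:
            return None
        if used_percentage(ivs, cluster_times) >= target:
            return sig, ivs
        sig += step
    return None
```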
In the tuning of the veto, already diagnosed problematic times, as described in Sections 5.1 through 5.3, were excluded from consideration. Once each veto was defined, a list of time intervals to be excluded was created for all S5 science mode data with the tuned parameters. The safety of each veto channel was determined using hardware injections, similar to the method used for data quality flags; those channels that produced a statistically significant overlap with hardware injection times were not implemented as vetoes. While the vetoes were defined via comparison with the clustered inspiral triggers, the vetoes required padding to ensure that the unclustered triggers associated with each transient were also vetoed. For channels that triggered coincident with large amplitude transients in the gravitational wave channel, we added $7\,s$ veto window paddings to the beginning (due to the plateau effect, Section 3) and to the end of the initial $3\,s$ intervals. For each CBC search, these KW based used percentage vetoes were tuned and defined with respect to the single interferometer triggers from the specific analysis.
In the first year of S5 there were a number of critical veto channels found in this manner. An important veto associated with an interferometer control channel was the feedback loop that keeps the H1 recycling cavity resonant. A veto based on environmental monitors came from the magnetometers located at the end of the Y-arm for L1.
Veto categorization
===================
The goal of using vetoes is to reduce the false alarm rate, in order to more accurately assess whether gravitational wave candidates are true detections. Upon evaluating the available vetoes, we found that they do not all perform similarly, and divided them into categories. Well understood vetoes have a low probability of accidentally vetoing gravitational waves, and significantly reduce the background. More poorly understood vetoes can also reduce the background significantly, but with an increased chance of falsely dismissing actual gravitational waves. We classified the vetoes into categories in order to allow searches to choose between using only the well established vetoes or aggressively using more poorly understood vetoes.
Those vetoes classified as well understood almost always had higher $R_{ED}$ and $R_{U}$, as well as lower overall deadtime, than the less understood transients that correspondingly had less effective, longer intervals with poorer ratios. This was because when the mechanism behind a transient was well understood, such as an overflow in the digital control channels of the interferometer, it was easier to identify the specific times at which these transients occurred. In some cases, however, well understood vetoes may encompass too little time for meaningful statistics, given the small numbers involved; these can still be categorized as well understood, provided sufficient evidence for coupling is present. Conversely, when only the general cause was known, as in the case of transients related to passing trains at the Livingston site, long intervals of time when these conditions were present needed to be vetoed in order to capture the related transients, despite the short duration of each particular transient. In this latter case, a statistical argument based on the veto metrics was required to prove the utility of a set of veto intervals.
The idea of this categorization scheme was to allow the followup of candidate triggers after applying sequentially each category of vetoes with the consequently lowered background false alarm rate, in order to search for detections [@Collaboration:2009tt; @Abbott:2009qj]. In the CBC searches in LIGO’s S5 science run, we decided on four categories in descending order of understanding of the problems involved:
Category 1
----------
The first category includes vetoed times when the detectors were not taking data in the design configuration. A fundamental list of science mode times is compiled for each interferometer, and only the data in these times are analyzed. These times are logged automatically by the detectors with high reliability, though on rare occasions DQ flags need to be generated after the fact to mark non-science mode data mistakenly labeled as science quality. These vetoes are the same for all searches, and do not need to be padded with extra windows, as the data are not analyzed.
Category 2
----------
The second category contains well understood vetoes with well tuned time intervals, low deadtimes and a firm model for the coupling into the gravitational wave channel. For many transients, this results in a high efficiency, particularly at high $\rho$, though this is not necessarily the determining factor in categorization. A well understood noise coupling into the gravitational wave channel may consistently produce triggers of moderate amplitude, or at a lower rate than more common transients. These are still considered to be of category 2 if several conditions are met. The ratios $R_{ED}$ and $R_U$ should be statistically significant, of order 10 or higher, for all clusters above some $\rho$ threshold characteristic of the transients.
![ A category 2 data quality veto for a month containing glitches in the TCS lasers of H2. At right, a log-log histogram of single interferometer clusters for the CBC search with total mass between 2 and 35 solar masses, with all clusters in blue and vetoed clusters in red. At left, the effect of the transient on the production of unclustered triggers (dots), the 10 second clusters of these raw triggers (circle), the original Data Quality flag (dashed lines), and the expanded data quality veto after the window paddings are applied (solid lines). []{data-label="fig:H2TCStrig"}](figure4a.eps "fig:"){width="50.00000%"}![ A category 2 data quality veto for a month containing glitches in the TCS lasers of H2. At right, a log-log histogram of single interferometer clusters for the CBC search with total mass between 2 and 35 solar masses, with all clusters in blue and vetoed clusters in red. At left, the effect of the transient on the production of unclustered triggers (dots), the 10 second clusters of these raw triggers (circle), the original Data Quality flag (dashed lines), and the expanded data quality veto after the window paddings are applied (solid lines). []{data-label="fig:H2TCStrig"}](figure4b.eps "fig:"){width="50.00000%"}
One example of a category 2 veto is the overflow veto mentioned in Section 4.3. As is clear from the right hand plot in Figure \[fig:H2eff\], these veto intervals include most of the loudest clusters. Not all category 2 vetoes need to have this level of efficiency, or any efficiency at all at the most extreme $\rho$. For instance, we also used vetoes based on data quality flags for glitches in the lasers for the thermal compensation system (TCS). TCS heats the mirrors in order to offset changes in curvature due to heating by the main laser. These flags, with 4 seconds added to the end of the original 2 second intervals, had a $R_{ED}$ ratio of nearly 500 for triggers above $\rho$ of 20, but zero efficiency above $\rho$ of 40. The $R_U$ for this veto was over 100. As is clear in Figure \[fig:H2TCStrig\], there is a population of clusters with $\rho$ from 20 to 40 that correspond to the transients from TCS glitches in this particular search.
Category 3
----------
The third category contains vetoes which were significantly correlated with transients, but with less understanding of the exact coupling mechanism, and thus often poorer performance in the metrics of deadtime and used percentage than category 2 vetoes. There are many sources of transient noises whose coupling is only partly understood. Site-wide events of significant duration, such as heavy winds or elevated seismic motion, intermittently lead to loud transient noises. The auxiliary channel vetoes discussed in Section 4.3, because of their statistical nature, also fall into this category. Category 3 vetoes also include the minutes immediately preceding the loss of lock of the interferometer, when the triggers were likely due to the same instabilities that contributed to the lock loss. These vetoes, based more on the probability of transients than a direct measurement, tend to have lower used percentages, higher deadtimes, and therefore smaller ratios between the efficiency and deadtime.
The train data quality flag veto mentioned in Section 4.2 was in this category, for while the trains themselves were well understood, the nonlinear coupling that created sporadic high frequency glitches was not. As a result, large veto windows had to be defined by the presence of heightened ground motion alone, rather than by targeting the individual noise transients.
![ A category 3 data quality veto for a month of high winds at Hanford. A log-log histogram of single interferometer clusters, total in solid lines and vetoed in dashed lines. []{data-label="fig:H2Wind trig"}](figure5.eps){width="50.00000%"}
Another example was a data quality veto for winds above 30 mph at the Hanford site. This veto had a $R_{ED}$ of 17, and a $R_U$ of 31. While this is significant, it is less than the typical values of category 2 vetoes.
By definition, the auxiliary channel vetoes have a large used percentage, always greater than 50%. However, they are classified as category 3 because the coupling is not well understood. An example of such a veto made using the above technique based on an interferometer control channel is the veto made from the H1 feedback loop that keeps the recycling cavity resonant. This veto had an $R_{ED}$ of 20. Another veto, this time based on an environmental channel, used the magnetometers located at the end of the L1 Y-arm, and had an $R_{ED}$ of 10.
Category 4
----------
The fourth category contains vetoes with low statistical significance, often with high deadtimes. The used percentages are often near 100%, but this is representative of the long intervals of science mode time flagged, and thus the high probability that at least one cluster will be within the time defined (as mentioned earlier, the average clustered trigger rate was of order one per 10 minutes). Seismic flags with lower thresholds, aircraft passing within miles of the detectors, and problems recorded in the electronic logbooks at the detectors all fall within this category. These long intervals are not used as vetoes for searches, but rather are identified for the purposes of providing input to the follow up of gravitational wave candidates, when all possible factors that prompted the creation of these flags at the time of the detection candidate are scrutinized.
Examining candidate events after vetoing times
----------------------------------------------
![This diagram schematically represents the anticipated effect of vetoes on the significance of candidate events. The solid lines represent the estimated background coincidence rate from timeslides before and after vetoes. The circles denote the number of foreground coincidences with $\rho^2$ equal to or greater than the x axis value. For the purpose of the discussion in Section 5.5, the points A, B, C, and D denote hypothetical detection candidates. []{data-label="fig:coincidentTriggers"}](figure6.eps){width="75.00000%"}
As mentioned earlier, candidate coincident events that occur during the times of category 2, 3, and 4 vetoes are not automatically discarded. The total deadtime of category 2 vetoes for the low mass CBC search, for example, was of order 1%. Category 3 had a deadtime of order 5%, and category 4 many times that. As the search reported no detections, it included a calculation of the upper limit on the number of compact binary coalescences. The first three categories, including both data quality flag vetoes and auxiliary channel used percentage vetoes, were applied in this example search before calculating the upper limit, as they reduced the false alarm rate from these transients. Because the total veto deadtime of the applied categories was between 5% and 10% per interferometer, the probability that a true gravitational wave could be in a vetoed interval is significant.
The decisions on which vetoes to use are made prior to examination of candidates. While the veto choices were tuned on single interferometer triggers, the end product of the CBC searches are detection candidates found in coincidence in the data from multiple interferometers. In the rest of this section, we will discuss the effect that the veto categories have on coincident CBC searches.
While we are unwilling to precipitously remove all candidates in vetoed times from consideration, it is imperative to reduce the rate of accidental coincidences from the noise transients that have been identified. This can be done by examining all significant candidates with respect to the background present after each category is consecutively applied. If a candidate is not vetoed by successive veto categories, it becomes more significant, as more of the background false alarm candidates against which it is compared are removed. If a candidate is in vetoed time for a given category, it is not completely ruled out as a detection, but further investigation would be required to show such a candidate is not an artifact due to the disturbance that triggered the veto. Candidates that are vetoed by lower numbered categories are more suspect, given the firmer understanding of the vetoes that populate the first two categories. A true gravitational wave is not impossible, as a sufficiently nearby (within the Milky Way) binary system coalescence could theoretically overflow the feedback control systems. It would be apparent, however, in follow up investigations as the spectrogram for the data would show a large amplitude chirp signature leading up to the overflow, coherently between multiple detectors.
The diagram in Figure \[fig:coincidentTriggers\] illustrates the effect of vetoes that we anticipate on the significance of detection candidates. Superimposed are four hypothetical candidates with the labels A, B, C, and D. For illustration only, let us assume that there is a gravitational wave candidate at one of these points.
If the candidate is at point A, it is visible above background and significant before any vetoes are applied. If A is not vetoed after subsequent veto categories are applied, it will be the loudest candidate and significantly above the rest of the distribution, and thus a strong gravitational wave candidate. If A is vetoed, it would be plausible to believe it could still be significant with strong evidence that it did not originate from the same problems used to define the veto, as in the hypothetical example of the galactic coalescence mentioned above. Even if A is recovered, it would be compared to the background estimation before vetoes are applied, and thus have a lower significance than had it survived the vetoes originally.
If the candidate is at point B, it is visible above background, though not as significant as if it were at point A. If it is not vetoed, then B is a good candidate, and having cleared away the understood accidental triggers from transient noises, it can be followed up in depth. The reason for defining vetoes is precisely to uncover these candidates, which would be buried in the background otherwise. If it is vetoed, it is again necessary to confirm that the data artifacts prompting the veto intervals are not responsible for the candidate.
If the candidate is at point C, it is likely among the triggers with the largest $\rho^2$ before the vetoes are applied. Surviving the vetoes improves its ranking, reducing the background of triggers with equal or lower $\rho^2$. If it is vetoed, it is a problematic candidate given the population of spurious triggers that it sits in. If it survives the vetoes, it is still only a marginal candidate. Follow up analysis of the highest $\rho^2$ triggers will likely uncover reasons to distrust the surrounding loud candidates, but that is not enough to make C into a strong candidate. Additional veto definitions and revisions would make a candidate at C somewhat more significant, providing it is not vetoed, plausibly making it significant.
A candidate at point D is within the accidental population of triggers after vetoes are applied, with tens of triggers surrounding it with similar $\rho^2$. Such candidates are not detectable without additional reduction of the background. Additional veto definitions and revisions might make a candidate at D marginally more significant, but it is not at all likely to become detectable through veto efforts.
Results of veto efforts in S5
-----------------------------
In S5, hundreds of data quality flags and auxiliary channels were evaluated as mentioned above in Sections 4 and 5. The resulting veto metrics and categorization for each of the vetoes used in the searches were archived in a technical document [@S5VetoTechNote].
Proposed Veto Techniques
========================
The techniques mentioned in Section 4 were by and large refined months after the data were recorded. Additionally, the decisions on categorization, veto window padding, and the utility of auxiliary channels for used percentage vetoes were determined largely by individual human examination of the behavior of each data quality flag and auxiliary channel. While this was necessary for development and early implementation, automating as much of this decision making process as possible is desirable, for both the lower latency and the greater rigor it would provide. Below we discuss current and near-future efforts to realize the goal of automated evaluation, categorization, and extension of data quality and auxiliary channel vetoes.
Automated categorization using a $\chi^2$ test
----------------------------------------------
One promising method that has been developed, and that has the potential to help automate recommendations for veto categories, is a figure of merit based on a $\chi^2$ test. For each DQ flag this $\chi^2$ statistic is given by $$\chi^2(\rho)=\sum_{k=1}^{R}\frac{(n_k (\rho)-T_k\langle n_t(\rho)\rangle)^2}{T_k\langle n_t(\rho)\rangle}\,,$$ where $R$ is the number of flagged windows, $\langle n_t(\rho)\rangle$ is the average number of triggers per unit time in the science run above a certain threshold $\rho$, $T_k$ is the duration of the flagged window $k$, and $n_k$ is the actual number of triggers above the threshold $\rho$ in the same window.
The null hypothesis is that the triggers are Poisson distributed, i.e. there is no correlation between the presence of triggers and the DQ flags. In our analysis we compute the figure of merit and test the null hypothesis at a confidence level of 95%. The higher the figure of merit, the stronger the correlation between triggers and DQ flags and thus the lower the category number.
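Under the stated null hypothesis the statistic is straightforward to evaluate; the sketch below is a direct transcription of the formula above (the function and argument names are ours):

```python
def dq_chi2(window_durations, window_counts, mean_rate):
    """chi^2 over the R flagged windows of one DQ flag.

    window_durations -- T_k, seconds, for each flagged window
    window_counts    -- n_k, triggers above threshold rho in window k
    mean_rate        -- <n_t(rho)>, triggers per second over the whole run
    """
    chi2 = 0.0
    for t_k, n_k in zip(window_durations, window_counts):
        mu = t_k * mean_rate  # Poisson expectation in window k
        chi2 += (n_k - mu) ** 2 / mu
    return chi2
```

The result can then be compared with the 95% point of a $\chi^2$ distribution with $R$ degrees of freedom (for example via `scipy.stats.chi2.ppf(0.95, R)`) to test the null hypothesis.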
As an example of the $\chi^2$ categorization scheme, the $\chi^2$ value for the H2 overflow mentioned previously is $\approx 100$ times higher than the typical range of category 3 vetoes and $\approx 1000$ times higher than the range typical of category 4 vetoes. For vetoes that have $\chi^2$ values near or on the boundaries between categories, we turn to the figures of merit mentioned previously, such as deadtime, used percentage, and efficiency, to determine which category a particular veto belongs to. For example, the veto for glitches in the TCS lasers of H2 has a $\chi^2$ value that falls near the lower range for category 2 and the higher range for category 3. However, the high used percentage and the low deadtime distinguish this veto from category 3 vetoes with similar $\chi^2$ values.
The $\chi^2$ method is a step towards automated categorization. An automated monitor that incorporates previously mentioned figures of merit and $\chi^2$ values to organize vetoes into categories is currently being developed.
Automated veto window padding determination
-------------------------------------------
The determination of window paddings (see Section 4.2) is another step of the veto selection pipeline that currently requires human input. A method to automate this step would be to look for quiet time intervals of pre-determined duration around the clusters. If the unclustered triggers remain below the minimum $\rho$ threshold for the chosen duration before and after the glitch’s loudest trigger, the earliest and latest triggers above the threshold would determine the left and right padding of the DQ window, respectively. Assuming the duration of the required padding for the DQ flag to be normally distributed, a final recommendation for the padding of that flag would be obtained by taking the average of the values for each window.
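The quiet-interval search described above might look like the following sketch, in which the quiet duration, the $\rho$ threshold, and the data layout are illustrative assumptions:

```python
def window_padding(raw, loudest_t, rho_min, quiet_s):
    """Suggest (left, right) padding around one glitch.

    raw       -- sorted (time, rho) unclustered triggers near the glitch
    loudest_t -- time of the glitch's loudest raw trigger
    Walk outward from the loudest trigger; stop on each side once the
    triggers above rho_min stay quiet for `quiet_s` seconds.
    """
    above = [t for t, r in raw if r >= rho_min]
    left = right = loudest_t
    for t in reversed([t for t in above if t <= loudest_t]):
        if left - t > quiet_s:
            break
        left = t
    for t in [t for t in above if t >= loudest_t]:
        if t - right > quiet_s:
            break
        right = t
    return loudest_t - left, right - loudest_t

def recommend_padding(per_window_paddings):
    """Average the per-window (left, right) paddings for one DQ flag,
    assuming the required padding is roughly normally distributed."""
    lefts, rights = zip(*per_window_paddings)
    return sum(lefts) / len(lefts), sum(rights) / len(rights)
```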
Multiple auxiliary channel veto algorithm
-----------------------------------------
Another approach that has been explored is that of defining vetoes when multiple auxiliary channels glitch coincidently, specifically by examining the output of the “QScan” time-frequency algorithm [@Chatterji:2004qg] over multiple auxiliary channels, at the time of detection candidates. When a number of auxiliary channels glitch simultaneously and at the same time as the gravitational wave channel, there is a strong possibility that the glitching has a non-astrophysical cause, particularly when the channels are physically related. For example, a number of the length sensing and control channels may glitch together in different parts of the interferometer, such as the beam splitter and reflected and dark ports. Sometimes the glitches in a set of length sensing and control channels will be associated with glitches in the alignment sensing and control channels. Many of these measure pitch and yaw of mirrors, including the test masses, and when mirror alignment and length disturbances occur simultaneously, it is unlikely that a transient also present in the gravitational wave channel will be astrophysical. Safety studies were successfully conducted, using all of the hardware injections from the first year of S5, to verify that these combinations of glitches could not be caused by the arrival of gravitational waves.
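A schematic version of such a multi-channel test is sketched below; the channel names, coincidence window, and minimum channel count are hypothetical, and the real method examines QScan time-frequency output rather than bare trigger times:

```python
def multi_channel_veto(candidate_t, aux_triggers, window=0.5, min_channels=3):
    """Veto a candidate if glitches appear near-simultaneously in
    several (ideally physically related) auxiliary channels.

    aux_triggers -- dict mapping channel name -> sorted glitch times (s)
    Returns (vetoed, list of coincident channels).
    """
    coincident = [ch for ch, times in aux_triggers.items()
                  if any(abs(t - candidate_t) <= window for t in times)]
    return len(coincident) >= min_channels, coincident
```

Because the test is run only on small intervals around each candidate, no deadtime is accumulated: a candidate simply survives or is vetoed, as described below.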
Tests of the efficacy of proposed vetoes were carried out on data from the S5 search with total mass between 25 and 100 solar masses. On outlier coincident triggers with $\rho \geq 200$ remaining after the application of existing veto categories 1 through 4, from 94 to 100% of the triggers would be vetoed for each single interferometer, and 100% of the coincident triggers would be vetoed in one or more interferometers. Since these vetoes are run on small time intervals around the times of the detection candidate triggers, the dead time is not calculated: a candidate survives or is vetoed.
Future work planned includes running this algorithm over the times of candidates using a larger set of the available channels, rather than only the channels corresponding to length and alignment sensing and control, to which the above studies were restricted to reduce computation time. It is expected that additional sets of channels will be useful, particularly the environmental channels.
Summary
=======
In this paper, we showed how we developed techniques for vetoing non-astrophysical transient noises safely and effectively, in order to reduce the effect of noise transients on astrophysical searches for low mass CBCs in the first year of the initial LIGO data run. We based our vetoes on data quality flags created by detector characterization work, as well as KW triggers from auxiliary channels with high used percentages. Though we approached each flag and each channel individually, and though different flags and different channels reflected a variety of specific causes, we found that the effects on the gravitational wave channel fell into a few common groupings. Flags and channels that responded to similar phenomena generally required similar windows, had similar deadtimes, were effective for similar populations of triggers, and therefore were placed in the same categories. The LSC used these categories for sequentially studying the significance of gravitational wave candidates rising above background in the CBC searches.
Going forward, we intend to use the experience gained to finish ongoing automation work both to select veto window paddings and to provide recommendations for veto categorization. There will be data quality flag and auxiliary channel based vetoes developed for use in LIGO’s S6 science run. The goal will be to analyze the auxiliary channel KW triggers in near real time, and to have vetoes defined on a week-by-week basis for both types of vetoes. It is probable that there will be marginal cases for categorization that require further human review, but automation will allow us to focus our time on these cases.
This work is partially supported by the National Science Foundation grants PHY-0457622 (NZ, TR), PHY-0553422 (NC, TI, MC, JC), PHY-0555406 (KR), PHY-0600259 (JRS), PHY-0605496 (GG, JS), PHY-0653550 (LC), PHY-0757937 (MC, BR), PHY-0757957 (PS), PHY-0847611(DAB), PHY-0854790(NC, TI, MC, JC), and PHY-0905184 (GG, JS).
References {#references .unnumbered}
==========
[10]{} url \#1[[\#1]{}]{}urlprefix\[2\]\[\][[\#2](#2)]{} Abbott B [*et al.*]{} (LIGO Scientific Collaboration) 2009 [ *Rept. Prog. Phys.*]{} [**72**]{} 076901 (*Preprint* )
Abbott B [*et al.*]{} (LIGO Scientific Collaboration) 2009 [ *Phys. Rev. D*]{} [**79**]{} 122001 (*Preprint* )
Abbott B [*et al.*]{} (LIGO Scientific Collaboration) 2009 [ *Phys. Rev. D*]{} [**80**]{} 047101 (*Preprint* )
Blackburn L [*et al.*]{} 2008 [*Class. Quant. Grav.*]{} [**25**]{} 184004 (*Preprint* )
Chatterji S, Blackburn L, Martin G and Katsavounidis E 2004 [*Class. Quant. Grav.*]{} [**21**]{} S1809–S1818 (*Preprint* )
Wainstein L A and Zubakov V D 1962 [*Extraction of signals from noise*]{} (Englewood Cliffs, NJ: Prentice-Hall)
Allen B, Anderson W G, Brady P R, Brown D A and Creighton J D E 2005 [FINDCHIRP: An algorithm for detection of gravitational waves from inspiraling compact binaries]{} (*Preprint* )
Owen B J and Sathyaprakash B S 1999 [*Phys. Rev. D*]{} [**60**]{} 022002
Allen B 2005 [*Phys. Rev. D*]{} [**71**]{} 062001
Abbott B [*et al.*]{} ([LIGO]{} Scientific Collaboration) 2008 [ *Phys. Rev. D*]{} [**77**]{} 062002 (*Preprint* )
Abbott B [*et al.*]{} ([LIGO Scientific Collaboration]{}) 2007 Tuning matched filter searches for compact binary coalescence Tech. Rep. [LIGO]{}-T070109-01 <http://www.ligo.caltech.edu/docs/T/T070109-01.pdf>
Brown D A 2004 [*Search for gravitational radiation from black hole [MACHOs]{} in the [G]{}alactic halo*]{} Ph.D. thesis University of Wisconsin–Milwaukee
Christensen N [*et al.*]{} ([LIGO]{} Scientific Collaboration) 2005 [*Class. Quant. Grav.*]{} [**22**]{} S1059–S1068
Abbott B P [*et al.*]{} (LIGO Scientific) 2009 [*Phys. Rev.*]{} [**D80**]{} 102001 (*Preprint* )
Christensen N, Shawhan P and Gonz[á]{}lez G 2004 [*Class. Quant. Grav.*]{} [**21**]{} S1747–S1755
Abbott B [*et al.*]{} ([LIGO Scientific Collaboration]{}) 2010 Data quality and veto choices of S5 lowmass CBC searches Tech. Rep. [LIGO]{}-T1000056-v2 <https://dcc.ligo.org/cgi-bin/DocDB/ShowDocument?docid=8982>
[ Astro2020 Science White Paper]{}
[Radio, Millimeter, Submillimeter Observations of the Quiet Sun]{}
**Thematic Areas:** $\square$ Planetary Systems $\square$ Star and Planet Formation $\square$ Formation and Evolution of Compact Objects $\square$ Cosmology and Fundamental Physics $\boxtimes$ Stars and Stellar Evolution $\square$ Resolved Stellar Populations and their Environments $\square$ Galaxy Evolution $\square$ Multi-Messenger Astronomy and Astrophysics
**Principal Author:**\
Tim Bastian, National Radio Astronomy Observatory\
Email: tbastian@nrao.edu\
Phone: (434) 296-0348\
**Co-authors:**\
Bin Chen, New Jersey Institute of Technology\
Dale E. Gary, New Jersey Institute of Technology\
Gregory D. Fleishman, New Jersey Institute of Technology\
Lindsay Glesener, University of Minnesota\
Colin Lonsdale, MIT/Haystack\
Pascal Saint-Hilaire, University of California, Berkeley\
Stephen M. White, Air Force Research Laboratory\
**Executive Summary** Identification of the mechanisms responsible for heating the solar chromosphere and corona remains an outstanding problem, one of great relevance to late-type stars as well. There has been tremendous progress in the past decade, largely driven by new instruments, new observations, and sophisticated modeling efforts. Despite this progress, gaps remain. We briefly discuss the need for radio coverage of the 3D solar atmosphere and discuss the requirements.
Introduction
============
An outstanding problem in solar physics, and by extension stellar physics, is how the dynamic chromosphere and corona are heated. The chromosphere and corona are only visible to the naked eye during solar eclipses, the chromosphere as a brilliant, ruby-red ring of beads just above the occulted photosphere, the corona as a pearly white crown. The fundamental question is by what [*non-radiative*]{} mechanism(s) the chromosphere is heated to $>10^4$ K, and the corona to several $\times 10^6$ K. Leading theoretical ideas for how the chromosphere and corona are heated involve some form of resonant wave heating (e.g., van Ballegooijen et al. 2011) or “nanoflares” (Parker 1988), with many variants of these ideas under study: see, for example, recent work (Priest et al. 2018, Syntelis et al. 2019) that explores a flux cancellation nanoflare model.
This white paper summarizes progress made during the past decade, particularly new observations and progress in modeling, and points out the need for enabling the unique diagnostic potential offered by broadband radio imaging spectropolarimetry in the coming decade.
New Instrumentation, New Models
===============================
Significant progress has been stimulated by observations from a range of new observational capabilities that have come online during the past decade, with more soon to come. In addition, increasingly advanced models, motivated by new observations, have been developed with which to test our understanding. Large facilities and missions include the following:
[**Solar Dynamics Observatory (SDO)**]{}: The SDO, a previous decadal priority, was launched in 2010. It carries the Helioseismic and Magnetic Imager, producing high resolution photospheric magnetograms and dopplergrams; the Extreme Ultraviolet Variability Experiment, which makes precision measurements of the UV irradiance, which heats the Earth’s upper atmosphere and creates the ionosphere; and the Atmospheric Imaging Assembly, which produces high resolution images ($0.6''$/pixel) of the Sun in extreme UV wavelengths that sample temperatures from $5\times 10^4$ K to $20\times 10^6$ K.
[**Interface Region Imaging Spectrograph (IRIS; De Pontieu et al. 2014)**]{}: IRIS is a small explorer that was launched in 2013. IRIS provides high resolution ($0.35''$), high cadence images and spectra of the solar chromosphere at UV wavelengths, sampling lines with temperatures ranging from $5000$ K to $65\times10^3$ K.
[**Atacama Large Millimeter/Submillimeter Array (ALMA)**]{}: Although dedicated in 2012, it was not until 2016 that ALMA became available for solar observing at mm-$\lambda$ with arcsec imaging capabilities (Shimojo et al. 2017, White et al. 2017). Emission from the quiet Sun at mm-$\lambda$ originates from the low chromosphere. The source function is Planckian and since the emission is in the Rayleigh-Jeans regime, the observed flux density is linearly proportional to the kinetic temperature of the emitting plasma. ALMA observations are therefore highly complementary to those at ultraviolet wavelengths.
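The linear flux-density-to-temperature relation in the Rayleigh-Jeans regime can be made concrete with a short numerical sketch. This is our own illustration, not an ALMA pipeline routine; the function names and the example numbers (50 mJy in a 1 arcsec$^2$ beam at 100 GHz) are hypothetical.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]
JY = 1e-26           # 1 jansky [W m^-2 Hz^-1]
ARCSEC2_TO_SR = (math.pi / (180 * 3600)) ** 2  # 1 arcsec^2 in steradians

def brightness_temperature(flux_jy, nu_ghz, omega_arcsec2):
    """Rayleigh-Jeans brightness temperature T_b = S c^2 / (2 k nu^2 Omega)."""
    nu = nu_ghz * 1e9
    omega = omega_arcsec2 * ARCSEC2_TO_SR
    return flux_jy * JY * C**2 / (2 * K_B * nu**2 * omega)

def flux_density(t_b, nu_ghz, omega_arcsec2):
    """Inverse relation: S = 2 k nu^2 T_b Omega / c^2, returned in Jy."""
    nu = nu_ghz * 1e9
    omega = omega_arcsec2 * ARCSEC2_TO_SR
    return 2 * K_B * nu**2 * t_b * omega / C**2 / JY

# Because the relation is linear, doubling the flux doubles T_b,
# and the two functions invert each other exactly.
t = brightness_temperature(0.05, 100.0, 1.0)  # 50 mJy in a 1 arcsec^2 beam at 100 GHz
```

Since the emission is optically thick thermal bremsstrahlung in this regime, the observed flux density maps directly onto the kinetic temperature of the emitting chromospheric plasma.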
![Example of ALMA, IRIS, and SDO observations of a large sunspot. After Bastian et al. 2017.[]{data-label="fig:ALMA"}](ALMA_IRIS.pdf){width="\textwidth"}
And soon to come is:
[**Daniel K. Inouye Solar Telescope (DKIST)**]{}: In 2020 DKIST will become available as the flagship ground-based solar telescope optimized for O/IR performance. DKIST will provide ultra-high resolution imaging (0.1") of the solar photosphere and chromosphere at spectral lines that span the many scale heights of the chromosphere and will resolve magnetic elements on the smallest scales. It will, moreover, advance coronal magnetography.
These missions and facilities have been supplemented by several other ground-based and sub-orbital initiatives. Examples include the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) program (e.g., Giono et al. 2017), the balloon-borne Sunrise experiment (Solanki et al. 2010), and the Goode Solar Telescope (Goode et al. 2010). All have opened new observing regimes that have enabled new diagnostics and, as a result, have inevitably raised new questions. IRIS and ALMA have opened the ultraviolet (UV) and millimeter (mm) wavelength regimes for complementary high-resolution exploration of the solar chromosphere, the thin, dynamic, and complex interface between the visible photosphere and the hot solar corona. SDO has provided sustained, high-resolution and high-time-resolution records of both quiescent and active phenomena (flares, CMEs, jets, filament eruptions) in multiple passbands, providing new insights into the structure and dynamics of the thermal solar atmosphere.
On the modeling front, significant effort has gone into advanced numerical models of the dynamic chromosphere in light of high-resolution, time-resolved observations of UV lines by IRIS, lines that form under conditions of non-LTE. These go well beyond the semi-empirical hydrostatic models that served as references for years (e.g., Vernazza, Avrett, & Loeser 1981) to 3D radiative magnetohydrodynamic codes (e.g., the Bifrost model: Gudiksen et al. 2011; see also Carlsson et al. 2016 and references therein). For the corona and solar wind, the BATS-R-US model (van der Holst et al. 2014 and references therein) has served as an innovation platform and reference.
A Gap in Radio Coverage
=======================
However, despite a wealth of new observations and modeling, fundamental questions remain: what are the dominant mechanisms that heat the solar atmosphere from chromospheric to coronal heights, what are the mechanisms of transport and loss, and how does the quiet atmosphere link to the expanding solar wind?
A serious gap in observational coverage to date is the lack of sustained and comprehensive radio observations with the required frequency coverage, sensitivity, and angular resolution. Radio emission from the quiet Sun is dominated by thermal bremsstrahlung emission although nonthermal emission plays a role (microflares, jets, and other transient perturbations). In contrast to UV and EUV emission from the Sun (IRIS, SDO), the radiative transfer of radio and millimeter emission from the Sun is straightforward and serves as an important counterpoint to UV/EUV observations.[^1] The success of ALMA is notable, and it has already raised interesting challenges in this regard (e.g., Bastian et al. 2017).
It is important to note that ALMA only probes the solar chromosphere. To realize the potential of radio diagnostics, full radio coverage from the chromosphere to the corona is needed. Partial coverage is provided by non-solar-dedicated instruments. The [*Jansky Very Large Array*]{} provides imaging capabilities from 1–50 GHz, sampling the middle chromosphere into the low corona. Occasional mapping at discrete frequencies is performed by the VLA (e.g., Fig. 2). Work by Schonfeld et al. 2015, for example, explored the coronal sources of the 10.7 cm flux, a key proxy for EUV irradiance. But mapping of the full VLA frequency range on a regular basis is not possible.
The Murchison Widefield Array (MWA; Tingay et al. 2013) is a low-frequency imaging array operating in Australia at meter wavelengths (70-300 MHz). The Sun and heliosphere are a key component of the MWA science program. It has rejuvenated meter-wavelength observations of the Sun with new findings on radio bursts and the quiet Sun. Sharma et al. (2018) find evidence for extremely weak nonthermal emission from the quiet corona using high-dynamic-range MWA observations. Coronal holes (CHs) are regions in the solar atmosphere that are magnetically open to the solar wind. They are the source of fast solar wind streams which, in turn, can drive recurrent geomagnetic activity. The structure and connection of CHs to fast streams is of key interest. Rahman et al. (2019) have recently shown that CH emission at meter wavelengths is dark relative to the background corona at wavelengths $<2$ m, yet becomes brighter at longer wavelengths. McCauley et al. (2019) show that CH emission displays fascinating polarization signatures that are not yet understood (Fig. 3).
Requirements for Next-Generation Radio Instrumentation
======================================================
To exploit the unique sensitivity of radio observations to both thermal and nonthermal emissions from the “quiet” solar atmosphere and to trace the solar magnetic field from chromospheric heights up into the corona, the following instrumental requirements must be fulfilled:
- Frequency coverage: continuous coverage from 50 MHz (6 m) to 20 GHz (1.5 cm) to enable 3D imaging of the solar atmosphere
- Spectral resolution: a spectral resolution of $\Delta\nu/\nu\approx 1$%
- High time resolution: in order to observe weak microflares and smaller energy release events, a time resolution of 1 s is needed.
- High angular resolution: scattering in the Sun’s corona limits the usable angular resolution to roughly $20''/\nu_9$, where $\nu_9$ is the frequency in GHz, i.e., $1''$ at 20 GHz.
- High-dynamic-range imaging at any given frequency: a dynamic range of at least $10^3:1$ is needed at each frequency.
- Dual-polarization performance: Measurements of the total intensity are required as are those of the Stokes V parameter, which contains quantitative information about the magnetic field. The Faraday depth of the corona is very high, washing out linearly polarized emission.
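One consequence of the scattering limit above is worth noting: because the scattering-broadened beam scales as $1/\nu$ and the diffraction beam $\lambda/B$ of a fixed-size array scales the same way, the maximum useful baseline implied by $\theta \approx 20''/\nu_9$ is roughly frequency independent. A back-of-the-envelope sketch (our own illustration, not a design calculation):

```python
import math

C = 2.99792458e8                  # speed of light [m/s]
ARCSEC = math.pi / (180 * 3600)   # 1 arcsec in radians

def scattering_limit_arcsec(nu_ghz):
    """Usable angular resolution ~ 20''/nu_9 set by coronal scattering."""
    return 20.0 / nu_ghz

def max_useful_baseline_m(nu_ghz):
    """Baseline whose diffraction beam lambda/B matches the scattering limit."""
    lam = C / (nu_ghz * 1e9)
    theta = scattering_limit_arcsec(nu_ghz) * ARCSEC
    return lam / theta

# The scattering-matched baseline is ~3 km across the whole 50 MHz - 20 GHz band.
baselines = {nu: max_useful_baseline_m(nu) for nu in (0.05, 1.0, 20.0)}
```

The near-constant $\sim$3 km figure suggests that a single compact array footprint can deliver scattering-limited imaging across the entire band.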
Fulfillment of these requirements ensures that new constraints on the electron number density, temperature, and the magnetic field can be provided in 3D that are complementary to existing and planned missions and facilities.
These requirements are met by a next-generation radioheliograph known as the Frequency Agile Solar Radiotelescope (FASR), a facility that has been recommended as a priority by previous decadal surveys, both the Astronomy & Astrophysics and the Solar & Space Physics decadals. As a mid-scale project, however, FASR has lacked a funding mechanism to move forward. Until now. With the implementation of NSF’s Mid-scale Research Infrastructure program, FASR can now be made a reality.
Bastian, T. S., Chintzoglou, G., De Pontieu, B., et al. 2017, [The Astrophysical Journal, Letters]{}, 845, L19\
De Pontieu, B., Title, A. M., Lemen, J. R., et al. 2014, [Solar Physics]{}, 289, 2733\
De Pontieu, B., De Moortel, I., Martinez-Sykora, J., & McIntosh, S. W. 2017, [The Astrophysical Journal, Letters]{}, 845, L18\
Giono, G., Ishikawa, R., Narukage, N., et al. 2017, [Solar Physics]{}, 292, 57\
Goode, P. R., Coulter, R., Gorceix, N., Yurchyshyn, V., & Cao, W. 2010, Astronomische Nachrichten, 331, 620\
Gošić, M., de la Cruz Rodríguez, J., De Pontieu, B., et al. 2018, [The Astrophysical Journal]{}, 857, 48\
Pesnell, W. D., Thompson, B. J., & Chamberlin, P. C. 2012, [Solar Physics]{}, 275, 3\
Priest, E. R., Chitta, L. P., & Syntelis, P. 2018, [The Astrophysical Journal, Letters]{}, 862, L24\
Rahman, M. M., McCauley, P. I., & Cairns, I. H. 2019, [Solar Physics]{}, 294, 7\
Sharma, R., Oberoi, D., & Arjunwadkar, M. 2018, [The Astrophysical Journal]{}, 852, 69\
Scherrer, P. H., Schou, J., Bush, R. I., et al. 2012, [Solar Physics]{}, 275, 207\
Schonfeld, S. J., White, S. M., Henney, C. J., Arge, C. N., & McAteer, R. T. J. 2015, [The Astrophysical Journal]{}, 808, 29\
Shimojo, M., Iwai, K., Asai, A., et al. 2017, [The Astrophysical Journal]{}, 848, 62\
Shimojo, M., Bastian, T. S., Hales, A. S., et al. 2017, [Solar Physics]{}, 292, 87\
Solanki, S. K., Barthol, P., Danilovic, S., et al. 2010, [The Astrophysical Journal, Letters]{}, 723, L127\
Solanki, S. K., Riethmüller, T. L., Barthol, P., et al. 2017, [The Astrophysical Journal Supplement]{}, 229, 2\
Syntelis, P., Priest, E. R., & Chitta, L. P. 2019, [The Astrophysical Journal]{}, 872, 32\
Testa, P., De Pontieu, B., Allred, J., et al. 2014, Science, 346, 1255724\
Tingay, S. J., Goeke, R., Bowman, J. D., et al. 2013, [Publications of the Astronomical Society of Australia]{}, 30, e007\
van Ballegooijen, A. A., Asgari-Targhi, M., Cranmer, S. R., & DeLuca, E. E. 2011, [The Astrophysical Journal]{}, 736, 3\
van der Holst, B., Sokolov, I. V., Meng, X., et al. 2014, [The Astrophysical Journal]{}, 782, 81\
Wedemeyer, S., Bastian, T., Brajša, R., et al. 2016, [Space Science Reviews]{}, 200, 1\
White, S. M., Iwai, K., Phillips, N. M., et al. 2017, [Solar Physics]{}, 292, 88\
Woods, T. N., Eparvier, F. G., Hock, R., et al. 2012, [Solar Physics]{}, 275, 115\
Wootten, A., & Thompson, A. R. 2009, IEEE Proceedings, 97, 1463
[^1]: The degree of ionization can depart significantly from the expectations of statistical ionization equilibrium in the solar chromosphere. Therefore, while the source function is in LTE, the opacity can be far from its LTE value.
---
abstract: |
We consider some special classes of Lévy processes with no gaussian component whose Lévy measure is of the type $\pi(dx)=e^{\gamma
x}\nu(e^x-1)\,dx$, where $\nu$ is the density of the stable Lévy measure and $\gamma$ is a positive parameter which depends on its characteristics. These processes were introduced in [@CC] as the underlying Lévy processes in the Lamperti representation of conditioned stable Lévy processes. In this paper, we compute explicitly the law of these Lévy processes at their first exit time from a finite or semi-finite interval, the law of their exponential functional and the first hitting time probability of a pair of points.\
[Key words and phrases]{}: Positive self-similar Markov processes, Lamperti representation, conditioned stable Lévy processes, first exit time, first hitting time, exponential functional.\
MSC 2000 subject classifications: 60G18, 60G51, 60B52.
---
[Some explicit identities associated with positive\
self-similar Markov processes.]{}\
[^1]
Introduction
============
In recent years there has been a general recognition that Lévy processes play an ever more important role in various domains of applied probability theory such as financial mathematics, insurance risk, queueing theory, statistical physics or mathematical biology. In many instances there is a need for explicit examples of Lévy processes where tractable mathematical expressions in terms of the characteristics of the underlying Lévy process may be used for the purpose of numerical simulation. Depending on the problem at hand, particular functionals are involved such as the first entrance times and overshoot distributions.
In this paper, we exhibit some special classes of Lévy processes for which we can compute explicitly the law of the position at the first exit time of an interval, the two points hitting probability and the exponential functional. Moreover, two new, concrete examples of scale functions for spectrally one sided processes will fall out of our analysis.
Known examples of overshoot distributions concern essentially (some particular classes of) strictly stable processes and processes whose jumps are of a compound Poisson nature with exponential jumps (or, slightly more generally, whose jump distribution has a rational Fourier transform). For example, let us state the solution of the two-sided exit problem for completely asymmetric stable processes. In that case we take $(X,\mathbf{P}_x)$, $x\in \mathbb{R}$, to be a spectrally positive Lévy stable process with index $\alpha\in(1,2)$ starting from $x$. Let $\sigma^+_a = \inf\{t>0 : X_t > a\}$ and $\sigma^-_0 = \inf\{t>0 : X_t <0\}$. It is known (cf. Rogozin [@ro]) that for $y>0$, $$\mathbf{P}_x \Big(X_{\sigma^+_a}-a\in {\textrm{d}}y; \sigma^+_a <
\sigma^-_0\Big) = \frac{\sin\pi(\alpha-1)} {\pi}\left(\frac{a-x}{x
y}\right)^{(\alpha-1)} \frac{{\textrm{d}}y}{(y+a)(y+a-x)}.$$ For the case of processes whose jumps are of a compound Poisson nature with exponential jumps, the overshoot distribution is again exponentially distributed; see Kou and Wang [@KW]. See also Lewis and Mordecki [@LM] and Pistorius [@P] for the more general case of a jump distribution with a rational Fourier transform and for which the overshoot distribution belongs to the same class as the respective jump distribution of the underlying Lévy process.
The exponential functional of a Lévy process, $\xi$, i.e. $$\int_0^\infty\exp\Big\{-\xi_s\Big\}\,{\textrm{d}}s ,$$ also appears in various aspects of probability theory, such as self-similar Markov processes, random processes in random environment, fragmentation processes, mathematical finance, and Brownian motion on hyperbolic spaces, to name but a few. In general, the distribution of exponential functionals can be rather complicated. Nonetheless, it is known in the cases that $\xi$ is either a standard Poisson process, a Brownian motion with drift, or a member of a particular class of spectrally negative Lévy processes of bounded variation whose Laplace exponent is of the form $$\psi(q)=\frac{q(q+1-a)}{b+q}, \qquad q\geq 0,$$ where $0<a<1<a+b$. See Bertoin and Yor [@BY] for an overview of this topic.
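In the Brownian-with-drift case the law of the exponential functional is explicit (Dufresne's identity: $\int_0^\infty e^{-2(B_s+\nu s)}\,{\textrm{d}}s$ is distributed as $1/(2\gamma_\nu)$, with $\gamma_\nu$ a gamma variable of parameter $\nu$, so the mean is $1/(2(\nu-1))$ when $\nu>1$), which makes it a convenient target for a simulation sanity check. A rough Monte Carlo sketch under our own discretization choices (not taken from this paper):

```python
import math
import random

def exp_functional_bm_drift(nu, dt=0.005, horizon=5.0, n_paths=1500, seed=1):
    """Monte Carlo mean of int_0^T exp(-2(B_s + nu s)) ds via an Euler scheme.

    For nu > 1 the exact mean is 1/(2(nu - 1)); the finite horizon T truncates
    a tail that is exponentially small when the drift nu is positive.
    """
    rng = random.Random(seed)
    n_steps = int(horizon / dt)
    sqrt_dt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        b = 0.0          # Brownian path value
        integral = 0.0   # left Riemann sum of the exponential functional
        for k in range(n_steps):
            integral += math.exp(-2.0 * (b + nu * k * dt)) * dt
            b += sqrt_dt * rng.gauss(0.0, 1.0)
        total += integral
    return total / n_paths

est = exp_functional_bm_drift(3.0)  # exact mean 1/(2*(3-1)) = 0.25
```

The estimate carries both Monte Carlo noise and a small $O(\Delta t)$ discretization bias, so it should only be expected to agree with the exact mean to a few percent.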
The classes of Lévy processes that we consider in this paper do not fulfill a scaling property and may have two-sided jumps. Moreover, they have no Gaussian component and their Lévy measure is of the type $\pi(dx)=e^{\gamma x}\nu(e^x-1)\,dx$, where $\nu$ is the density of the stable Lévy measure with index $\alpha\in(0,2)$ and $\gamma$ is a positive parameter which depends on its characteristics. It is not difficult to see that the latter Lévy measure has a density which is asymptotically equivalent to that of an $\alpha$-stable process for small $|x|$ and has exponential decay for large $|x|$. This implies that such processes have paths which are of bounded or unbounded variation according as $\alpha\in(0,1)$ or $\alpha\in
[1,2)$, respectively. Further, they also have exponential moments. Special families of tempered stable processes, also known as CGMY processes, are classes of Lévy processes with similar properties to the aforementioned which have enjoyed much exposure in the mathematical finance literature as instruments for modelling risky assets. See for example Carr et al. [@CGMY], Boyarchenko and Levendorskii [@boy], Cont [@cont] or Schoutens [@Sch]. Although the Lévy processes presented in this paper are not tempered stable processes, it is intriguing to note that they possess properties which have proved to be popular for financial models but now with the additional luxury that they come with a number of explicit fluctuation identities.
We conclude the introduction with a brief outline of the remainder of the paper. The next section introduces the classes of processes with which this study is concerned. In Section 3, we give the law of the position at the first exit time from a (semi-finite) interval. In Section 4 we compute explicitly the two point hitting probability and in Section 5, we study the law of the exponential functional of Lévy-Lamperti processes.
Preliminaries on Lévy-Lamperti processes {#prelim}
========================================
Denote by $\mathcal{D}$ the Skorokhod space of $\mathbb{R}$-valued càdlàg paths and by $X$ the canonical process of the coordinates on $\mathcal{D}$. Positive ($\mathbb{R}_+$-valued), self-similar Markov processes $(X,{\Bbb{P}}_x)$, $x>0$, are strong Markov processes with paths in $\mathcal{D}$, which fulfill a scaling property, i.e. there exists a constant $\alpha > 0$ such that for any $b>0$: $$\label{scale}
\mbox{\it The law of $\;(bX_{b^{-\alpha}t},\,t\ge0)$ under ${\Bbb{P}}_x$ is
${\Bbb{P}}_{bx}$.}$$ We shall refer to these processes as pssMp. According to Lamperti [@La], any pssMp up to its first hitting time of 0 may be expressed as the exponential of a Lévy process, time changed by the inverse of its exponential functional. More formally, let $(X,{\Bbb{P}}_x)$ be a pssMp with index $\alpha>0$, starting from $x>0$, set $$S = \inf \{t>0 : X_t =0\}$$ and write the canonical process $X$ in the following form: $$\label{lamp}
X_t=x\exp\left\{\xi_{\tau(tx^{-\alpha})}\right\} \qquad 0\le t<S\,,$$ where for $t<S$, $$\tau(t) = \inf \left\{s\geq 0 : \int_0^s
\exp\left\{\alpha\xi_u\right\} {\textrm{d}}u \geq t\right\}.$$ Then under ${\Bbb{P}}_x$, $\xi=(\xi_t,\;t\geq 0)$ is a Lévy process started from $0$ whose law does not depend on $x>0$ and such that:
- if ${\Bbb{P}}_x(S=+\infty)=1$, then $\xi$ has an infinite lifetime and $\displaystyle\limsup_{t\rightarrow+\infty}\xi_t=+\infty$, ${\Bbb{P}}_x$-a.s.,
- if ${\Bbb{P}}_x(S<+\infty,\,X(S-)=0)=1$, then $\xi$ has an infinite lifetime and $\displaystyle\lim_{t\to\infty} \xi_t = -\infty$, ${\Bbb{P}}_x$-a.s.,
- if ${\Bbb{P}}_x(S<+\infty,\,X(S-)>0)=1$, then $\xi$ is killed at an independent exponentially distributed random time with parameter $\lambda>0$.
As mentioned in [@La], the probabilities ${\Bbb{P}}_x(S=+\infty)$, ${\Bbb{P}}_x(S<+\infty,\,X(S-)=0)$ and ${\Bbb{P}}_x(S<+\infty,\,X(S-)>0)$ are 0 or 1 independently of $x$, so that the three classes presented above are exhaustive. Moreover, for any $t < \int_0^{\infty}\exp\{\alpha\xi_s\}\,{\textrm{d}}s$, $$\label{1664}\tau(t)=\int_0^{x^\alpha t}\frac{{\textrm{d}}s}
{(X_s)^\alpha}\,,\;\;\; {\Bbb{P}}_x-\mbox{a.s.}$$ Therefore (\[lamp\]) is invertible and yields a one-to-one relation between the class of pssMps killed at time $S$ and that of Lévy processes.\
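The Lamperti time change (\[lamp\]) is easy to implement numerically: given a discretized path of $\xi$, one accumulates the additive functional $\int_0^s e^{\alpha\xi_u}\,{\textrm{d}}u$ and reads off $X$ on the resulting irregular physical-time grid. A minimal sketch (the choice of Brownian motion with drift as a stand-in for $\xi$ is purely illustrative, not one of the Lévy-Lamperti processes of this paper):

```python
import math
import random

def lamperti_path(x, alpha, xi, dt):
    """Map a discretized Levy path xi (xi[k] ~ xi_{k dt}, xi[0] = 0) to the
    pssMp X_t = x * exp(xi_{tau(t x^{-alpha})}) on its physical time grid."""
    times, path = [0.0], [x]
    integral = 0.0  # left Riemann sum of int_0^s exp(alpha * xi_u) du
    for k in range(len(xi) - 1):
        integral += math.exp(alpha * xi[k]) * dt
        # tau(integral) = (k+1) dt, reached at physical time t = x**alpha * integral
        times.append(x**alpha * integral)
        path.append(x * math.exp(xi[k + 1]))
    return times, path

rng = random.Random(0)
dt, n = 1e-3, 2000
xi = [0.0]
for _ in range(n):  # Brownian motion with drift 1/2 as a stand-in Levy process
    xi.append(xi[-1] + 0.5 * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0))
t, X = lamperti_path(1.0, 1.5, xi, dt)
```

By construction the resulting path starts from $x$, stays strictly positive, and is indexed by a strictly increasing physical clock, as the representation requires.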
Now let us consider three particular classes of pssMp. (We refer to [@CC] for more details in what follows.) The first one is identified as a stable Lévy process killed when it first exits from the positive half-line. In particular, if $\mathbf{P}_x$ is the law of a stable Lévy process with index $\alpha$ (or $\alpha$-stable process for short) initiated from $x>0$ with $\alpha\in(0,2]$, then with $T=\inf\{t:X_t\le0\}$, under $\mathbf{P}_x$, the process $$X_t{\mbox{\rm 1\hspace{-0.04in}I}}_{\{t<T\}}$$ is a pssMp which satisfies condition $(ii)$ if it has no negative jumps or $(iii)$ if it has negative jumps. We call $\xi^*$ the Lévy process (with finite or infinite lifetime) resulting from the Lamperti representation of the killed stable process. The characteristic exponent of $\xi^*$ has been computed in [@CC] and is given by $$\label{phistar}
\Phi^*(\lambda)=ia^*\lambda+\int_{\mathbb{R}}[e^{i\lambda
x}-1-i\lambda(e^x-1){\mbox{\rm 1\hspace{-0.04in}I}}_{\{|e^x-1|<1\}}]\pi^*(x)\,{\textrm{d}}x-c_{-}\alpha^{-1}\,, \;\;\;\lambda\in\mathbb{R}\,,$$ where $a^*$ is a constant, $$\pi^*(x)=\frac{c_+e^x}{(e^x-1)^{\alpha+1}}{\mbox{\rm 1\hspace{-0.04in}I}}_{\{x>0\}}+
\frac{c_-e^x}{(1-e^x)^{\alpha+1}}{\mbox{\rm 1\hspace{-0.04in}I}}_{\{x<0\}}\,,$$ and $c_-$, $c_+$ are nonnegative constants such that $c_-c_+>0$. Note that the Lévy measure of $\xi^*$ satisfies $\pi^*(x)=e^x\nu(e^x-1)$, where $\nu$ is the density of the stable Lévy measure with index $\alpha$ and symmetry parameters $c_-$ and $c_+$.
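The behaviour of $\pi^*$ can be checked directly: on $x>0$ the density behaves like the stable density $c_+x^{-(\alpha+1)}$ as $x\downarrow 0$ and decays like $c_+e^{-\alpha x}$ as $x\to\infty$. A quick numerical sketch (the parameter values are arbitrary):

```python
import math

def pi_star_pos(x, alpha, c_plus):
    """Density of the Levy measure of xi* on x > 0: c_+ e^x / (e^x - 1)^(alpha+1)."""
    return c_plus * math.exp(x) / (math.exp(x) - 1.0) ** (alpha + 1.0)

alpha, c_plus = 1.5, 1.0

# Near 0: pi*(x) ~ c_+ x^{-(alpha+1)}, the alpha-stable density.
small = 1e-4
ratio_small = pi_star_pos(small, alpha, c_plus) / (c_plus * small ** (-(alpha + 1.0)))

# At infinity: pi*(x) ~ c_+ e^{-alpha x}, exponential decay.
large = 20.0
ratio_large = pi_star_pos(large, alpha, c_plus) / (c_plus * math.exp(-alpha * large))
```

Both ratios are within a fraction of a percent of 1, consistent with the stable-like singularity at the origin and the exponential tails.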
The second class is that of stable processes conditioned to stay positive. (See for instance [@Ch] for an overview of such processes.) A process in this class is the result of a Doob $h$-transform with $h(x) =
x^{\alpha\rho}$ and $\rho=\mathbf{P}_0(X_1<0)$. More precisely, $h$ is invariant for the killed process mentioned above $(X_t{\mbox{\rm 1\hspace{-0.04in}I}}_{\{t<T\}},\mathbf{P}_x)$ and the law ${\Bbb{P}}^\uparrow_x$ defined on each $\sigma$-field $\mathcal{F}_t$ generated by the canonical process up to time $t$ by $$\label{uparrow}
\left.\frac{{\textrm{d}}\mathbb{P}^\uparrow_x}{{\textrm{d}}\mathbf{P}_x}\right|_{\mathcal{F}_t}
= \frac{X^{\alpha\rho}_t}{x^{\alpha\rho}}\mathbf{1}_{\{t<T\}}$$ is that of a pssMp which drifts toward $+\infty$ (in particular it satisfies condition $(i)$). Then the underlying Lévy process, which will be denoted by $\xi^\uparrow$, is such that $$\lim_{t\rightarrow+\infty}\xi^\uparrow_t=+\infty,\qquad
\textrm{a.s.,}$$ and from [@CC] its characteristic exponent is $$\label{phiup}
\Phi^\uparrow(\lambda)=ia^\uparrow\lambda+\int_{\mathbb{R}}[e^{i\lambda
x}-1-i\lambda(e^x-1){\mbox{\rm 1\hspace{-0.04in}I}}_{\{|e^x-1|<1\}}]\pi^\uparrow(x)\,{\textrm{d}}x\,,
\;\;\;\lambda\in\mathbb{R}\,,$$ where $a^\uparrow$ is a real constant and $$\pi^\uparrow(x)=\frac{c_+e^{(\alpha\rho+1)x}}{(e^x-1)^{\alpha+1}}{\mbox{\rm 1\hspace{-0.04in}I}}_{\{x>0\}}+
\frac{c_-e^{(\alpha\rho+1)x}}{(1-e^x)^{\alpha+1}}{\mbox{\rm 1\hspace{-0.04in}I}}_{\{x<0\}}\,.$$
The third class of pssMp that we will consider is that of stable processes conditioned to hit 0 continuously. Processes in this class are again defined via a Doob $h$-transform, now with respect to the function $h'(x)=\alpha\rho
x^{\alpha\rho-1}$ which is also invariant for the killed process $(X_t{\mbox{\rm 1\hspace{-0.04in}I}}_{\{t<T\}},\mathbf{P}_x)$. Then the law ${\Bbb{P}}^\downarrow_x$ which is defined on each $\sigma$-field $\mathcal{F}_t$ by $$\label{downarrow}
\left.\frac{{\textrm{d}}\mathbb{P}^\downarrow_x}{{\textrm{d}}\mathbf{P}_x}\right|_{\mathcal{F}_t}
=
\frac{X^{\alpha\rho-1}_t}{x^{\alpha\rho-1}}\mathbf{1}_{\{t<T\}}$$ is that of a pssMp which hits 0 in a continuous way, i.e. $(X,{\Bbb{P}}_x^\downarrow)$ satisfies condition $(ii)$. Let $\xi^\downarrow$ be the underlying Lévy process in the Lamperti representation of this process; then $$\lim_{t\rightarrow+\infty}\xi^\downarrow_t=-\infty\qquad \textrm{a.s.},$$ and the characteristic exponent of $\xi^\downarrow$ is given by $$\label{phidown}
\Phi^\downarrow(\lambda)=ia^\downarrow\lambda+\int_{\mathbb{R}}[e^{i\lambda
x}-1-i\lambda(e^x-1){\mbox{\rm 1\hspace{-0.04in}I}}_{\{|e^x-1|<1\}}]\pi^\downarrow(x)\,{\textrm{d}}x\,,
\;\;\;\lambda\in\mathbb{R}\,,$$ where $a^\downarrow$ is a constant and $$\pi^\downarrow(x)=\frac{c_+e^{\alpha\rho x}}{(e^x-1)^{\alpha+1}}{\mbox{\rm 1\hspace{-0.04in}I}}_{\{x>0\}}+
\frac{c_-e^{\alpha\rho x}}{(1-e^x)^{\alpha+1}}{\mbox{\rm 1\hspace{-0.04in}I}}_{\{x<0\}}\,.$$ Note that the constants $a^*$, $a^\uparrow$ and $a^\downarrow$ are computed explicitly in [@CC] in terms of $\alpha$, $\rho$, $c_-$ and $c_+$. Actually the process $\xi^\downarrow$ corresponds to $\xi^\uparrow$ conditioned to drift toward $-\infty$ (or equivalently $\xi^\uparrow$ is $\xi^\downarrow$ conditioned to drift to $+\infty$). We will sometimes use this relationship, which is stated more formally in the next proposition. In the sequel, $P$ will be a reference probability measure on ${\cal D}$ under which $\xi^*$, $\xi^\uparrow$ and $\xi^\downarrow$ are Lévy processes whose respective laws are defined above.
\[4590\] For every $t\ge0$, and every bounded measurable function $f$, $$E[f(\xi^\uparrow_t)]=
E[\exp({\xi_t^\downarrow})f(\xi^\downarrow_t)]\,.$$ In particular, processes $-\xi^\uparrow$ and $\xi^\downarrow$ satisfy Cramer’s condition: $E(\exp{-\xi_1^\uparrow})=1$ and $E(\exp{\xi_1^\downarrow})=1$.
[*Proof*]{}. Let $f$ be as in the statement. From (\[uparrow\]) and (\[downarrow\]), we deduce that for every $\mathbb{P}^\downarrow_x$-a.s. finite $({\cal F}_u)$-stopping time $U$, $$\label{3521}
x\mathbb{E}^\uparrow_x[f(X_U)]=
\mathbb{E}^\downarrow_x[X_Uf(X_U)]\,.$$ Let $t\ge0$. By applying (\[3521\]) to the $({\cal F}_u)$-stopping time $$x^{\alpha}\inf\Big\{u:\tau(u)>t\Big\},$$ which is $\mathbb{P}^\downarrow_x$-a.s. finite, and using (\[lamp\]) (note that $\tau(u)$ is continuous and increasing), we obtain $$E[f(\xi^\uparrow_t)]=
E[\exp(\xi_t^\downarrow)f(\xi^\downarrow_t)]\,,$$ which is the desired result. We refer to Rivero [@ri], IV.6.1 for a similar discussion on conditioned stable processes considered as pssMp. In the sequel we call $\xi^*$, $\xi^\uparrow$ and $\xi^\downarrow$ the [*Lévy-Lamperti processes*]{}. We now compute the law of some of their functionals.
Entrance laws for Lévy-Lamperti processes: intervals
====================================================
In this section, by studying the two sided exit problems for $\xi^\uparrow$, $\xi^*$ and $\xi^\downarrow$, we shall obtain a variety of new identities including the identification of two new scale functions in the case of one-sided jumps.
To this end, we shall start with a generic result pertaining to any positive self-similar Markov process $(X,\mathbb{P}_x)$, for $x>0$. Recall that $P$ is the reference probability measure on $D$. Let $\xi$ be a Lévy process starting from $0$, under $P$, with the same law as the underlying Lévy process associated to $(X,\mathbb{P}_x)$. For any $y\in\mathbb{R}$ let $$T^+_y=\inf\{t:\xi_t\ge y\}\;\;\mbox{and}\;\;T_y^-=\inf\{t:\xi_t\le
y\}\,,$$ and for any $y>0$ let $$\sigma^+_y=\inf\{t:X_t\ge
y\}\;\;\mbox{and}\;\;\sigma_y^-=\inf\{t:X_t\le y\}.$$
\[generic\]Fix $-\infty< v<0<u<\infty$. Suppose that $A$ is any interval in $[u,\infty)$ and $B$ is any interval in $(-\infty, v]$. Then, $$P\Big(\xi_{T^+_u}\in A; T^+_u< T^-_v\Big) =
\mathbb{P}_1\Big(X_{\sigma^+_{e^u}} \in e^A; \sigma^+_{e^u}< \sigma^-_{e^v}\Big)$$ and $$P\Big(\xi_{T^-_v}\in B; T^+_u> T^-_v\Big) =
\mathbb{P}_1\Big(X_{\sigma^-_{e^v}} \in e^B; \sigma^+_{e^u}>
\sigma^-_{e^v}\Big).$$
The proof is a straightforward consequence of the Lamperti representation (\[lamp\]) and is left as an exercise. Although somewhat obvious, this lemma indicates that for the three processes $\xi^\uparrow$, $\xi^*$ and $\xi^\downarrow$, we need to understand how, respectively, an $\alpha$-stable process conditioned to stay positive, an $\alpha$-stable process killed when it exits the positive half-line and an $\alpha$-stable process conditioned to hit the origin continuously exit a positive interval around $x>0$. Fortunately this is possible thanks to Rogozin [@ro], who established the following result for $\alpha$-stable processes.
\[rogozin\] Suppose that $(X,\mathbf{P}_x)$ is an $\alpha$-stable process, initiated from $x$, which has two sided jumps. Denoting $\rho
=\mathbf{P}_0(X_1 <0)$ we have for $a>0$ and $x\in (0,a)$, $$\begin{aligned}
&&\hspace{-1cm}\mathbf{P}_x\Big(X_{\sigma^+_a} - a \in {\textrm{d}}y; \sigma^+_a < \sigma^-_0\Big) \\
&&\hspace{1cm}= \frac{\sin\pi\alpha(1-\rho)}{\pi} (a-
x)^{\alpha(1-\rho)}x^{\alpha\rho}y^{-\alpha(1-\rho)}(y+a)^{-\alpha\rho}(y+a-x)^{-1}{\textrm{d}}y\end{aligned}$$
Note that an expression for $ \mathbf{P}_x(-X_{\sigma^-_0}
\in {\textrm{d}}y; \sigma^+_a > \sigma^-_0)$ can be derived from the above expression by replacing $x$ by $a-x$ and $\rho$ by $1-\rho$.
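As a numerical sanity check of Theorem \[rogozin\], the overshoot density can be integrated over $y\in(0,\infty)$; the total mass equals $\mathbf{P}_x(\sigma^+_a<\sigma^-_0)$ and must lie strictly between 0 and 1 (and equals $1/2$ by symmetry when $\rho=1/2$ and $x=a/2$). A rough sketch using a log-spaced trapezoidal grid to handle the integrable singularity at $y=0$ (our own discretization choices):

```python
import math

def rogozin_density(y, x, a, alpha, rho):
    """Overshoot density of an alpha-stable process exiting (0, a) upward."""
    return (math.sin(math.pi * alpha * (1 - rho)) / math.pi
            * (a - x) ** (alpha * (1 - rho)) * x ** (alpha * rho)
            * y ** (-alpha * (1 - rho)) * (y + a) ** (-alpha * rho)
            / (y + a - x))

def upward_exit_probability(x, a, alpha, rho, n=4000):
    """P_x(sigma^+_a < sigma^-_0) by the trapezoidal rule on a log-spaced grid."""
    grid = [10.0 ** (-8 + 12 * k / n) for k in range(n + 1)]  # y in [1e-8, 1e4]
    total = 0.0
    for y0, y1 in zip(grid, grid[1:]):
        f0 = rogozin_density(y0, x, a, alpha, rho)
        f1 = rogozin_density(y1, x, a, alpha, rho)
        total += 0.5 * (f0 + f1) * (y1 - y0)
    return total

# Symmetric case started at the midpoint: the exit probability should be ~1/2,
# up to the mass truncated below y = 1e-8 and above y = 1e4.
p = upward_exit_probability(x=0.5, a=1.0, alpha=1.5, rho=0.5)
```

The small shortfall from $1/2$ comes from the truncated neighbourhood of the $y^{-\alpha(1-\rho)}$ singularity and the far tail, both of which shrink as the integration limits are widened.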
In the sequel, with an abuse of notation, we will denote by $T^+_y$ and $T^-_y$ the first passage times above and below $y\in {\mbox{\rm I\hspace{-0.02in}R}}$, respectively, of the processes $\xi^\uparrow, \xi^*$ or $\xi^\downarrow$, depending on the case that we are studying.
We now proceed to split the remainder of this section into three subsections dealing with the two sided exit problem and its ramifications for the three processes $\xi^\uparrow, \xi^*$ and $\xi^\downarrow$ respectively.
Calculations for $\xi^\uparrow$ {#xiup}
-------------------------------
The two-sided exit problem for $\xi^\uparrow$ can be obtained from Lemma \[generic\] and Theorem \[rogozin\] as follows. We give the case of two-sided jumps. Note that this is not a restriction, as the two-sided exit functionals we consider are weakly continuous in the Skorokhod space. Therefore, by taking limits as $\alpha(1-\rho)\rightarrow 1$ or $\alpha\rho\rightarrow 1$, we deduce identities for the cases that $\xi^\uparrow$ is spectrally negative and spectrally positive respectively. Note that necessarily in the spectrally one-sided case $\alpha\in(1,2)$.
Fix $\theta\geq 0$ and $-\infty<v<0<u<\infty$. $$\begin{aligned}
&&\hspace{-1cm}P\Big(\xi^\uparrow_{T^+_u}- u\in {\textrm{d}}\theta; T^+_u< T^-_v\Big) \\
&&=
\frac{\sin\pi\alpha(1-\rho)}{\pi} (e^u-1)^{\alpha(1-\rho)}(1-e^v)^{\alpha\rho}\\
&&\hspace{1cm}\times
(e^{u+\theta} )^{\alpha\rho +1}(e^{u+\theta} -e^u)^{ -\alpha(1-\rho)}(e^{u+\theta}
-e^v)^{-\alpha\rho}(e^{u+\theta} -1)^{-1}{\textrm{d}}\theta\end{aligned}$$ and $$\begin{aligned}
&&\hspace{-1cm}P\Big(v-\xi^\uparrow_{T^-_v}\in {\textrm{d}}\theta; T^+_u> T^-_v\Big) \\
&&=\frac{\sin\pi\alpha\rho}{\pi} (1-e^v)^{\alpha\rho}(e^u -1)^{\alpha(1-\rho)}\\
&&\hspace{1cm}\times
( e^{v-\theta} )^{\alpha\rho+1}(e^{v} -e^{v-\theta})^{ -\alpha\rho}
(e^u -e^{v-\theta})^{-\alpha(1-\rho)}(1-e^{v-\theta})^{-1}{\textrm{d}}\theta.\end{aligned}$$
[*Proof.*]{} Recall that $(X,\mathbf{P}_1)$ denotes an $\alpha$-stable process initiated from $1$ and that $(X,\mathbb{P}^\uparrow_1)$ is an $\alpha$-stable process conditioned to stay positive initiated from $1$. From Lemma \[generic\], we have for $\theta\geq 0$ $$\begin{aligned}
&&\hspace{-1cm} P\Big(\xi^\uparrow_{T^+_u}\leq u+\theta; T^+_u< T^-_v\Big)\\
&&=\mathbb{P}^\uparrow_1\Big(X_{\sigma^+_{e^u}}\in[e^u, e^{u+\theta}]; \, \sigma^+_{e^u}< \sigma^-_{e^v}\Big)\\
&&=\int_{0}^{e^{u+\theta}
-e^u}(y+e^u)^{\alpha\rho}\mathbf{P}_1\Big(X_{\sigma^+_{e^u}} -e^u\in
{\textrm{d}}y;
\sigma^+_{e^u}< \sigma^-_{e^v}\Big)\\
&&=\int_{0}^{e^{u+\theta} -e^u}(y+e^u)^{\alpha\rho}\mathbf{P}_{1-e^v}
\Big(X_{\sigma^+_{(e^u-e^v)}} -(e^u-e^v)\in {\textrm{d}}y; \sigma^+_{(e^u-e^v)}< \sigma^-_{0}\Big)\\
&&=\frac{\sin\pi\alpha(1-\rho)}{\pi} (e^u-1)^{\alpha(1-\rho)}(1-e^v)^{\alpha\rho}\\
&&\hspace{1cm}\times\int_{0}^{e^{u+\theta} -e^u} (y+ e^u
)^{\alpha\rho}y^{ -\alpha(1-\rho)}(y+e^u -e^v)^{-\alpha\rho}(y+ e^u
-1)^{-1}{\textrm{d}}y\end{aligned}$$ from which the first part of the theorem follows.
The second part of the theorem can be proved in a similar way. Indeed for $\theta \geq 0$ $$\begin{aligned}
&&\hspace{-1cm} P\Big(\xi^\uparrow_{T^-_v}\geq v-\theta; T^+_u> T^-_v\Big)\\
&&=\mathbb{P}^\uparrow_1\Big(X_{\sigma^-_{e^v}}\in[e^{v-\theta}, e^{v}]; \, \sigma^+_{e^u}> \sigma^-_{e^v}\Big)\\
&&=\int_{0}^{e^{v}
-e^{v-\theta}}(e^v-y)^{\alpha\rho}\mathbf{P}_1\Big(e^v-X_{\sigma^-_{e^v}}
\in {\textrm{d}}y;
\sigma^+_{e^u}> \sigma^-_{e^v}\Big)\\
&&=\int_{0}^{e^{v} -e^{v-\theta}
}(e^v-y)^{\alpha\rho}\mathbf{P}_{1-e^v}\Big(-X_{\sigma^-_{0}} \in
{\textrm{d}}y;
\sigma^+_{(e^u-e^v)}> \sigma^-_{0}\Big)\\
&&=\frac{\sin\pi\alpha\rho}{\pi} (1-e^v)^{\alpha\rho}(e^u -1)^{\alpha(1-\rho)}\\
&&\hspace{1cm}\times\int_{0}^{e^{v} -e^{v-\theta}} ( e^v-y
)^{\alpha\rho}y^{ -\alpha\rho}(y+e^u
-e^v)^{-\alpha(1-\rho)}(y+1-e^v)^{-1}{\textrm{d}}y.\end{aligned}$$ This completes the proof.
Note that since the process $(X,\mathbb{P}^\uparrow_x)$, $x>0$, is an $\alpha$-stable process conditioned to stay positive, it follows that, in the case of two-sided jumps, it almost surely does not creep out of the interval $(v,u)$. That is to say, $$P\Big(\xi^\uparrow_{T^+_u}= u; T^+_u< T^-_v\Big)
=P\Big(\xi^\uparrow_{T^-_v}= v; T^+_u> T^-_v\Big) =0.$$
Taking $v\downarrow-\infty$ in the first part of the above theorem and $u\uparrow\infty$ in the second part we obtain the solution to the one-sided exit problem as follows.
\[onesideduparrow0\] Fix $\theta\geq 0$ and $-\infty<v<0<u<\infty$. $$\begin{aligned}
&&\hspace{-1cm}P\Big(\xi^\uparrow_{T^+_u}- u\in {\textrm{d}}\theta; T^+_u <\infty\Big) \\
&&= \frac{\sin\pi\alpha(1-\rho)}{\pi} (e^u-1)^{\alpha(1-\rho)}
e^{u+\theta} (e^{u+\theta} -e^u)^{ -\alpha(1-\rho)}(e^{u+\theta} -1)^{-1}{\textrm{d}}\theta\end{aligned}$$ and $$\begin{aligned}
&&\hspace{-1cm}P\Big(v-\xi^\uparrow_{T^-_v}\in {\textrm{d}}\theta; T^-_v<\infty\Big) \\
&&=\frac{\sin\pi\alpha\rho}{\pi} (1-e^v)^{\alpha\rho}
( e^{v-\theta} )^{\alpha\rho+1}(e^{v}
-e^{v-\theta})^{ -\alpha\rho}(1-e^{v-\theta})^{-1}{\textrm{d}}\theta.\end{aligned}$$
To give some credibility to these identities, and for future reference, let us check that we may recover the identity in Caballero and Chaumont [@CC] for the law of the minimum.
Let $\underline{\xi}^\uparrow_\infty=\displaystyle\inf_{t\ge
0}\xi^\uparrow_t$. For $z\geq 0$, $$P\Big(-\underline{\xi}^\uparrow_\infty \leq z\Big) =
(1-e^{-z})^{\alpha\rho}.$$
[*Proof*]{}. The required probability may be identified as $ P(T^-_{-z}=\infty)$ and hence, since there is no creeping below the level $-z$, $$\begin{aligned}
&&P\Big(-\underline{\xi}^\uparrow_\infty \leq z\Big)\\
&&= 1-\frac{\sin\pi\alpha\rho}{\pi} (1-e^{-z})^{\alpha\rho}
\int_0^\infty ( e^{-z-\theta} )^{\alpha\rho+1}(e^{-z}
-e^{-z-\theta})^{ -\alpha\rho}(1-e^{-z-\theta})^{-1}{\textrm{d}}\theta\\
&&=1-\frac{\sin\pi\alpha\rho}{\pi} (1-e^{-z})^{\alpha\rho}
\int_0^\infty ( e^{-\theta} )^{\alpha\rho+1}
(1-e^{-\theta})^{-\alpha\rho}(e^z-e^{-\theta})^{-1}{\textrm{d}}\theta.\end{aligned}$$ Next note that the integral on the right-hand side satisfies $$\begin{aligned}
&&\hspace{-1cm} \int_0^\infty ( e^{-\theta} )^{\alpha\rho+1}
(1 -e^{-\theta})^{ -\alpha\rho}(e^z-e^{-\theta})^{-1}{\textrm{d}}\theta\notag\\
&&=\int_1^\infty \frac{e^{-z}}{y(y-1)^{\alpha\rho} (y-e^{-z})}{\textrm{d}}y\notag\\
&&=\int_0^\infty \frac{e^{-z}}{(u+1)u^{\alpha\rho} (u+1-e^{-z})}{\textrm{d}}u\notag\\
&&=\int_0^\infty\left\{ \frac{1}{u^{\alpha\rho} (u+1-e^{-z})} -
\frac{1}{(u+1)u^{\alpha\rho} }\right\}{\textrm{d}}u\notag\\
&&=(1-e^{-z})^{-\alpha\rho}\int_0^\infty \frac{1}{v^{\alpha\rho}
(v+1)}{\textrm{d}}v - \int_0^\infty\frac{1}{(u+1)u^{\alpha\rho} }{\textrm{d}}u\notag\\
&&=[(1-e^{-z})^{-\alpha\rho} -1
]\int_0^\infty\frac{1}{(u+1)u^{\alpha\rho} }{\textrm{d}}u \label{ar-I}\end{aligned}$$ where in the first equality we have applied the change of variable $y=e^\theta$, in the second equality $y=u+1$, and in the fourth equality $u=(1-e^{-z})v$. Note also that, by writing $w=(u+1)^{-1}$, we discover that $$\int_0^\infty\frac{1}{(u+1)u^{\alpha\rho} }{\textrm{d}}u = \int_0^1
(1-w)^{-\alpha\rho}w^{\alpha\rho-1}{\textrm{d}}w = \Gamma(1-\alpha\rho)\Gamma(\alpha\rho) =
\frac{\pi}{\sin\pi\alpha\rho}.
\label{ar-II}$$ In conclusion we deduce that $$\int_0^\infty ( e^{-\theta} )^{\alpha\rho+1}(1 -e^{-\theta})^{ -\alpha\rho}
(e^z-e^{-\theta})^{-1}{\textrm{d}}\theta =
[(1-e^{-z})^{-\alpha\rho} -1 ] \frac{\pi}{\sin\pi\alpha\rho}$$ and hence the required identity holds.
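The chain of substitutions above can be checked numerically. The following sketch (ours; it assumes `scipy`) verifies, for two sample values of $\alpha\rho\in(0,1)$, that the integral equals $[(1-e^{-z})^{-\alpha\rho}-1]\pi/\sin\pi\alpha\rho$, and hence that $P(-\underline{\xi}^\uparrow_\infty \leq z)=(1-e^{-z})^{\alpha\rho}$.

```python
# Numerical check of the integral identity behind the law of the minimum.
from math import sin, pi, exp
from scipy.integrate import quad

def check(ar, z):
    # ar plays the role of alpha*rho, z > 0
    integrand = lambda t: exp(-(ar + 1) * t) * (1 - exp(-t)) ** (-ar) / (exp(z) - exp(-t))
    integral = quad(integrand, 0, 1)[0] + quad(integrand, 1, float("inf"))[0]
    closed = ((1 - exp(-z)) ** (-ar) - 1) * pi / sin(pi * ar)
    prob = 1 - (sin(pi * ar) / pi) * (1 - exp(-z)) ** ar * integral
    return abs(integral - closed), abs(prob - (1 - exp(-z)) ** ar)

err1, err2 = check(0.75, 0.7)   # alpha*rho = 0.75
err3, err4 = check(0.40, 2.0)   # alpha*rho = 0.40
```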
Finally, to complete this subsection, when $(X,{\Bbb{P}}^\uparrow_1)$ is a spectrally negative process we also gain some information concerning the scale function, $W^{\uparrow, {\rm n}}$, of its underlying Lévy process, denoted here by $\xi^{\uparrow,{\rm n}}$. Specifically, in that case it is known that $1-\rho=1/\alpha$ (and $\alpha\in(1,2)$) and that $P(-\underline{\xi}^{\uparrow,{\rm n}}_\infty
\leq x) = mW^{\uparrow, {\rm n}}(x)$, where $m=E(\xi^{\uparrow,{\rm n}}_1)$. This implies $$W^{\uparrow, {\rm n}}(x) = \frac{1}{m}(1 - e^{-x})^{\alpha\rho}
= \frac{1}{m}(1 - e^{-x})^{\alpha - 1}.$$ Recall that for a given spectrally negative Lévy process it is known that the Laplace transform of the scale function is given by the reciprocal of the associated Laplace exponent (see for instance Theorem VII.8 in Bertoin [@Be]). We can therefore compute the Laplace exponent $\psi^\uparrow(\theta) =
\log E(e^{\theta \xi^{\uparrow,{\rm n}}_1})$ for $\theta\geq 0$, as follows: $$\begin{aligned}
\psi^\uparrow(\theta) &=&m \left(\int_0^\infty e^{-\theta x}
(1- e^{-x})^{\alpha-1}{\textrm{d}}x\right)^{-1}\notag\\
&=&m\left(\int_0^1 u^{\theta - 1}(1-u)^{\alpha -1}{\textrm{d}}u\right)^{-1}\notag=m\frac{\Gamma(\theta+\alpha)}{\Gamma(\theta)\Gamma(\alpha)}.\label{-1}\end{aligned}$$
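The Beta-integral evaluation in (\[-1\]) is easy to confirm numerically. The sketch below is ours (it assumes `scipy`) and checks that $\int_0^\infty e^{-\theta x}(1-e^{-x})^{\alpha-1}\,{\textrm{d}}x=\Gamma(\theta)\Gamma(\alpha)/\Gamma(\theta+\alpha)$ for sample parameter values.

```python
# Numerical check of the Laplace transform of the scale function W^{up,n}.
from math import exp
from scipy.integrate import quad
from scipy.special import gamma

alpha, theta = 1.5, 0.8
lhs = quad(lambda x: exp(-theta * x) * (1 - exp(-x)) ** (alpha - 1), 0, float("inf"))[0]
rhs = gamma(theta) * gamma(alpha) / gamma(theta + alpha)
```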
Knowledge of the scale function allows us to write a stronger result than that given in Corollary \[onesideduparrow0\], as follows.
Let $\underline{\xi}^{\uparrow,{\rm n}}_t=\displaystyle\inf_{0\le s\le
t}\xi^{\uparrow,{\rm n}}_s$. For $v<0$, $\theta\geq 0, \phi\geq \eta$ and $\eta\in[0,-v]$ we have $$\begin{aligned}
&& P\Big(v- \xi^{\uparrow,{\rm n}}_{T^-_v} \in {\textrm{d}}\theta,
\xi^{\uparrow,{\rm n}}_{T^-_v - }-v\in
{\textrm{d}}\phi, \underline{ \xi}^{\uparrow,{\rm n}}_{T^-_v - }-v\in {\textrm{d}}\eta\Big)\\
&&= K^{-1} \, (1 - e^{v+\eta})^{\alpha -
2}(e^{v+\eta})(e^{-\theta-\phi})^{\alpha}(1-e^{-\theta-\phi})^{-1-\alpha}{\textrm{d}}\theta
{\textrm{d}}\phi {\textrm{d}}\eta,
\end{aligned}$$ where $$K=\frac{e^{(\alpha-2)v}}{\alpha(\alpha-1)}\int_1^{e^{-v}} \frac{(e^{-v}-y)}{y(y-1)^{\alpha-1}}{\textrm{d}}y-\frac{(1-e^v)^{\alpha-1}}{\alpha(\alpha-1)}\frac{\pi}{\sin \pi(\alpha-1)}.$$
[*Proof*]{}. First recall that the process $\xi^{\uparrow,{\rm n}}$ drifts towards $+\infty$ a.s. Taking this into account, we have from Example 8 of Doney and Kyprianou [@dk] that the required probability is proportional to $$W^{\uparrow, {\rm n}}(-v-{\textrm{d}}\eta)\pi^\uparrow(-\theta -\phi){\textrm{d}}\theta {\textrm{d}}\phi.$$ Hence the triple law of interest has a density with respect to ${\textrm{d}}\theta {\textrm{d}}\phi {\textrm{d}}\eta$ which is proportional to $$(1 -
e^{v+\eta})^{\alpha -
2}(e^{v+\eta})(e^{-\theta-\phi})^{\alpha}(1-e^{-\theta-\phi})^{-1-\alpha}.$$ For convenience let us write the constant of proportionality as $K^{-1}$. As $(X,{\Bbb{P}}^\uparrow_1)$ is derived from a spectrally negative stable process, it cannot creep downwards (cf. p175 of Bertoin [@Be]). This allows us to compute the unknown constant via the total probability formula and after a straightforward computation, we have $$\begin{aligned}
K& =&\int_0^{-v}\int_\eta^\infty\int_0^\infty(1 -
e^{v+\eta})^{\alpha - 2}
(e^{v+\eta})(e^{-\theta-\phi})^{\alpha}(1-e^{-\theta-\phi})^{-1-\alpha}{\textrm{d}}\theta
{\textrm{d}}\phi {\textrm{d}}\eta\\
&=&\frac{e^{(\alpha-2)v}}{\alpha(\alpha-1)}\int_1^{e^{-v}} \frac{(e^{-v}-y)}{y(y-1)^{\alpha-1}}{\textrm{d}}y-\frac{(1-e^v)^{\alpha-1}}{\alpha(\alpha-1)}\frac{\pi}{\sin \pi(\alpha-1)}\end{aligned}$$ and the proof is complete.
Calculations for $\xi^*$
------------------------
Henceforth we shall assume that $(X,\mathbb{P}_x)$ is an $\alpha$-stable process killed on first exit from the positive half-line, starting from $x>0$. As before, unless otherwise stated, we shall assume that there are two-sided jumps; spectrally one-sided results may be obtained as limiting cases of the two-sided case. We start with the two- and one-sided exit problems, the latter being a limiting case of the former. We offer no proofs, as the calculations are essentially the same as before.
Fix $\theta\geq 0$ and $-\infty<v<0<u<\infty$. $$\begin{aligned}
&&\hspace{-1cm}P\Big(\xi^*_{T^+_u}- u\in {\textrm{d}}\theta; T^+_u< T^-_v\Big) \\
&&=
\frac{\sin\pi\alpha(1-\rho)}{\pi} (e^u-1)^{\alpha(1-\rho)}(1-e^v)^{\alpha\rho}\\
&&\hspace{1cm}\times
(e^{u+\theta} )(e^{u+\theta} -e^u)^{ -\alpha(1-\rho)}(e^{u+\theta}
-e^v)^{-\alpha\rho}(e^{u+\theta} -1)^{-1}{\textrm{d}}\theta\end{aligned}$$ and $$\begin{aligned}
&&\hspace{-1cm}P\Big(v-\xi^*_{T^-_v}\in {\textrm{d}}\theta; T^+_u> T^-_v\Big) \\
&&=\frac{\sin\pi\alpha\rho}{\pi} (1-e^v)^{\alpha\rho}(e^u -1)^{\alpha(1-\rho)}\\
&&\hspace{1cm}\times
( e^{v-\theta} )(e^{v} -e^{v-\theta})^{ -\alpha\rho}
(e^u -e^{v-\theta})^{-\alpha(1-\rho)}(1-e^{v-\theta})^{-1}{\textrm{d}}\theta.\end{aligned}$$
Fix $\theta\geq 0$ and $-\infty<v<0<u<\infty$. $$\begin{aligned}
&&\hspace{-1cm}P\Big(\xi^*_{T^+_u}- u\in {\textrm{d}}\theta; T^+_u<\infty\Big) \\
&&= \frac{\sin\pi\alpha(1-\rho)}{\pi} (e^u-1)^{\alpha(1-\rho)}
( e^{u+\theta} )^{1-\alpha\rho}(e^{u+\theta} -e^u)^{ -\alpha(1-\rho)}(e^{u+\theta} -1)^{-1}{\textrm{d}}\theta\end{aligned}$$ and $$\begin{aligned}
&&\hspace{-1cm}P\Big(v-\xi^*_{T^-_v}\in {\textrm{d}}\theta; T^-_v<\infty\Big) \\
&&=\frac{\sin\pi\alpha\rho}{\pi} (1-e^v)^{\alpha\rho}
( e^{v-\theta} )(e^{v}
-e^{v-\theta})^{ -\alpha\rho}(1-e^{v-\theta})^{-1}{\textrm{d}}\theta.\end{aligned}$$
One may think of computing the distribution of the maximum, $\overline{\xi}^*_\infty$, and the minimum, $\underline{\xi}^*_\infty$, of $\xi^*$ in a similar way to the previous section by integrating out $u$ and $v$ in the above corollary. The law of the minimum was already computed in Caballero and Chaumont [@CC] and we refrain from reproducing the alternative computations here. For the maximum, an easier approach is at hand. Since $\xi^*$ is derived from a stable process killed on first exit from the positive half-line, one may write $$P\Big(\overline{\xi}^*_\infty \leq z\Big)=P\Big(\exp\{\overline{\xi}^*_\infty\} \leq e^z\Big) =
\mathbf{P}_1(\sigma^+_{e^z} > \sigma^-_0) = \mathbf{P}_{e^{-z}}(\sigma^+_1>\sigma^-_0).$$ The probability on the right-hand side above may be obtained from Theorem \[rogozin\] by a straightforward integration. The latter calculation has, however, already been performed in Rogozin [@ro] and is equal to $$\frac{\Gamma(\alpha)}{ \Gamma(\alpha\rho)\Gamma(\alpha(1-\rho))}\int_0^{1-e^{-z}} y^{\alpha\rho-1}
(1-y)^{\alpha (1-\rho) -1}{\textrm{d}}y.$$ Hence, together with the result for the minimum from Caballero and Chaumont [@CC], which we include for completeness, we have the following corollary.
For $z \geq 0$ we have that $$P\Big(\overline{\xi}^*_\infty \in {\textrm{d}}z\Big)= \frac{\Gamma(\alpha)}{\Gamma(\alpha\rho)\Gamma(\alpha(1-\rho))}
(e^{-z})^{\alpha(1-\rho)}(1- e^{-z})^{\alpha\rho - 1}{\textrm{d}}z$$ and $$P\Big(-\underline{\xi}^*_\infty \in {\textrm{d}}z\Big) = \frac{1}{\Gamma(\alpha\rho)\Gamma(1-\alpha\rho)}
(e^z-1)^{\alpha\rho-1}{\textrm{d}}z.$$
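The first of these densities can be checked against its Beta normalization: the substitution $w=e^{-z}$ turns it into a Beta$(\alpha(1-\rho),\alpha\rho)$ integral, so it must integrate to one over $z\in(0,\infty)$. A numerical sketch (ours; assumes `scipy`):

```python
# Check that the density of the overall maximum of xi^* has total mass one.
from math import exp
from scipy.integrate import quad
from scipy.special import gamma

alpha, rho = 1.5, 0.4
const = gamma(alpha) / (gamma(alpha * rho) * gamma(alpha * (1 - rho)))
density = lambda z: const * exp(-z) ** (alpha * (1 - rho)) * (1 - exp(-z)) ** (alpha * rho - 1)
# split at 1 so quad resolves the integrable singularity at z = 0
mass = quad(density, 0, 1)[0] + quad(density, 1, float("inf"))[0]
```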
In the case that $\xi^*$ is spectrally one sided, it seems difficult to use the above result to extract information about any underlying scale functions. The reason for this is that the process $\xi^*$ is exponentially killed at a rate which is intimately linked to its underlying parameters and not at a rate which can be independently varied.
Calculations for $\xi^\downarrow$
---------------------------------
Henceforth we shall assume that $(X,\mathbb{P}^\downarrow_x)$ is an $\alpha$-stable process conditioned to hit zero continuously, starting from $x>0$. Again, unless otherwise stated, we shall assume that there are two-sided jumps; spectrally one-sided results may be obtained as limiting cases. We follow the same programme as in the previous two sections, dealing with the two- and one-sided exit problems without offering proofs, since they follow from the calculations for $\xi^\uparrow$ and Proposition \[4590\].
Fix $\theta\geq 0$ and $-\infty<v<0<u<\infty$. $$\begin{aligned}
&&\hspace{-1cm}P\Big(\xi^\downarrow_{T^+_u}- u\in {\textrm{d}}\theta; T^+_u< T^-_v\Big) \\
&&= \frac{\sin\pi\alpha(1-\rho)}{\pi} (e^u-1)^{\alpha(1-\rho)}(1-e^v)^{\alpha\rho}\\
&&\hspace{1cm}\times
(e^{u+\theta} )^{\alpha\rho}(e^{u+\theta} -e^u)^{ -\alpha(1-\rho)}
(e^{u+\theta} -e^v)^{-\alpha\rho}(e^{u+\theta} -1)^{-1}{\textrm{d}}\theta\end{aligned}$$ and $$\begin{aligned}
&&\hspace{-1cm}P\Big(v-\xi^\downarrow_{T^-_v}\in {\textrm{d}}\theta; T^+_u> T^-_v\Big) \\
&&=\frac{\sin\pi\alpha\rho}{\pi} (e^u-1)^{\alpha(1-\rho)}(1-e^v)^{\alpha\rho}\\
&&\hspace{1cm}\times
( e^{v-\theta} )^{\alpha\rho}(e^{v} -e^{v-\theta})^{ -\alpha\rho}
(e^u -e^{v-\theta})^{-\alpha(1-\rho)}(1-e^{v-\theta})^{-1}{\textrm{d}}\theta.\end{aligned}$$
\[onesideduparrow\] Fix $\theta\geq 0$. $$\begin{aligned}
&&\hspace{-1cm}P\Big(\xi^\downarrow_{T^+_u}- u\in {\textrm{d}}\theta; T^+_u<\infty\Big) \\
&&= \frac{\sin\pi\alpha(1-\rho)}{\pi} (e^u-1)^{\alpha(1-\rho)}
(e^{u+\theta} -e^u)^{ -\alpha(1-\rho)}(e^{u+\theta} -1)^{-1}{\textrm{d}}\theta\end{aligned}$$ and $$\begin{aligned}
&&\hspace{-1cm}P\Big(v-\xi^\downarrow_{T^-_v}\in {\textrm{d}}\theta; T^-_v<\infty\Big) \\
&&=\frac{\sin\pi\alpha\rho}{\pi} (1-e^v)^{\alpha\rho}
( e^{v-\theta} )^{\alpha\rho}(e^{v} -e^{v-\theta})^{ -\alpha\rho}(1-e^{v-\theta})^{-1}{\textrm{d}}\theta.\end{aligned}$$
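Since $\xi^\downarrow$ drifts to $-\infty$ and, with two-sided jumps, cannot creep, $T^-_v$ is almost surely finite, so the second density above must carry total mass one. A numerical sketch (ours; symmetric case, assumes `scipy`):

```python
# Check that the one-sided down-crossing density integrates to one.
from math import sin, pi, exp
from scipy.integrate import quad

alpha, rho, v = 1.5, 0.5, -0.8
ar = alpha * rho

def density(t):
    w = exp(v - t)
    return (sin(pi * ar) / pi) * (1 - exp(v)) ** ar \
        * w ** ar * (exp(v) - w) ** (-ar) / (1 - w)

# split at 1 so quad resolves the integrable singularity at t = 0
mass = quad(density, 0, 1)[0] + quad(density, 1, float("inf"))[0]
```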
From the above corollary we proceed to obtain the law of the maximum of $\xi^\downarrow$ (recalling that it drifts to $-\infty$).
For $z\geq 0$ $$P\Big(\overline{\xi}^\downarrow_\infty \leq z\Big) =
(1-e^{-z})^{\alpha(1-\rho)}.$$
[*Proof*]{}. Similarly to the calculations in Section \[xiup\] we make use of the fact that $$P\Big(\overline{\xi}^\downarrow_\infty \leq z\Big) =
P(T^+_z =\infty).$$ Hence $$\begin{aligned}
&&\hspace{-1cm}P\Big(\overline{\xi}^\downarrow_\infty \leq z\Big) \\
&&=1 - \frac{\sin\pi\alpha(1-\rho)}{\pi}
(e^z-1)^{\alpha(1-\rho)}\int_0^\infty
(e^{z+\theta} -e^z)^{ -\alpha(1-\rho)}(e^{z+\theta} -1)^{-1}{\textrm{d}}\theta\\
&&=1 - \frac{\sin\pi\alpha(1-\rho)}{\pi} (1-e^{-z})^{\alpha(1-\rho)}
\int_0^\infty (e^{-\theta})^{\alpha(1-\rho)+1} (1-e^{-\theta} )^{
-\alpha(1-\rho)}(e^{z} -e^{-\theta})^{-1}{\textrm{d}}\theta.\end{aligned}$$ Next note that the integral on the right hand side above has been seen before in (\[ar-I\]) except for the case that $\rho$ is replaced by $1-\rho$. We thus obtain from (\[ar-I\]) and (\[ar-II\]) $$\int_0^\infty (e^{-\theta})^{\alpha(1-\rho)+1} (1-e^{-\theta} )^{
-\alpha(1-\rho)}(e^{z} -e^{-\theta})^{-1}d\theta
=[(1-e^{-z})^{-\alpha(1-\rho)} -1]\frac{\pi}{\sin\pi\alpha(1-\rho)}$$ and hence $$P\Big(\overline{\xi}^\downarrow_\infty \leq z\Big)
=(1-e^{-z})^{\alpha(1-\rho)}$$ as required.
Now, we suppose that $(X,{\Bbb{P}}^{\downarrow}_1)$ has only positive jumps, in which case $\rho =
1/\alpha$. We denote by $\xi^{\downarrow,{\rm p}}$ its underlying Lévy process in this particular case. The associated scale function of $\xi^{\downarrow,{\rm p}}$ can be identified as $$\label{duality}
E(-\xi^{\downarrow,{\rm p}}_1)W^{\downarrow,{\rm p}}(x) = P\Big(\overline{\xi}^{\downarrow,{\rm p}}_\infty \leq x\Big) = (1-
e^{-x})^{\alpha-1}=W^{\uparrow, {\rm n}}(x)E(\xi^{\uparrow, {\rm n}}_1),$$ where $W^{\uparrow, {\rm n}}$ is the scale function of the spectrally negative Lévy process $\xi^{\uparrow,{\rm n}}$.\
This observation reflects the duality property for positive self-similar Markov processes in this particular case (see section 2 in Bertoin and Yor [@BeY]). More precisely, we have the duality property between the resolvent operators of $(X,{\Bbb{P}}^{\uparrow}_{x})$, when it is spectrally negative, and $(X,{\Bbb{P}}^{\downarrow}_{x})$, for $x>0$.
From the identification of the scale function in (\[duality\]) and Lemma 2, it is possible to give a triple law for the process $\xi^{\downarrow,{\rm p}}$ at first passage over the level $x>0$.
Let $\overline{\xi}^{\downarrow,{\rm p}}_t=\displaystyle\sup_{0\le s\le
t}\xi^{\downarrow,{\rm p}}_s$. For $x>0$, $\theta\geq 0, \phi\geq \eta$ and $\eta\in[0,x]$ we have $$\begin{aligned}
&& P\Big(\xi^{\downarrow,{\rm p}}_{T^+_x}-x \in {\textrm{d}}\theta,
x-\xi^{\downarrow,{\rm p}}_{T^+_x - }\in
{\textrm{d}}\phi, x-\overline{ \xi}^{\downarrow, {\rm p}}_{T^+_x - }\in {\textrm{d}}\eta\Big)\\
&&= K^{-1} \, (1 - e^{-x+\eta})^{\alpha -
2}e^{-x+\eta}e^{\theta+\phi}(e^{\theta+\phi}-1)^{-1-\alpha}{\textrm{d}}\theta
{\textrm{d}}\phi {\textrm{d}}\eta,
\end{aligned}$$ where $$K=\frac{e^{-(\alpha-2)x}}{\alpha(\alpha-1)}\int_1^{e^{x}} \frac{(e^{x}-y)}{y(y-1)^{\alpha-1}}{\textrm{d}}y-\frac{(1-e^{-x})^{\alpha-1}}{\alpha(\alpha-1)}\frac{\pi}{\sin \pi(\alpha-1)}.$$
In the remainder of this section, we assume that $(X,{\Bbb{P}}^{\downarrow}_x)$ is spectrally negative, and we denote its underlying Lévy process by $\xi^{\downarrow, {\rm n}}$. The identification of the scale function of the Lévy process $\xi^{\uparrow,{\rm n}}$ and Proposition \[4590\] inspire the following result, which identifies the scale function of the Lévy process $\xi^{\downarrow, {\rm n}}$. We emphasize that $\xi^{\downarrow,{\rm n}}$ is in fact $\xi^{\uparrow,{\rm n}}$ conditioned to drift to $-\infty$.
The Laplace exponent of $\xi^{\downarrow,{\rm n}}$ satisfies $$\psi^{\downarrow}(\theta)
=m\frac{\Gamma(\theta-1+\alpha)}{\Gamma(\theta-1)\Gamma(\alpha)}$$ for $\theta\geq 0$, where $m=E(-\xi^{\downarrow,{\rm n}}_1)$. Moreover, its associated scale function may be identified as $$W^{\downarrow, {\rm n}}(x) = \frac{1}{m}(1-e^{-x})^{\alpha
-1}e^{ x}.$$
[*Proof*]{}. From the two-sided exit problem for spectrally negative Lévy processes (see for instance Chapter VII in Bertoin [@Be]), we know that $$P\Big(\underline{\xi}^{\uparrow, {\rm n}}_{T^{+}_y}>-x\Big)=\frac{W^{\uparrow, {\rm n}}(x)}{W^{\uparrow, {\rm n}}(x+y)}\qquad\textrm{ for }\,\, x,y>0.$$ On the other hand, from Proposition 1, we get that $$P\Big(\underline{\xi}^{\uparrow, {\rm n}}_{T^{+}_y}>-x\Big)=e^yP\Big(\underline{\xi}^{\downarrow, {\rm n}}_{T^{+}_y}>-x\Big)=e^y\frac{W^{\downarrow, {\rm n}}(x)}{W^{\downarrow,{\rm n}}(x+y)}.$$ Hence, from the form of the scale function of $\xi^{\uparrow,{\rm n}}$, it follows that $$P\Big(\underline{\xi}^{\downarrow, {\rm n}}_{T^{+}_y}>-x\Big)=\frac{W^{\downarrow, {\rm n}}(x)}{W^{\downarrow,{\rm n}}(x+y)}=\frac{e^x(1-e^{-x})^{\alpha-1}}{e^{x+y}(1-e^{-x-y})^{\alpha-1}}.$$ Taking $y$ to $\infty$, one deduces $$W^{\downarrow, {\rm n}}(x)=\frac{1}{m}e^x(1-e^{-x})^{\alpha-1}.$$ Finally, since $$\int_0^\infty e^{-\theta x}e^{ x}(1-e^{-x})^{\alpha-1}{\textrm{d}}x =
\int_{0}^1 u^{(\theta-1)-1}(1-u)^{\alpha-1}{\textrm{d}}u =
\frac{\Gamma(\theta-1)\Gamma(\alpha)}{\Gamma(\theta-1+\alpha)},$$ for $\theta>1$, it is clear from the Laplace transform (cf. Chapter 8 of Kyprianou [@Ky]) of the scale function that $$\psi^{\downarrow} (\theta)=m\frac{\Gamma(\theta-1+\alpha)}{\Gamma(\theta-1)\Gamma(\alpha)},\, \theta\geq 0,$$ is the associated Laplace exponent of $\xi^{\downarrow, {\rm n}}$.
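The Laplace transform used in the last step is easy to confirm numerically (our sketch; assumes `scipy`): for $\theta>1$, $\int_0^\infty e^{-\theta x}e^x(1-e^{-x})^{\alpha-1}\,{\textrm{d}}x=\Gamma(\theta-1)\Gamma(\alpha)/\Gamma(\theta-1+\alpha)$.

```python
# Numerical check of the Laplace transform of the scale function W^{down,n}.
from math import exp
from scipy.integrate import quad
from scipy.special import gamma

alpha, theta = 1.5, 2.3
lhs = quad(lambda x: exp((1 - theta) * x) * (1 - exp(-x)) ** (alpha - 1), 0, float("inf"))[0]
rhs = gamma(theta - 1) * gamma(alpha) / gamma(theta - 1 + alpha)
```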
One may also write down a triple law for the first passage problem of $\xi^{\downarrow,{\rm n}}$ as we have seen before for $\xi^{\uparrow,{\rm n}}$.
Let $\underline{\xi}^{\downarrow,{\rm n}}_t=\displaystyle\inf_{0\le s\le
t}\xi^{\downarrow,{\rm n}}_s$. For $v<0$, $\theta\geq 0, \phi\geq \eta$ and $\eta\in[0,-v]$ we have $$\begin{aligned}
&& P\Big(v- \xi^{\downarrow,{\rm n}}_{T^-_v} \in {\textrm{d}}\theta,
\xi^{\downarrow,{\rm n}}_{T^-_v - }-v\in
{\textrm{d}}\phi, \underline{ \xi}^{\downarrow,{\rm n}}_{T^-_v - }-v\in {\textrm{d}}\eta\Big)\\
&&= K^{-1} \, e^{-q(\phi-\eta)}(e^{-(v+\eta)}+\alpha-2)(1 - e^{v+\eta})^{\alpha -
2}(e^{-\theta-\phi})^{\alpha}(1-e^{-\theta-\phi})^{-1-\alpha}{\textrm{d}}\theta
{\textrm{d}}\phi {\textrm{d}}\eta,
\end{aligned}$$ where $q>0$ and $$K=\frac{1}{\alpha}\int_0^{-v} \int_\eta^{\infty} e^{-q(\phi-\eta)}(e^{-v-\eta}+\alpha-2)(1-e^{v+\eta})^{\alpha-2}(e^{\phi}-1)^{-\alpha}{\textrm{d}}\phi{\textrm{d}}\eta.$$
[*Proof*]{}. First recall that the process $\xi^{\downarrow,{\rm n}}$ drifts towards $-\infty$ a.s. Again, we have from Example 8 of Doney and Kyprianou [@dk] that the required probability is proportional to $$e^{-q(\phi-\eta)}W^{\downarrow, {\rm n}}(-v-{\textrm{d}}\eta)\pi^{\downarrow}(-\theta -\phi){\textrm{d}}\theta {\textrm{d}}\phi,$$ where $q>0$ is the killing rate of the descending ladder height process (see for instance Chapter VI in Bertoin [@Be] for a proper definition) of $\xi^{\downarrow,{\rm n}}$.\
Hence the triple law of interest has a density with respect to ${\textrm{d}}\theta {\textrm{d}}\phi {\textrm{d}}\eta$ which is proportional to $$e^{-q(\phi-\eta)}(1 -
e^{v+\eta})^{\alpha -
2}(e^{-(v+\eta)} +\alpha-2)(e^{-\theta-\phi})^{\alpha}(1-e^{-\theta-\phi})^{-1-\alpha}.$$ As $(X,{\Bbb{P}}^\downarrow_1)$ is derived from a spectrally negative stable process, it cannot creep downwards (cf. p175 of Bertoin [@Be]). This allows us to compute the unknown constant of proportionality $K^{-1}$ via the total probability formula and after a straightforward computation, we have $$\begin{aligned}
K& =&\int_0^{-v}\int_\eta^\infty\int_0^\infty e^{-q(\phi-\eta)}(1 -
e^{v+\eta})^{\alpha - 2}
(e^{-(v+\eta)}+\alpha-2)(e^{-\theta-\phi})^{\alpha}(1-e^{-\theta-\phi})^{-1-\alpha}{\textrm{d}}\theta
{\textrm{d}}\phi {\textrm{d}}\eta\\
&=&\frac{1}{\alpha}\int_0^{-v} \int_\eta^{\infty} e^{-q(\phi-\eta)}(e^{-v-\eta}+\alpha-2)(1-e^{v+\eta})^{\alpha-2}(e^{\phi}-1)^{-\alpha}{\textrm{d}}\phi{\textrm{d}}\eta\end{aligned}$$ and the proof is complete.
Entrance laws for Lévy-Lamperti processes: points
=================================================
In this section we explore the two-point hitting problem for the Lévy-Lamperti processes $\xi^\uparrow$ and $\xi^{\downarrow}$. There has been little work dedicated to this theme in the past, with the paper of Getoor [@Get] being our principal reference.
Henceforth we shall denote by $(X,\mathbf{P}_x)$ a [*symmetric*]{} $\alpha$-stable process issued from $x>0$, where $\alpha\in(1,2)$. An important quantity in the forthcoming analysis is the resolvent measure of the process $(X,\mathbf{P}_x)$ killed on exiting $(0,\infty)$, which is known to have a density $u(x,y)$ determined by $$u(x,y){\textrm{d}}y = \int_0^\infty {\textrm{d}}t \cdot \mathbf{P}_x(X_t \in {\textrm{d}}y, t< \sigma^-_0)$$ for $x,y>0$. From Blumenthal et al. [@Bl] we know that $$\int_0^\infty {\textrm{d}}t\cdot \mathbf{P}_x(X_t \in {\textrm{d}}y, t< \sigma^+_a\wedge \sigma^-_0)
= \left\{ \frac{|x- y|^{\alpha
-1}}{2^\alpha \Gamma(\alpha/2)} \int_0^{s(x,y,a)} \frac{u^{\alpha/2 -1}}{(u+1)^{1/2}}{\textrm{d}}u
\right\}{\textrm{d}}y$$ where $$s(x,y,a) = \frac{4xy}{(x-y)^2}\frac{(a-x)(a-y)}{a^2}.$$ It now follows, taking limits as $a\uparrow\infty$, that $$u(x,y)= \frac{1}{2^\alpha \Gamma(\alpha/2)} |x- y|^{\alpha
-1}\int_0^{4xy/(x-y)^2} \frac{u^{\alpha/2 -1}}{(u+1)^{1/2}}{\textrm{d}}u.$$ According to the method presented in Getoor [@Get], one may compute $$\mathbf{P}_x(X_{\sigma_{\{a,b\}}} =a; \, \sigma_{\{a,b\}} < \sigma^-_0)$$ where $\sigma_{\{a,b\}}= \inf\{t>0 : X_t = a \text{ or }b\}$ and $a,b>0$, using the following technique. The two-point hitting probability in Getoor [@Get] is given by the formula $$\mathbf{P}_x(X_{\sigma_{\{a,b\}}} =a; \, \sigma_{\{a,b\}} < \sigma^-_0) = -
\frac{q(x,a)}{q(x,x)}$$ where the $\{x,a,b\}\times\{x,a,b\}$-matrix $Q$ is defined by $$Q= - U^{-1} \label{inverse}$$ and the $\{x,a,b\}\times\{x,a,b\}$-matrix $U$ is given by $$U= \left(
\begin{array}{ccc}
u(x,x) & u(x,a) & u(x,b)\\
u(a,x) & u(a,a) & u(a,b) \\
u(b,x) & u(b,a) & u(b,b)
\end{array}
\right).$$ In particular, an easy computation shows that $$\mathbf{P}_x(X_{\sigma_{\{a,b\}}} =a; \, \sigma_{\{a,b\}} < \sigma^-_0) =\frac{\frac{u(x,a)}{u(b,a)} - \frac{u(x,b)}{u(b,b)}}{\frac{u(a,a)}{u(b,a)} - \frac{u(a,b)}{u(b,b)}}.
\label{hittingprob}$$
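The last display is a pure linear-algebra consequence of $Q=-U^{-1}$ and can be verified numerically. In the sketch below (ours; it assumes `numpy` and `scipy`), the multiplicative constant in $u(x,y)$ is dropped since it cancels in the ratio; on the diagonal we use the limiting value $u(x,x)=\frac{2}{\alpha-1}(2x)^{\alpha-1}$ (up to the same constant), obtained by letting $y\to x$ in the formula above.

```python
# Verify the closed-form hitting probability against the matrix formula.
import numpy as np
from scipy.integrate import quad

alpha = 1.5

def u(x, y):
    # resolvent density of the killed process, up to a multiplicative constant
    if x == y:
        return 2.0 / (alpha - 1) * (2 * x) ** (alpha - 1)
    upper = 4 * x * y / (x - y) ** 2
    integral = quad(lambda s: s ** (alpha / 2 - 1) / np.sqrt(1 + s), 0, upper)[0]
    return abs(x - y) ** (alpha - 1) * integral

x, a, b = 1.0, 0.5, 2.0
U = np.array([[u(x, x), u(x, a), u(x, b)],
              [u(a, x), u(a, a), u(a, b)],
              [u(b, x), u(b, a), u(b, b)]])
Q = -np.linalg.inv(U)
via_matrix = -Q[0, 1] / Q[0, 0]
closed_form = (u(x, a) / u(b, a) - u(x, b) / u(b, b)) \
    / (u(a, a) / u(b, a) - u(a, b) / u(b, b))
```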
Recalling the definitions of $\xi^\uparrow$ and $\xi^\downarrow$ as the Lévy-Lamperti processes associated now with our symmetric stable process conditioned to stay positive and conditioned to be killed continuously at the origin respectively we obtain the following result.
Fix $\alpha\in(1,2)$ and $-\infty<v<0<u<\infty$. Define $$T_{\{v,u\}}=\inf\{t>0 : \xi_t\in\{v,u\}\}$$ where $\xi$ plays the role of either $\xi^\uparrow$ or $\xi^\downarrow$. We have $$P\Big(\xi^\uparrow_{ T_{\{v,u\}} } =v\Big) = (e^v)^{\alpha/2} f(1, e^v, e^u)$$ and $$P\Big(\xi^\downarrow_{T_{\{v,u\}}} =v\Big) = (e^v)^{\alpha/2-1} f(1, e^v, e^u)$$ where $$f(x,a,b) = \frac{\frac{u(x,a)}{u(b,a)} - \frac{u(x,b)}{u(b,b)}}{\frac{u(a,a)}{u(b,a)} - \frac{u(a,b)}{u(b,b)}}.$$
Exponential functionals of Lévy-Lamperti processes
===================================================
We begin this section by recalling a crucial expression for the entrance law at 0 of pssMp’s. In [@beC; @BeY], the authors proved that if a non-arithmetic Lévy process $\xi$ satisfies $E(|\xi_1|)<\infty$ and $0<E(\xi_1)<+\infty$, then its corresponding pssMp $(X,{\Bbb{P}}_{x})$ in the Lamperti representation converges weakly as $x$ tends to $0$, in the sense of finite-dimensional distributions, towards a non-degenerate probability law ${\Bbb{P}}_0$. Under these conditions, the entrance law under ${\Bbb{P}}_0$ is described as follows: for every $t>0$ and every measurable function $f:{\mbox{\rm I\hspace{-0.02in}R}}_+\to
{\mbox{\rm I\hspace{-0.02in}R}}_+$, $$\label{entlaw}
{\Bbb{E}}_0\big(f(X_t)\big)=\frac{1}{\alpha E(\xi_1)}E\left(I(\xi)^{-1}
f\big(tI(\xi)^{-1}\big)\right)\,,$$ where $I(\xi)$ is the exponential functional: $$I(\xi)=\int_{0}^{\infty} \exp\{-\alpha \xi_{s}\}\,
{\textrm{d}}s.$$ Necessary and sufficient conditions for the weak convergence of $(X, {\Bbb{P}}_x)$ on the Skorokhod space were given in [@CCh]. Recall that $(X, {\Bbb{P}}^{\uparrow}_x)$ denotes a stable Lévy process conditioned to stay positive, as defined in Section \[prelim\]. Then we easily check that $\xi^\uparrow$ satisfies the conditions for the weak convergence of $(X,{\Bbb{P}}_x^{\uparrow})$ given in [@beC; @BeY; @CCh]. Note also that in this particular case, the weak convergence of $(X,{\Bbb{P}}^{\uparrow}_x)$ had been proved in a direct way in [@Ch]. We denote the limit law by ${\Bbb{P}}^{\uparrow}$.\
We first investigate the tail behaviour of the law of $I(\xi^\uparrow)$.
\[asymptotic\] The law of $I(\xi^\uparrow)$ is absolutely continuous with respect to the Lebesgue measure. The density of $I(\xi^\uparrow)^{-1}$ is given by: $$\label{elaw}
P\Big(I(\xi^\uparrow)^{-1}\in {\textrm{d}}y\Big)=\alpha E(\xi^{\uparrow}_1)y^{\alpha\rho -1}q_1(y){\textrm{d}}y,$$ where $q_t$ is the density of the entrance law of the excursion measure of the reflected process $(X-\underline{X}, \mathbf{P}_0)$, where $\underline{X}_t=\inf_{0\leq s\leq t}X_s$. Moreover, the law of $I(\xi^\uparrow)$ behaves as $$\label{at}
P(I(\xi^{\uparrow})\ge x)\sim C_1
x^{-\alpha}\,,\;\;\mbox{as $x\rightarrow+\infty$.}$$ If $X$ has positive jumps, then $$\label{at0}
P(I(\xi^{\uparrow})\le x)\sim C_2 x^{\alpha(\rho-1)-1}\,,\;\mbox{as
$x\rightarrow0$.}$$ The constants $C_1$ and $C_2$ depend only on $\alpha$ and $\rho$.
In the case where the process has no positive jumps, the law of $I(\xi^\uparrow)$ is given explicitly in the next theorem.\
[*Proof*]{}. Let $n$ be the measure of the excursions away from 0 of the reflected process $X-\underline{X}$ under $\mathbf{P}_0$. It is proved in [@MS] that the entrance law of $n$ is absolutely continuous with respect to the Lebesgue measure. Let us denote by $q_t$ its density. Then from [@Ch], the entrance law of $(X,{\Bbb{P}}^{\uparrow})$ is related to $q_1$ by $${\Bbb{P}}^{\uparrow}(X_1\in {\textrm{d}}y)=y^{\alpha\rho}q_1(y){\textrm{d}}y\,,\;\;\;y\ge0\,.$$ We readily derive (\[elaw\]) from identity (\[entlaw\]). Moreover from (3.18) in [@MS]: $$\int_0^x q_1(y)\,{\textrm{d}}y\sim Cx^{\alpha(1-\rho)+1}\,,\;\;\;\mbox{as $x\rightarrow0$,}$$ and from (3.20) of the same paper, if $X$ has positive jumps, then: $$\int_x^\infty q_1(y)\,{\textrm{d}}y\sim C'x^{-\alpha}\,,\;\;\;\mbox{as $x\rightarrow+\infty$.}$$ This together with (\[entlaw\]) implies (\[at\]) and (\[at0\]). The constants $C$ and $C'$ depend only on $\alpha$ and $\rho$. Another way to prove the first part of this proposition is to use a result due to Méjane [@Me] and Rivero [@ri1] which asserts that for a non-arithmetic Lévy process $\xi$, if Cramér’s condition is satisfied for $\theta>0$, i.e. $E(\exp\theta\xi_1)=1$ and $E(\xi_1^+\exp{\theta\xi_1})<\infty$, then $P(I(\xi)\ge x)\sim
Cx^{-\alpha\theta}$. These arguments and Proposition \[4590\] allow us to obtain the asymptotic behaviour at $+\infty$ of $P(I(-\xi^{\downarrow})\ge x)$:
\[downasymptotic\] The law of $I(-\xi^\downarrow)$ behaves as $$P(I(-\xi^{\downarrow})\ge x)\sim C_3 x^{-\alpha}\,,\;\;\mbox{as
$x\rightarrow+\infty$.}$$ The constant $C_3$ depends only on $\alpha$ and $\rho$.
Now we consider the exponential functional $$I(-\xi^*)=\int_{0}^{\infty} \exp\{\alpha\xi^{*}_s\}\, {\textrm{d}}s.$$ Recall from Section \[prelim\] that $\xi^*$ is the Lévy process which is associated to the pssMp $(X_t{\mbox{\rm 1\hspace{-0.04in}I}}_{\{t<T\}},\mathbf{P}_x)$ by the Lamperti representation. From this representation, we may check, path by path, the equality $$x^\alpha I(-\xi^*)=T\,.$$ Moreover, it follows from Lemma 1 in [@CC] that when $(X,\mathbf{P}_x)$ has negative jumps, $\mathbf{P}_x(T\le t)\sim
\frac{c_-t}{\alpha x^\alpha}$, as $t$ tends to 0. This result leads to:
\[42361\] Suppose that $\xi^*$ has negative jumps, then the law of $I(-\xi^*)$ behaves as $$P(I(-\xi^*)\le x)\sim \frac{c_-}{\alpha} x^{-1}\,,\;\;\mbox{as
$x\rightarrow0$.}$$
In the remainder of this section, we assume that $(X, {\Bbb{P}}^{\uparrow}_x)$ has no positive jumps.
The law of the exponential functional $I(\xi^{\uparrow,{\rm n}})=
\int_{0}^{\infty} \exp\{-\alpha\xi^{\uparrow,{\rm n}}_{s}\}\, {\textrm{d}}s$ is absolutely continuous with respect to the Lebesgue measure and has a continuous density $p^{\uparrow,{\rm n}}(\cdot)$ which admits the following power series representation $$p^{\uparrow, {\rm n}}(x)=-\frac{1}{\pi
x}\sum_{n=1}^{\infty}\Gamma\left(1+\frac{n}{\alpha}\right)\sin\left(\frac{\pi
n}{\alpha}\right)\frac{(-(cx)^{-1/\alpha})^n}{n!}, \qquad \textrm{for
}\quad x>0,$$ where $c=c_{-}\Gamma(2-\alpha)\alpha^{-1}(\alpha-1)^{-1}>0$.\
Moreover, the positive integer moments of $(X,{\Bbb{P}}^{\uparrow})$, for $t>0$, are given by the identity $$\label{moments}
{\Bbb{E}}^{\uparrow}\Big(\big(X_t\big)^k\Big)=(mt)^k
\frac{\Gamma(\alpha(k+1))}{\Gamma(\alpha)^{(k+1)}k!}, \qquad k\geq
1,$$ and its law is absolutely continuous with respect to the Lebesgue measure and has a continuous density $p_t(\cdot)$ which admits the following power series representation $$p_t(x)=-\frac{1}{\alpha m \pi t}
\sum_{n=1}^{\infty}\Gamma\left(1+\frac{n}{\alpha}\right)\sin\left(\frac{\pi
n}{\alpha}\right)\frac{(-(x/(ct))^{1/\alpha})^n}{n!}, \qquad\textrm{for
}\quad x>0.$$
[*Proof*]{}. From Bertoin and Yor [@BeY2], we know that the distribution of the exponential functional of a spectrally negative Lévy process is determined by its negative integer moments. In particular, the exponential functional for $\xi^{\uparrow,{\rm n}}$ satisfies $$\label{nmom}
E\left(I(\xi^{\uparrow, {\rm n}})^{-k}\right)= \alpha
m^k\frac{\Gamma(k\alpha)}{\Gamma(\alpha)^{k}(k-1)!},$$ with the convention that the right-hand side equals $\alpha m$ for $k=1$. In particular, from (\[entlaw\]) we have that $${\Bbb{E}}^{\uparrow}\Big(\big(X_t\big)^k\Big)=(mt)^k
\frac{\Gamma(\alpha(k+1))}{\Gamma(\alpha)^{(k+1)}k!},$$ which proves the identity (\[moments\]).\
Now, from the time reversal property of Theorem VII.18 in [@Be], we deduce that the last passage time of $(X,{\Bbb{P}}^{\uparrow})$, defined by $$U_{x}=\sup\Big\{t\geq 0:\, X_t\leq x\Big\}\qquad \textrm{for }\quad x\geq 0,$$ is a stable subordinator of index $1/\alpha$. More precisely, its Laplace exponent is given by $\Phi(\lambda)=(\lambda/c)^{1/\alpha}$, where $c=c_{-}\Gamma(2-\alpha)\alpha^{-1}(\alpha-1)^{-1}$. According to Zolotarev [@Zo], stable subordinators have continuous densities with respect to the Lebesgue measure which may be represented by power series. More precisely, the density of a normalized stable subordinator of index $\beta\in(0,1)$, i.e. $\Phi(\lambda)=\lambda^{\beta}$, is given by $$\label{subd}
\rho_t(x, \beta)=-\frac{t^{-1/\beta}}{\pi
x}\sum_{n=1}^{\infty}\Gamma(1+n\beta)\sin(\beta\pi
n)\frac{(-x^{-\beta})^n}{n!}, \qquad \textrm{for }\quad x>0.$$ On the other hand, from Proposition 1 in [@CP], we know that $U_x$ has the same law as $x^{\alpha}I(\xi^{\uparrow, {\rm n}})$. Hence, $I(\xi^{\uparrow, {\rm n}})$ satisfies $$E\Big(\exp\big\{-\lambda
I(\xi^{\uparrow, {\rm n}})\big\}\Big)=e^{-(\lambda/c)^{1/\alpha}}, \qquad
\lambda\geq 0,$$ and its density $p^{\uparrow, {\rm n}}(x)$ is given by $ \rho_{c^{1/\alpha}}(x,1/\alpha)$, which proves the first part of the theorem.\
Finally from (\[entlaw\]), we deduce that the density of $X^{(0)}_1$ is given by $$p_1(x)=-\frac{c^{-1}}{\alpha m \pi}
\sum_{n=1}^{\infty}\Gamma\left(1+\frac{n}{\alpha}\right)\sin\left(\frac{\pi
n}{\alpha}\right)\frac{(-x^{1/\alpha})^n}{n!}, \qquad\textrm{for
}\quad x>0.$$ The proof is now complete.
The exponential functional $I(-\xi^{*})=\int_{0}^{\infty} \exp\{\alpha\xi^{*}_s\}\, {\textrm{d}}s$ has a continuous density $p^*(\cdot)$ with respect to the Lebesgue measure which has the following representation by power series $$p^{*}(x)=c^{1/\alpha}\sum_{n=1}^{\infty}\frac{\alpha
n-1}{\Gamma(\alpha
n)\Gamma(-n+1+1/\alpha)}x^{\alpha(2-n\alpha)},\quad
\textrm{for}\quad x>0,$$ where $c=c_{+}\Gamma(2-\alpha)\alpha^{-1}(\alpha-1)^{-1}>0.$
[*Proof:*]{} First, let us define $\hat{X}=-X$ and denote by $\hat{\mathbf{P}}$ its law when starting from $0$. Note that the process $(X,\hat{\mathbf{P}})$ is a stable Lévy process with no negative jumps of index $\alpha\in (1,2)$ starting from $0$. From the Lamperti representation, it is clear that $T=I^{*}$ and from the self-similarity property, we have $$\begin{split}
\mathbf{P}_1(T>t)&=\mathbf{P}_0(\sigma^{-}_{-1}>t)=\mathbf{P}_0
\left(\inf_{0\leq s\leq t}X_s>-1\right)\\
&=\mathbf{P}_0\left(t^{1/\alpha}\inf_{0\leq s\leq 1}X_s>-1\right)=
\hat{\mathbf{P}}\left(t^{1/\alpha}\sup_{0\leq s\leq 1}X_s<1\right)\\
&=\hat{\mathbf{P}}\left(\left(\frac{1}{\sup_{0\leq s\leq
1}X_s}\right)^{\alpha}> t \right).
\end{split}$$ Hence, the exponential functional $I^{*}$, under $\mathbf{P}_1$, has the same law as $(\sup_{0\leq s\leq 1}X_{s})^{-\alpha}$, under $\hat{\mathbf{P}}$.\
Recently, Bernyk, Dalang and Peskir [@BDP] computed the density of the supremum of a stable Lévy process with no negative jumps of index $\alpha\in (1,2)$. More precisely, with our notation, the density $f$ of $\sup_{0\leq s\leq 1}X_{s}$, under $\hat{\mathbf{P}}$, is described as follows $$f(x)=c^{1/\alpha}\sum_{n=1}^{\infty}\frac{\alpha n-1}{\Gamma(\alpha
n)\Gamma(-n+1+1/\alpha)}x^{n\alpha-2},\quad \textrm{for}\quad x>0.$$ Therefore the density $p^{*}$ of $I^{*}$ is given by $$p^{*}(x)=c^{1/\alpha}\sum_{n=1}^{\infty}\frac{\alpha
n-1}{\Gamma(\alpha
n)\Gamma(-n+1+1/\alpha)}x^{\alpha(2-n\alpha)},\quad
\textrm{for}\quad x>0,$$ which completes the proof.
[99]{} V. Bernyk, R.C. Dalang and G. Peskir: The law of the supremum of a stable Lévy process with no negative jumps. [*Preprint,*]{} (2006).
J. Bertoin: [*Lévy Processes.*]{} Cambridge University Press, Cambridge, (1996).
<span style="font-variant:small-caps;">J. Bertoin and M.E. Caballero</span>: Entrance from $0+$ for increasing semi-stable Markov processes. *Bernoulli.*, **8**, no. 2, 195-205, (2002).
<span style="font-variant:small-caps;">J. Bertoin and M. Yor</span>: The entrance laws of self-similar Markov processes and exponential functionals of Lévy processes. *Potential Anal.*, **17**, no. 4, 389-400, (2002).
<span style="font-variant:small-caps;">J. Bertoin and M. Yor</span>: On the entire moments of self-similar Markov processes and exponential functionals of Lévy processes. *Ann. Fac. Sci. Toulouse VI Ser. Math.*, **11**, no. 1, 33-45, (2002).
<span style="font-variant:small-caps;">J. Bertoin and M. Yor</span>: Exponential functionals of Lévy processes. *Probab. Surv.* **2**, 191-212, (2005).
R. Blumenthal, R.K. Getoor, and D.B. Ray: On the distribution of first hits for the symmetric stable processes. [*Trans. Amer. Math. Soc.*]{}, [**99**]{}, 540–554, (1961).
S.I. Boyarchenko and S.Z. Levendorskii: [*Non-Gaussian Merton–Black–Scholes theory*]{}. World Scientific, Singapore, (2002).
<span style="font-variant:small-caps;">M.E. Caballero and L. Chaumont</span>: Weak convergence of positive self-similar Markov processes and overshoots of Lévy processes. *Annals of Probab.*, **34**, 1012-1034, (2006).
M.E. Caballero and L. Chaumont: Conditioned stable Lévy processes and the Lamperti representation. [*J. Appl. Probab.*]{}, [**43**]{}, 967–983, (2006).
P. Carr, H. Geman, D.B. Madan, and M. Yor: The Fine Structure of Asset Returns: An Empirical Investigation. [*Journal of Business*]{}, [**75**]{}, 305–332, (2002).
L. Chaumont: Conditionings and path decompositions for Lévy processes. [*Stoch. Process. Appl.*]{}, [**64**]{}, 39-54, (1996).
<span style="font-variant:small-caps;">L. Chaumont and J. C. Pardo</span>: The lower envelope of positive self-similar Markov processes. *Elect. J. Probab.* **11**, 1321-1341, (2006).
R. Cont and P. Tankov: [*Financial Modeling with Jump Processes*]{}. Chapman and Hall/CRC, Boca Raton, FL., (2004).
R.A. Doney: [*Fluctuation theory for Lévy processes.*]{} École d’été de Probabilités de Saint-Flour, Lecture Notes in Mathematics No. 1897. Springer, (2007).
<span style="font-variant:small-caps;">R.A. Doney and A.E. Kyprianou</span>: Overshoots and undershoots of Lévy processes. *Ann. Appl. Probab.*, **16**, no.1, 91-106, (2006).
R. K. Getoor: Continuous additive functionals of a Markov process with applications to processes with independent increments. [*J. Math. Anal. Appl.*]{}, [**13**]{}, 132-153, (1966).
S.G. Kou and H. Wang: First passage times of a jump diffusion process. [*Adv. in Appl. Probab.*]{}, [**35**]{} 504–531, (2003).
A.E. Kyprianou: [*Introductory lectures on fluctuations of Lévy processes with applications.*]{} Springer, (2006).
J.W. Lamperti: Semi-stable Markov processes. [*Z. Wahrsch. verw. Gebiete*]{}, [**22**]{}, 205-225, (1972).
A. Lewis and E. Mordecki: Wiener-Hopf factorization for Lévy processes having negative jumps with rational transforms. [*Preprint*]{}, (2005).
O. Méjane: Upper bound of a volume exponent for directed polymers in a random environment. [*Ann. Inst. H. Poincaré Probab. Statist.*]{}, [**40**]{}, no. 3, 299–308, (2004).
D. Monrad and M.L. Silverstein: Stable processes: sample function growth at a local minimum. [*Z. Wahrsch. Verw. Gebiete*]{} [**49**]{}, no. 2, 177–210, (1979).
M.R. Pistorius: On maxima and ladder processes for a dense class of Lévy processes. [*J. Appl. Probab.*]{}, [**43**]{}, 208-220, (2006).
V. Rivero: [*Recouvrements aléatoires et processus de Markov auto-similaires.*]{} Thèse de doctorat de l’université Paris VI, (2004).
V. Rivero: Recurrent extensions of self-similar Markov processes and Cramér’s condition. [*Bernoulli*]{}, [**11**]{}, no. 3, 471–509, (2005).
B.A. Rogozin: The distribution of the first hit for stable and asymptotically stable random walks on an interval. [*Theory Probab. Appl.*]{} [**17**]{}, 332–338, (1972).
K.I. Sato: [*Lévy processes and infinitely divisible distributions*]{}. Cambridge University Press, Cambridge, (1999).
W. Schoutens: [*Lévy Processes in Finance: Pricing Financial Derivatives*]{}. Wiley, New York, (2003).
V.M. Zolotarev: One-dimensional stable distributions. Translations of Mathematical Monographs, 65. [*American Mathematical Society*]{}, Providence, RI, (1986).
[^1]: Laboratoire de Probabilités et Modèles Aléatoires, Université Pierre et Marie Curie, 4, Place Jussieu - 75252 PARIS CEDEX 05. E-mail: jcpm20@bath.ac.uk.\
Research supported by EPSRC grant EP/D045460/1.\
$^*$Corresponding author.
---
abstract: 'Discrete (family) symmetries might play an important role in models of elementary particle physics. We discuss the origin of such symmetries in the framework of consistent ultraviolet completions of the standard model in field and string theory. The symmetries can arise due to special geometrical properties of extra compact dimensions and the localization of fields in this geometrical landscape. We also comment on anomaly constraints for discrete symmetries.'
---
[**Origin of family symmetries**]{}
**Hans Peter Nilles$^{a}$, Michael Ratz$^{b}$, Patrick K.S. Vaudrevange$^{c}$**\
*$^a$ Bethe Center for Theoretical Physics and Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn, Germany*\
*$^b$ Physik–Department T30, Technische Universität München, James–Franck–Straße, 85748 Garching, Germany*\
*$^c$ Deutsches Elektronen–Synchrotron DESY, Notkestraße 85, 22607 Hamburg, Germany*
Introduction
============
Discrete symmetries play an important role in particle physics. Apart from the fundamental space–time symmetries $P$, $C$ and $T$, there are various well known examples such as the so–called matter or $R$ parity in the minimal supersymmetric standard model (MSSM). There are good reasons for using discrete rather than continuous symmetries. Models with spontaneously broken global continuous symmetries exhibit Goldstone bosons which are typically phenomenologically unacceptable. Moreover, there are strong arguments that a continuous symmetry either has to be gauged or will be broken by quantum gravity effects (see e.g. [@Banks:2010zn] for a recent discussion). In contrast to the fundamental symmetries, discrete symmetries are often just imposed by hand for phenomenological reasons. While introducing such symmetries can be a useful tool in bottom–up model building, it appears worthwhile to clarify the origin of a given symmetry. Given a deeper understanding of how such symmetries arise, one might be able to obtain a more fundamental explanation of observations such as the repetition of families and the flavor structure.
Discrete symmetries come in various classes. Generation–dependent flavor symmetries have been proposed in order to explain the pattern of quark and lepton Yukawa couplings and to control higher–dimensional operators (see e.g. [@Ishimori:2010au] for a quite recent review and other contributions of this special issue [@SpecialIssue:2012x] for more references). Apart from these, there are generation–independent symmetries, introduced in order to cure certain shortcomings of extensions of the standard model such as the MSSM. For example, dangerous proton decay operators are forbidden by matter parity [@Fayet:1977yc; @Dimopoulos:1981dw], baryon triality [@Ibanez:1991pr], proton hexality [@Dreiner:2005rd] and ${\ensuremath{\mathbbm{Z}_{4}}}^R$ [@Lee:2010gv]. Further, discrete symmetries of high order can manifest themselves as accidental global ${\ensuremath{\mathrm{U}(1)}}$ symmetries in the (truncated) low–energy effective theory. Such accidental symmetries can be used, for example, in two ways: as an (anomalous) Peccei–Quinn symmetry addressing the strong CP problem (cf. the discussion in [@Choi:2006qj; @Choi:2009jt]) or as a ${\ensuremath{\mathrm{U}(1)}}_R$ explaining the hierarchy between the Planck and the electroweak scales [@Kappl:2008ie].
The purpose of this review is to clarify the origin of discrete symmetries. They can be obtained from continuous symmetries by spontaneous breaking. But this is not the only possibility. In fact, here we mainly focus on alternative possibilities for the origin of discrete symmetries. In section \[sec:GeometricalOrigin\] we discuss, in the framework of field theory, how discrete symmetries can be related to the geometry of extra dimensions. The discussion of higher–dimensional quantum field theories leaves certain questions unanswered. We therefore change gear and present a top–down derivation of discrete symmetries in section \[sec:HeteroticOrbifolds\], focusing mainly on heterotic orbifolds, as they provide us with explicit candidate models for a UV completion of the standard model, and, at the same time, allow for a CFT description and hence for a detailed understanding of the symmetries. As we shall see, the top–down settings are more restrictive than the bottom–up models. Some of the restrictions can be thought of as originating from the requirement of anomaly freedom, which we discuss separately in section \[sec:AnomalyFreedom\]. Finally, we summarize our discussion in section \[sec:Summary\].
Geometrical origin of discrete symmetries {#sec:GeometricalOrigin}
=========================================
In this section we present three possible origins of discrete symmetries. After briefly summarizing the standard approach and its limitations in section \[sec:ZNfromU1\], we discuss how to obtain a discrete symmetry from extra dimensions, either as the symmetry of compact space (section \[sec:GeometryFamily\]) or as a remnant of higher dimensional Lorentz symmetry (section \[sec:GeometryR\]).
Gauged discrete symmetries from continuous symmetries {#sec:ZNfromU1}
-----------------------------------------------------
The perhaps most straightforward possibility for obtaining a discrete symmetry is by spontaneous breaking of a continuous gauge symmetry. As a simple example, consider a ${\ensuremath{\mathrm{U}(1)}}$ gauge group broken by the VEV of a scalar $\varphi$ with charge $q=3$. Here we normalize the ${\ensuremath{\mathrm{U}(1)}}$ such that the charges are integers and have no common divisor. The unbroken symmetry is given by those ${\ensuremath{\mathrm{U}(1)}}$ transformations that leave the vacuum invariant, i.e. $$\mathrm{e}^{{\mathrm{i}}\, \alpha(x)\, q}\, \langle \varphi \rangle
~=~
\mathrm{e}^{3{\mathrm{i}}\, \alpha(x)}\, \langle \varphi \rangle
~\stackrel{!}{=}~
\langle \varphi \rangle
\quad\curvearrowright\quad
\alpha(x)~=~\frac{2\pi\, n}{3}\;,$$ with $n = 0,1,2$. Hence, the (local) ${\ensuremath{\mathrm{U}(1)}}$ is broken to a (local) ${\ensuremath{\mathbbm{Z}_{3}}}$ subgroup. The extension of this discussion to the case of multiple ${\ensuremath{\mathrm{U}(1)}}$ factors which get broken by several VEVs is given in [@Petersen:2009ip].
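The counting behind this example can be sketched in a few lines (an illustration added here, not part of the original discussion; the function name is ours): a ${\ensuremath{\mathrm{U}(1)}}$ phase $\mathrm{e}^{{\mathrm{i}}\alpha q_i}$ fixes all VEV fields precisely when $\alpha = 2\pi n/N$ with $N$ the greatest common divisor of the integer–normalized VEV charges, so the surviving discrete group is ${\ensuremath{\mathbbm{Z}_{N}}}$.

```python
from functools import reduce
from math import gcd

def remnant_zn(vev_charges):
    # A U(1) rotation by alpha leaves every VEV invariant iff
    # alpha * q is a multiple of 2*pi for each VEV charge q,
    # i.e. alpha = 2*pi*n/N with N = gcd of the VEV charges.
    return reduce(gcd, vev_charges)

assert remnant_zn([3]) == 3       # the example in the text: Z_3 survives
assert remnant_zn([6, 9]) == 3    # several VEVs: the gcd survives
assert remnant_zn([2, 3]) == 1    # coprime charges: U(1) fully broken
```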
One may also get non–Abelian discrete symmetries by spontaneous breaking (cf. e.g. [@Adulpravitchai:2009kd; @Luhn:2011ip; @Merle:2011vy]). However, this typically involves very large representations of the corresponding continuous symmetry, which often give rise to unwanted states in the broken phase. Hence, arguably, this possibility appears not too attractive. In what follows, we therefore discuss alternative possibilities in which the discrete symmetries are related to the geometry of compact dimensions. As we shall see, this scheme does not suffer from the above problems, and is realized in explicit string–derived models of particle physics.
Repetition of families and symmetries {#sec:GeometryFamily}
-------------------------------------
Discrete family symmetries can be motivated in settings with extra compact dimensions. It is not surprising that such models offer an explanation for the appearance of non–Abelian discrete flavor symmetries, because the latter are symmetries of certain geometrical solids, which describe the compact dimensions. The symmetries of internal space govern the interactions between fields that are localized in the compact dimensions and may eventually become flavor symmetries.
The purpose of this subsection is to explain that (non–Abelian) family symmetries can, to some extent, be understood geometrically. Let us start with a very simple example with one extra compact dimension, the orbifold $\mathbbm{S}^1/\mathbbm{Z}_2$ (figure \[fig:S1overZ2\]). See appendix \[sec:appendixA1\] for a brief introduction to the construction of orbifolds.
![Example for one extra compact dimension: $\mathbbm{S}^1/\mathbbm{Z}_2$ orbifold. Points which are related by a reflection on the dashed line are identified. The fundamental region of the orbifold is an interval with the fixed points sitting at the boundaries.[]{data-label="fig:S1overZ2"}](S1overZ2.eps)
This orbifold possesses two geometrically equivalent fixed points. Suppose there are two states, i.e. two families of quarks and/or leptons, $\psi_{m=0}$ and $\psi_{m=1}$, with identical quantum numbers, one of them localized at each of the fixed points. Since the fixed points and the states $\psi_{m}$ are geometrically indistinguishable, there is an $S_2$ permutation symmetry relating them, which manifests itself as a symmetry of the theory.
A somewhat more complex example is the tetrahedron in two extra compact dimensions (cf. [@Altarelli:2006kg]), which can be obtained from the $\mathbbm{T}^2_{\mathrm{SU}(3)}/\mathbbm{Z}_2$ orbifold (figure \[fig:tetrahedron\]). Here the subscript ${\ensuremath{\mathrm{SU}(3)}}$ indicates that the basic translations defining the $\mathbbm{T}^2$ torus enjoy the same relations as the simple roots of the Lie algebra of ${\ensuremath{\mathrm{SU}(3)}}$, i.e. they enclose $120^\circ$ and have equal lengths.
Clearly, the tetrahedron is invariant under a discrete rotation by $120^\circ$ about an axis that goes through one corner and hits the opposite surface orthogonally. There are four operations of this type represented by $$\begin{aligned}
T~= ~\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0
\end{array}
\right)\;,\quad T\,S\;,\quad T\,S'\;,\quad T\,S\,S'\;,\end{aligned}$$ in the basis where each of the four corners is represented by a four–dimensional vector $e_i$ with $(e_i)_j = \delta_{ij}$ and $i,j=1,2,3,4$. Furthermore,
\[eq:SandSprime\] $$\begin{aligned}
S & = & {\ensuremath{\mathbbm{1}}}_{2 \times 2} \otimes \sigma_1 \;,\\
S'& = & \sigma_1 \otimes {\ensuremath{\mathbbm{1}}}_{2 \times 2}\end{aligned}$$
with the standard Pauli matrix $\sigma_1$. $T$ generates a $\mathbbm{Z}_3$ and $S$ generates a $\mathbbm{Z}_2$. In addition, one may allow for orientation–changing operations (with $\text{det}=-1$), for example, generated by $S''=\text{diag}({\ensuremath{\mathbbm{1}}}_{2 \times 2}, \sigma_1)$.
Since these generators do not commute, the multiplicative closure yields a non–Abelian discrete symmetry, namely $S_4$. As mentioned in [@Altarelli:2006kg], if one restricts the allowed operations to be contained in proper Lorentz transformations, one arrives at the non–Abelian flavor symmetry generated by $T$, $S$ and $S'$, which is $A_4$. We therefore arrive at the premature conclusion that, in a model in which each fixed point carries a state, the family symmetry will be $A_4$. However, as pointed out in [@Kobayashi:2006wq] and as we shall see later in more detail, the actual symmetry in UV complete settings is larger than that.
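These group orders are easy to verify by brute force (a sketch added for illustration; the permutation encoding of the four corners is ours): writing $T$, $S$, $S'$ and $S''$ as permutations of the corners and forming the multiplicative closure reproduces $|A_4|=12$ and $|S_4|=24$.

```python
def compose(p, q):
    # composition of permutations given as tuples of images: (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def closure(gens):
    # multiplicative closure of a set of permutations
    group, new = set(gens), set(gens)
    while new:
        new = {compose(a, b) for a in group for b in group} - group
        group |= new
    return group

# Generators acting on the four corners (0-indexed), read off from the
# matrices in the text:
T   = (0, 3, 1, 2)   # 3-cycle fixing corner 1
S   = (1, 0, 3, 2)   # (1 2)(3 4), from 1 x sigma_1
Sp  = (2, 3, 0, 1)   # (1 3)(2 4), from sigma_1 x 1
Spp = (0, 1, 3, 2)   # (3 4), the orientation-changing S''

assert len(closure([T, S, Sp])) == 12        # A_4
assert len(closure([T, S, Sp, Spp])) == 24   # S_4
```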
In summary, we see that extra dimensions offer a compelling explanation of non–Abelian discrete flavor symmetries. However, as the settings discussed here are based on gauge theories in more than four dimensions, one has to address the question of how to complete them in the UV. We will come back to this question in section \[sec:HeteroticOrbifolds\], where we will see that string models indeed often exhibit non–Abelian discrete family symmetries.
Discrete $\boldsymbol{R}$ symmetries {#sec:GeometryR}
------------------------------------
In supersymmetric theories there are the so–called $R$ symmetries which, by definition, do not commute with supersymmetries. Such symmetries can originate from extra dimensions as well. Specifically they are (discrete) remnants of the Lorentz symmetry of compact dimensions. The perhaps simplest way of seeing this is by recalling that under Lorentz rotations spinors, vectors and scalars transform differently such that different parts of superfields have different charges. This means, in particular, that $R$ symmetries are deeply connected to the fundamental symmetries of space–time.
Let us illustrate this point in more detail by discussing toy–settings with two compact dimensions (without discussing SUSY breaking). If these dimensions were flat (and infinite) the setup would exhibit an SO(2) rotation symmetry. For instance, this symmetry can be defined by its action on the extra components of the gauge fields, $$\label{eq:Rotation1}
\left(\begin{array}{c}A_5\\ A_6\end{array}\right)
~\to~
\left(\begin{array}{cc}
\cos\zeta & -\sin\zeta\\
\sin\zeta & \cos\zeta
\end{array}\right)\,
\left(\begin{array}{c}A_5\\ A_6\end{array}\right)\;.$$ Since such components get combined into the scalar component of a chiral superfield describing a bulk field (or an untwisted sector field in string–derived orbifolds), it is more convenient to recast (\[eq:Rotation1\]) in complex notation, $$\label{eq:U156extravector}
\mathrm{U}(1)_{56}~:~
A_5+{\mathrm{i}}A_6~\to~\mathrm{e}^{{\mathrm{i}}\,\zeta}\,(A_5+{\mathrm{i}}A_6)\;.$$ On the other hand, the spinor component of this ‘untwisted superfield’ turns out to transform differently under the Lorentz group. To understand this, note that the 4D spinor $\rho$ is contained in the higher–dimensional one ($\Psi$) according to $\Psi~=~\rho\otimes\chi$, where $\chi$ is a spinorial zero mode in internal space. Recalling that spinors always rotate half as quickly as vectors under Lorentz transformations leads to the transformation law $$\label{eq:U156spinor}
\mathrm{U}(1)_{56}~:~
\rho~\to~\mathrm{e}^{{\mathrm{i}}\,\zeta/2}\,\rho\;.$$ In the 4D superfield $$\Phi~=~\frac{1}{\sqrt{2}}\,(A_5+{\mathrm{i}}\,A_6)+\sqrt{2}\,\theta\rho+
\theta\theta\,F$$ the superspace coordinates $\theta$ balance the transformations of the components (\[eq:U156extravector\]) and (\[eq:U156spinor\]), i.e. $$\mathrm{U}(1)_{56}\::\:
\theta ~\to~
\mathrm{e}^{{\mathrm{i}}\,\zeta/2}\,\theta\;.$$ Hence, $\mathrm{U}(1)_{56}$, originating from the 6D Lorentz symmetry, is an $R$ symmetry.
It is also clear that typically a compact space does not possess the full Lorentz symmetry. For example, orbifolds can have discrete rotational symmetries and hence can naturally provide discrete $R$ symmetries, see section \[sec:StringR\] for more details in the case of string compactifications on orbifolds.
Orbifolds and string selection rules {#sec:HeteroticOrbifolds}
====================================
So far, our discussion was purely bottom–up. It is, however, instructive to comment on the situation in top–down models. The geometrical repetition of families, as briefly discussed in section \[sec:GeometryFamily\], is a common feature of most string compactifications.
1. In heterotic orbifolds, very often families come from so–called twisted sectors, which correspond to states localized at the orbifold fixed points in the extra dimensions. We will discuss the emergent family symmetries in more detail below.
2. In $D$–brane models (see e.g. [@Blumenhagen:2006ci] for a review) the repetition of families is due to the fact that branes can wrap cycles (i.e. some directions in the extra dimensions) multiple times. Therefore, one can have non–trivial intersection numbers between different branes, leading to otherwise equivalent chiral states localized at the intersections. Hence such models also generically exhibit non–trivial family symmetries. Also $F$ theory models have non–trivial family symmetries, which often lead to the problem that the Yukawa couplings have rank one [@Heckman:2008qa].
In what follows, we will focus on the heterotic string compactified on (toroidal) orbifolds. There are two main reasons for this choice. First of all, the heterotic framework gives rise to explicit globally consistent candidate models for physics beyond the standard model [@Lebedev:2006kn; @Lebedev:2008un; @Anderson:2011ns; @Anderson:2012yf]. Second, at the same time, this scheme is simple enough to fully understand the symmetries. Discrete symmetries can appear mainly in two ways: (i) from the compactification to 4D as remnants of higher dimensional gauge/Lorentz symmetry and (ii) from going to a special vacuum configuration where some of the fields of the 4D effective theory obtain VEVs and hence induce further symmetry breaking. The situation is schematically illustrated in figure \[fig:origin\].
![Origin of symmetries in heterotic orbifold compactifications. By compactification of six dimensions and appropriate gauge embedding the 10D super Poincaré and ${\ensuremath{\mathrm{E}_{8}}}\times{\ensuremath{\mathrm{E}_{8}}}$ symmetries get broken to the 4D super Poincaré, a 4D gauge and various discrete $R$ and non–$R$ symmetries. The latter two get further broken to subgroups by non–trivial VEVs of certain charged fields. []{data-label="fig:origin"}](HeteroticSymmetries.eps)
Orbifolds are six–dimensional compact spaces which, in contrast to a general Calabi–Yau compactification, have additional discrete symmetries which manifest themselves in the four-dimensional effective theory. Brief introductions to heterotic orbifolds and to the selection rules that govern the allowed terms of the superpotential are given in appendix \[sec:appendixA\].
Discrete symmetries from string selection rules
-----------------------------------------------
### Abelian symmetries {#sec:abeliansymmetries}
In general, there are two possible origins for Abelian (non–$R$) discrete symmetries in heterotic orbifold compactifications. Either they can arise from the space group selection rule discussed in appendix \[sec:stringselectionrules\] or as a discrete remnant of a spontaneously broken gauge symmetry. The second possibility was discussed in section \[sec:ZNfromU1\], the first one will be presented in the following.
For the sake of concreteness, we consider the $\mathbbm{S}^1/{\ensuremath{\mathbbm{Z}_{2}}}$ orbicircle of section \[sec:GeometricalOrigin\]. In this case, the space group consists of the elements $\left(\theta^k, m\,
e\right)$, where $\theta=-1$ and $k\in\{0,1\}$ describe the ${\ensuremath{\mathbbm{Z}_{2}}}$ reflection, $m\in{\ensuremath{\mathbbm{Z}}}$, $e=2\pi\, R$ and $R$ denotes the radius of the circle $\mathbbm{S}^1$. As illustrated in figure \[fig:S1overZ2\], the integer $m$ specifies the location of the twisted states (or ‘brane fields’), which have $k=1$, as opposed to untwisted states (or ‘bulk fields’), which have $k=0$. The space group selection rule requires the product of space–group elements of the states involved in a coupling to be congruent to identity (see appendix \[sec:stringselectionrules\]). This gives rise to an Abelian ${\ensuremath{\mathbbm{Z}_{2}}}^k\times{\ensuremath{\mathbbm{Z}_{2}}}^m$ symmetry, i.e. $$\label{eqn:AbelianPartOrbicircle}
\prod_{r=1}^L g_r~=~ {\ensuremath{\mathbbm{1}}}\quad\curvearrowright\quad
\left\{\begin{array}{lcl}
{\ensuremath{\mathbbm{Z}_{2}}}^k & : & \displaystyle\sum_{r=1}^L k^{(r)}~=~0\mod 2\;,\\
{\ensuremath{\mathbbm{Z}_{2}}}^m & : & \displaystyle\sum_{r=1}^L m^{(r)}~=~0\mod 2\;.
\end{array}\right.$$ We will refer to the condition on $k^{(r)}$ as the point group selection rule and to the second one on $m^{(r)}$ as the $m$–rule.
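As a minimal illustration (our own sketch, with hypothetical state labels), the two mod–2 conditions can be checked directly for candidate couplings between states labeled by their $(k,m)$ quantum numbers:

```python
def coupling_allowed(states):
    # states: list of (k, m) labels on the S^1/Z2 orbicircle;
    # the space group selection rule demands both sums vanish mod 2
    k_sum = sum(k for k, m in states)
    m_sum = sum(m for k, m in states)
    return k_sum % 2 == 0 and m_sum % 2 == 0

bulk   = (0, 0)   # untwisted state
brane0 = (1, 0)   # twisted state at the fixed point m = 0
brane1 = (1, 1)   # twisted state at the fixed point m = 1

assert coupling_allowed([brane0, brane0, bulk])       # both rules satisfied
assert coupling_allowed([brane1, brane1])             # m-sum = 2, even
assert not coupling_allowed([brane0, brane1, bulk])   # m-rule violated
assert not coupling_allowed([brane0, bulk])           # point group rule violated
```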
### Non–Abelian symmetries {#sec:nonabeliansymmetries}
A particularly interesting situation arises if the two fixed points at $m=0$ and $1$ are equivalent, which happens to be the case unless one introduces a non–trivial background field (either a so–called discrete Wilson line [@Ibanez:1986tp] or the $B$-field (discrete torsion) [@Vafa:1986wx]). In this case there is an additional $S_2$ permutation symmetry that interchanges $m=0$ and $m=1$. As we shall discuss now, together with the ${\ensuremath{\mathbbm{Z}_{2}}}^k\times{\ensuremath{\mathbbm{Z}_{2}}}^m$ symmetry discussed above in section \[sec:abeliansymmetries\], this leads to a non–Abelian discrete symmetry $D_4$ [@Dixon:1986qv; @Kobayashi:2006wq].
We combine a state from the fixed point at $m=0$ and a state from the one at $m=1$ into a two–dimensional vector, i.e. a doublet. From equation \[eqn:AbelianPartOrbicircle\] we see that the space group selection rule is generated in this basis by the elements $$-{\ensuremath{\mathbbm{1}}}_{2\times2}~=~\left(\begin{array}{cc}-1 & 0 \\ 0 & -1\\\end{array}\right)\;, \quad
\sigma_3~ =~ \left(\begin{array}{cc}1 & 0 \\ 0 & -1\\\end{array}\right)\;,$$ i.e. the element $-{\ensuremath{\mathbbm{1}}}_{2 \times 2}$ generates ${\ensuremath{\mathbbm{Z}_{2}}}^k$, i.e. the point group selection rule, and $\sigma_3$ generates ${\ensuremath{\mathbbm{Z}_{2}}}^m$, i.e. the $m$–rule. The additional element that generates the permutation of the two states is given by $$\sigma_1~=~\left(\begin{array}{cc}0 & 1 \\ 1 & 0\\\end{array}\right)\;.$$ The multiplicative closure of these three elements yields a non–Abelian group with eight elements $\{{\ensuremath{\mathbbm{1}}}_{2 \times 2}, -{\ensuremath{\mathbbm{1}}}_{2 \times 2}, \pm\sigma_1,
\pm\sigma_3, \pm {\mathrm{i}}\sigma_2\}$ and is known as the dihedral group $D_4$, associated with the symmetry of a square.
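The closure can again be checked by brute force (an added sketch; matrix encoding is ours): multiplying out the three $2\times 2$ generators indeed yields exactly eight elements.

```python
def mat_mul(a, b):
    # product of 2x2 integer matrices stored as nested tuples
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def closure(gens):
    # multiplicative closure of a set of matrices
    group, new = set(gens), set(gens)
    while new:
        new = {mat_mul(a, b) for a in group for b in group} - group
        group |= new
    return group

minus_one = ((-1, 0), (0, -1))   # generates Z2^k (point group rule)
sigma3    = ((1, 0), (0, -1))    # generates Z2^m (m-rule)
sigma1    = ((0, 1), (1, 0))     # permutes the two fixed points

D4 = closure([minus_one, sigma3, sigma1])
assert len(D4) == 8   # the dihedral group D_4
```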
Similarly to the $\mathbbm{S}^1/{\ensuremath{\mathbbm{Z}_{2}}}$ case, the $\mathbbm{T}^2/{\ensuremath{\mathbbm{Z}_{2}}}$ orbifold without Wilson lines (see figure \[fig:fixedpoints\]) generically has a $(D_4\times D_4)/{\ensuremath{\mathbbm{Z}_{2}}}$ flavor symmetry which originates from the Abelian space group selection rule ${\ensuremath{\mathbbm{Z}_{2}}}^3$ combined with the permutation symmetries $S$ and $S'$ of equation \[eq:SandSprime\]. It can be enhanced further for special values of the angle and the two radii of $\mathbbm{T}^2/{\ensuremath{\mathbbm{Z}_{2}}}$. For example, when the orbifold is geometrically a tetrahedron, the naive geometrical $S_4$ symmetry obtained from field theory considerations in section \[sec:GeometryFamily\] gets enhanced to $\text{SW}_4$, which has 192 elements, by the stringy space group selection rule [@Kobayashi:2006wq]. If one allows only for proper Lorentz transformations, one obtains a group with 96 elements which is contained in $\text{SW}_4$. The string description allows us to clarify whether or not one should consider operations which are not contained in the proper Lorentz transformations. The couplings between states localized at different fixed points go like $\mathrm{e}^{-a\,T}$, where $T$ denotes the Kähler modulus of the corresponding orbifold plane. The real part of $T$ is proportional to $R^2$, where $R$ is the radius of the underlying torus. Clearly, $R^2$ does not change under these extra reflections, such that the absolute values of the coupling strengths will enjoy the larger symmetry $\text{SW}_4$. On the other hand, the imaginary part of $T$, the so–called $T$–axion, is related to the anti–symmetric tensor field in compact space, and does change its sign under the extra reflections. Hence, if the $T$–axion acquires a non–trivial VEV, the phases of the coupling strengths no longer enjoy the larger symmetries.
As is well known, breaking the reflection symmetries in internal space can be related to CP violation in the effective 4D theory (cf. [@Dine:1992ya; @Kobayashi:2003gf]), and what we discussed here is just an example for this statement. Note that there are different possibilities to obtain non–trivial CP phases, also based on non–Abelian discrete symmetries (cf. [@Chen:2009gf]). It should be interesting to see if these also have an interpretation in terms of reflection symmetries in compact space.
Different lower–dimensional building blocks of orbifolds lead to other non–Abelian discrete symmetries (table \[tab:Survey\]).
  -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  orbifold                           flavor symmetry                                sector      fundamental states
  ---------------------------------- ---------------------------------------------- ----------- ---------------------------------------------------------------------------------------------
  $\mathbbm{S}^1/\mathbbm{Z}_2$      $D_4$                                          $U$         $\boldsymbol{1}$
                                                                                    $T_1$       $\boldsymbol{2}$
  $\mathbbm{T}^2/\mathbbm{Z}_2$      $(D_4\times D_4)/\mathbbm{Z}_2$                $U$         $\boldsymbol{1}$
                                                                                    $T_1$       $\boldsymbol{4}$
  $\mathbbm{T}^2/\mathbbm{Z}_3$      $\Delta(54)$                                   $U$         $\boldsymbol{1}$
                                                                                    $T_1$       $\boldsymbol{3}$
                                                                                    $T_2$       $\bar{\boldsymbol{3}}$
  $\mathbbm{T}^2/\mathbbm{Z}_4$      $(D_4 \times \mathbbm{Z}_4)/\mathbbm{Z}_2$     $U$         $\boldsymbol{1}$
                                                                                    $T_1$       $\boldsymbol{2}$
                                                                                    $T_2$       $\boldsymbol{1}_{A_1} + \boldsymbol{1}_{B_1} + \boldsymbol{1}_{B_2} + \boldsymbol{1}_{A_2}$
                                                                                    $T_3$       $\boldsymbol{2}$
  $\mathbbm{T}^4/\mathbbm{Z}_8$      $(D_4 \times \mathbbm{Z}_8) / \mathbbm{Z}_2$   $U$         $\boldsymbol{1}$
                                                                                    $T_1$       $\boldsymbol{2}$
                                                                                    $T_2$       $\boldsymbol{1}_{A_1} + \boldsymbol{1}_{B_1} + \boldsymbol{1}_{B_2} + \boldsymbol{1}_{A_2}$
                                                                                    $T_3$       $\boldsymbol{2}$
                                                                                    $T_4$       $4 \times (\boldsymbol{1}_{A_1} + \boldsymbol{1}_{B_1} + \boldsymbol{1}_{B_2} + \boldsymbol{1}_{A_2})$
  $\mathbbm{T}^4/\mathbbm{Z}_{12}$   trivial
  $\mathbbm{T}^6/\mathbbm{Z}_7$      $S_7 \ltimes (\mathbbm{Z}_7)^6$                $U$         $\boldsymbol{1}$
                                                                                    $T_k$       $\boldsymbol{7}$
                                                                                    $T_{7-k}$   $\bar{\boldsymbol{7}}$
  -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
: Survey of flavor symmetries arising from building blocks of orbifolds (from [@Kobayashi:2006wq]). The $T_k$ denote the various twisted sectors and $U$ the untwisted sector.[]{data-label="tab:Survey"}
The (non–Abelian) flavor symmetry can be broken in two ways: (i) explicitly, by orbifold Wilson lines, which break the permutation symmetry at least partially (if the permutation symmetry is completely broken, the remaining flavor group is Abelian); (ii) spontaneously, by the VEV of some twisted field, since twisted fields necessarily transform in non–singlet representations of the flavor group. For example, $\Delta(54)$ can be broken to $S_3$ by the VEV of a triplet $\boldsymbol{3}$ (e.g. $\langle\boldsymbol{3}\rangle=(v,v,v)$).
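The statement about the unbroken subgroup can be checked mechanically. The sketch below assumes the standard triplet presentation of $\Delta(54)\simeq(\mathbbm{Z}_3\times\mathbbm{Z}_3)\rtimes S_3$, in which the $S_3$ factor acts by permutations and the Abelian part by diagonal third–root–of–unity phases; it verifies that the VEV $(v,v,v)$ is invariant under all permutations but not under the phase generators:

```python
import cmath
import itertools

omega = cmath.exp(2j * cmath.pi / 3)   # primitive third root of unity

def apply(M, x):
    # Multiply a 3x3 matrix (list of rows) into a length-3 vector.
    return tuple(sum(M[i][j] * x[j] for j in range(3)) for i in range(3))

def close(x, y, eps=1e-12):
    return all(abs(a - b) < eps for a, b in zip(x, y))

vev = (1.0, 1.0, 1.0)    # the triplet VEV <3> = (v, v, v) with v = 1

# The six 3x3 permutation matrices generate the S3 factor of Delta(54)
# in its (assumed) standard triplet presentation.
perms = []
for p in itertools.permutations(range(3)):
    perms.append([[1.0 if p[i] == j else 0.0 for j in range(3)]
                  for i in range(3)])

# One of the diagonal Z3 x Z3 phase generators.
E = [[1.0, 0, 0], [0, omega, 0], [0, 0, omega ** 2]]

# (v, v, v) is fixed by every permutation but not by the phases:
# the VEV breaks Delta(54) down to the permutation subgroup S3.
assert all(close(apply(M, vev), vev) for M in perms)
assert not close(apply(E, vev), vev)
```

The unbroken elements form exactly the $S_3$ quoted in the text; any element involving a non-trivial phase moves the VEV.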
### $\boldsymbol{R}$ symmetries {#sec:StringR}
As already mentioned, discrete $R$ symmetries could arise as discrete remnants of the Lorentz symmetry of compact dimensions. This is also true for string–derived orbifold models.
What are the $R$ charges of states localized at the fixed points? In the framework of field theory one cannot answer this question unambiguously. For instance, in many field–theoretic analyses the profiles of these fields are taken to be $\delta$–functions with support at the fixed points, from which one may conclude that the states transform trivially under the discrete $R$ symmetries. In string theory, however, one can address this question, and it turns out that the naive field–theoretic expectation is incorrect. Specifically, in heterotic orbifolds the $R$ symmetries derive from the $H$–momentum conservation law, which determines the $R$ charges unambiguously. We will discuss an explicit example in section \[sec:NonPerturbativeViolation\].
Discrete symmetries in explicit models
--------------------------------------
Having seen how discrete symmetries arise in the effective field–theoretic description, we will now discuss which symmetries appear in explicit string models.
### Flavor symmetries
In recent years, many MSSM candidate models have emerged from heterotic orbifolds [@Buchmuller:2005jr; @Buchmuller:2006ik; @Lebedev:2006kn; @Lebedev:2007hv], known as the “heterotic mini–landscape”. These models have a common flavor structure: focusing on the two–torus where a ${\ensuremath{\mathbbm{Z}_{2}}}$ acts, the two light generations are localized on equivalent fixed points and the third one is in the bulk. Therefore, as discussed above, there is a $D_4$ flavor symmetry, under which the two light generations transform as a doublet whereas the third family transforms trivially (let us mention that there are also alternative models without this $D_4$ [@Kim:2007mt; @Lebedev:2008un]). This symmetry is broken in potentially realistic vacua by the VEVs of some localized singlets. Yet, using the $D_4$ symmetric situation as a starting point and then considering corrections can have certain advantages when discussing the (supersymmetric) flavor structure (cf. [@Ko:2007dz]). The emerging scheme is somewhat similar to the one of ‘minimal flavor violation’ [@Chivukula:1987py; @Buras:2000dm; @D'Ambrosio:2002ex]. In particular, the structure of the soft masses is $$\widetilde{m}^2~=~\left(\begin{array}{ccc}
a & 0 & 0\\
0 & a & 0\\
0 & 0 & b
\end{array}\right)+\text{terms proportional to $D_4$ breaking VEVs}\;.$$ It is known that such an approximate form of the soft masses makes it possible to avoid the supersymmetric flavor problems. In addition, it naturally allows for scenarios in which the third family of squarks and sleptons is substantially lighter than the first two generations of superpartners (cf. the discussion in [@Krippendorf:2012ir]).
### Flavor–independent symmetries
In grand unified models, matter or $R$ parity can be obtained from baryon–minus–lepton–number symmetry ${\ensuremath{\mathrm{U}(1)}}_{B-L}$ by spontaneous breaking, and the same is true in string–derived models [@Lebedev:2007hv], the only difference being that ${\ensuremath{\mathrm{U}(1)}}_{B-L}$ is not in GUT normalization and no large representations (such as $\overline{\boldsymbol{126}}$–plets of [$\mathrm{SO}(10)$]{}) are required (nor available) to achieve the breaking ${\ensuremath{\mathrm{U}(1)}}_{B-L}\to{\ensuremath{\mathbbm{Z}_{2}}}^\mathcal{M}$. That is, string theory avoids huge representations like the $\overline{\boldsymbol{126}}$–plets, but still allows us to derive matter parity from a local $B-L$ symmetry.
Similarly, proton hexality can be obtained from Pati–Salam (PS) times an extra ${\ensuremath{\mathrm{U}(1)}}$ symmetry [@Forste:2010pf]. Explicit orbifold models from ${\ensuremath{\mathbbm{Z}_{4}}}\times{\ensuremath{\mathbbm{Z}_{4}}}$ compactifications using a local GUT approach, $${\ensuremath{\mathrm{E}_{8}}}\text{ in 10D}\to{\ensuremath{\mathrm{SO}(12)}}\text{ in 6D}\to\text{PS}\times{\ensuremath{\mathrm{U}(1)}}\to\text{SM in 4D}\;,$$ revealed 850 heterotic MSSMs (i.e. three generations of quarks and leptons plus vector–like exotics), many of them with the correct proton hexality charge assignment for at least some quarks and leptons [@Forste:2010pf].
### $\boldsymbol{R}$ symmetries {#sec:ExplicitModelsR}
$R$ symmetries play an important role in string models. In particular, approximate continuous $R$ symmetries, which derive from exact discrete $R$ symmetries, can explain the large hierarchy between the Planck, GUT and/or string scales on the one hand and the electroweak and/or supersymmetry breaking scales on the other hand. It has been demonstrated that, in the presence of a continuous $R$ symmetry, at field configurations that satisfy the $F$–term constraints, the VEV of the superpotential vanishes [@Kappl:2008ie]. If there is an approximate $R$ symmetry that gets explicitly broken at some high order $N$, the vacuum expectation value of the superpotential, or equivalently the gravitino mass $m_{3/2}$, goes like $$\label{eq:WKappl}
\langle\mathscr{W}\rangle~\sim~\langle s\rangle^N\;,$$ where $\langle s\rangle$ denotes a typical size of a VEV of fields that break the symmetry spontaneously (in Planck units) and $N$ is of the order 10 in explicit examples. Further, in the context of the MSSM it has been shown that, in settings in which matter charges are consistent with grand unification, the only anomaly–free symmetries that can forbid the $\mu$ term are $R$ symmetries [@Lee:2011dya]. Given that $\langle\mathscr{W}\rangle$, or, equivalently, $m_{3/2}$, is the order parameter of $R$ symmetry breaking, this yields a relation between $\mu$ and $m_{3/2}$ [@Kappl:2008ie; @Brummer:2010fr], i.e. constitutes a solution to the $\mu$ problem. Unlike the Giudice–Masiero mechanism [@Giudice:1988yz], this solution does not rely on a specific structure of the Kähler potential; rather, it provides a holomorphic $\mu$ term of the right size, similar to the Kim–Nilles picture [@Kim:1983dt].
Anomaly Freedom {#sec:AnomalyFreedom}
===============
Anomaly constraints vs. embedding constraints
---------------------------------------------
How can one derive anomaly constraints on discrete symmetries? It is instructive to review how they have been derived in the past. Ibáñez and Ross [@Ibanez:1991hv] have used the following strategy: they have obtained ${\ensuremath{\mathbbm{Z}_{N}}}$ symmetries from ${\ensuremath{\mathrm{U}(1)}}$ by spontaneous breaking, as discussed in section \[sec:ZNfromU1\]. It is obvious that, if the ${\ensuremath{\mathrm{U}(1)}}$ is non–anomalous, and the spontaneous breaking is done consistently, then also [$\mathbbm{Z}_{N}$]{} is anomaly–free. However, one may question whether these are in general true anomaly constraints or rather embedding constraints, i.e. constraints that restrict the choice of the non–anomalous continuous gauge group into which the discrete group is supposed to be embedded.
Araki [@Araki:2006mw] proposed an alternative derivation of the anomaly constraints which does not rely on embedding the discrete symmetry into a continuous one but instead uses the path integral method [@Fujikawa:1979ay]. This strategy has been applied to the [$\mathbbm{Z}_{N}$]{} case [@Araki:2008ek] with the result that all Ibáñez–Ross constraints apply except for the ${\ensuremath{\mathbbm{Z}_{N}}}^3$ ones, which are known not to constitute true anomaly constraints [@Banks:1991xj; @Lee:2011dya].
Discrete anomaly constraints for non–Abelian discrete symmetries were also first derived by using the embedding strategy [@Frampton:1994rk] (see [@Luhn:2008sa] for a more recent discussion). While, again, these constraints ensure anomaly freedom, they turn out to be, in general, not true anomaly constraints but rather embedding constraints. That is, if these constraints are satisfied, the symmetry is anomaly free, but the converse is not necessarily true. In particular, the constraints can depend on the choice of the continuous symmetry into which the discrete one is supposed to be embedded. The true constraints can be derived with the path integral method [@Araki:2006mw], and one finds that one only has to check anomaly freedom for the Abelian subgroups of a given non–Abelian symmetry [@Araki:2006mw; @Araki:2008ek]. For a discrete group $D$ and a continuous gauge symmetry $G$ one obtains the condition $$\begin{aligned}
\sum_{(\boldsymbol{r}^{(f)},\boldsymbol{d}^{(f)})}
\delta^{(f)}\cdot \ell(\boldsymbol{r}^{(f)})
& \stackrel{!}{=} &
0\mod \frac{N}{2}\;,\label{eq:NAcondition-gauge2}\end{aligned}$$ where the sum ‘$\sum_{(\boldsymbol{r}^{(f)},\boldsymbol{d}^{(f)})} $’ is over representations which are non–trivial w.r.t. both $G$ and $D$. The discrete Abelian charge, denoted by $\delta^{(f)}$, can be expressed in terms of the group elements $U(\boldsymbol{d}^{(f)})$ as $$\label{eq:trtaui}
\delta^{(f)}~=~N\,\frac{\ln \det U(\boldsymbol{d}^{(f)})}{2\pi\,{\mathrm{i}}}\;.$$ For the mixed gravitational–$D$ anomaly one finds $$\begin{aligned}
\sum_{\boldsymbol{d}^{(f)}}\delta^{(f)}
& \stackrel{!}{=} &
0\mod \frac{N}{2}\;,
\label{eq:NAcondition-grav2}\end{aligned}$$ where the symbol ‘$\sum_{\boldsymbol{d}^{(f)}}$’ means that the sum extends over all non–trivial representations $\boldsymbol{d}^{(f)}$ of $D$. What does it mean if a given discrete symmetry does not satisfy these constraints? In general, one may argue that in such a case the symmetry will be broken in an uncontrollable way and all the predictive power of the (discrete) symmetry will be lost. For useful applications in particle physics, reliable discrete symmetries should thus be anomaly free. There is, however, an exception: for the anomalous symmetry the anomalies might be cancelled (microscopically) by a discrete Green-Schwarz mechanism. In what follows, we shall discuss this possibility in detail.
Non–perturbative “violation” of discrete symmetries and discrete Green–Schwarz anomaly cancellation {#sec:NonPerturbativeViolation}
---------------------------------------------------------------------------------------------------
As in the case of continuous symmetries, discrete anomalies can be cancelled by a Green–Schwarz (GS) mechanism (for a discussion in the path integral formalism see [@Lee:2011dya]). Also here this requires the presence of a scalar, the GS axion, which multiplies some $F_{\mu\nu}\widetilde{F}^{\mu\nu}$ terms (with $F^{\mu\nu}$ denoting the field strength of some continuous gauge symmetry of the model), and shifts under the discrete symmetry. Once the axion acquires its vacuum expectation value, the discrete symmetry gets broken spontaneously. Effectively this leads to a situation in which the (anomalous part of the) discrete group appears to be broken by non–perturbative effects.[^1]
As an example, consider the ${\ensuremath{\mathbbm{Z}_{4}}}^R$ symmetry discussed in [@Lee:2010gv; @Lee:2011dya]. It forbids the $\mu$ term and dimension 4 and 5 proton decay operators at the perturbative level. It appears to be broken by non–perturbative effects to its ‘non–anomalous’ subgroup, i.e. to ${\ensuremath{\mathbbm{Z}_{2}}}^\mathcal{M}$ matter parity. The order parameter of this $R$ symmetry breaking is the vacuum expectation value of the superpotential, i.e. the gravitino mass. One therefore has, in the context of gravity mediation, a $\mu$ term of the correct size (cf. the analogous discussion in section \[sec:ExplicitModelsR\]) while dimension five proton decay remains far below the experimental limits.
Similar to the case of $R$ symmetries, also non–$R$ symmetries can appear anomalous and hence be broken non–perturbatively. This, again, introduces a hierarchically small breaking of the discrete symmetry. It remains to be seen whether this mechanism can provide us with solutions to some of the open questions in flavor physics.
Summary {#sec:Summary}
=======
The flavor structure of the SM remains one of the greatest puzzles in particle physics. Flavor symmetries appear to be instrumental for solving this puzzle. Optimistically, one may hope to find a compelling model that explains the observed flavor structure. In this case the question of where the underlying family symmetries originate is of greatest importance, since a deeper understanding may allow us to relate the observed fermion masses and mixings to some fundamental properties of our world.
In this paper we have reviewed the possible origin of discrete symmetries, paying particular attention to discrete flavor symmetries. Discrete symmetries can arise from continuous symmetries by spontaneous breaking or from extra dimensions. While for Abelian symmetries the first option is a very common tool in model building, we have argued that obtaining non–Abelian discrete symmetries from continuous ones (in four dimensions) does not lead to compelling models. On the other hand, non–Abelian discrete symmetries do arise in models with extra dimensions, where they are deeply connected to the explanation of the repetition of families. In particular, in stringy extensions of the standard model such symmetries often arise. Therefore they can play an important role in understanding or addressing the flavor puzzle in the standard model as well as in solving flavor problems in extensions such as the MSSM.
We have also commented on discrete anomalies, which constrain possible discrete symmetries in bottom–up model building. As we have pointed out, one should carefully distinguish between embedding constraints and true anomaly constraints. Discrete symmetries that appear anomalous open very attractive possibilities in model building as they appear to be broken non–perturbatively, i.e. the breaking can be hierarchically small. This observation has been applied to the $\mu$ parameter of the MSSM. It remains to be seen whether hierarchies in flavor physics can have a similar explanation.
Acknowledgments {#acknowledgments .unnumbered}
---------------
One of us (M.R.) would like to thank Mu–Chun Chen for useful discussions and the UC Irvine, where part of this work was done, for hospitality. This work was partially supported by the SFB–Transregio TR33 “The Dark Universe” (Deutsche Forschungsgemeinschaft), the SFB 676, the European Union 7th network program “Unification in the LHC era” (PITN–GA–2009–237920) and the DFG cluster of excellence “Origin and Structure of the Universe” (Deutsche Forschungsgemeinschaft).
Orbifolds {#sec:appendixA}
=========
We give a brief introduction to orbifolds following [@Vaudrevange:2008sm; @RamosSanchez:2008tn]. We start with the geometrical construction in appendix \[sec:appendixA1\]. In appendix \[sec:appendixA2\] we describe how heterotic strings are compactified on orbifolds, and appendix \[sec:stringselectionrules\] reviews the string selection rules.
Construction of orbifolds {#sec:appendixA1}
-------------------------
From the geometrical point of view, a $d$–dimensional (toroidal) orbifold is defined as the quotient of $\mathbbm{R}^d$ by a discrete group $S$, called the space group. For ${\ensuremath{\mathbbm{Z}_{N}}}$ orbifolds, the elements of the space group $g \in S$ are given by $$g~=~\left(\theta^k, n_\alpha\, e_\alpha\right)
\qquad\text{and act as}\qquad
g\, X~=~\theta^k\, X + n_\alpha\, e_\alpha\;,$$ with sum over $\alpha=1,\ldots,d$ and $X \in \mathbbm{R}^d$. The $d$ (linearly independent) vectors $e_\alpha$ generate a lattice $\Gamma$ and hence define a torus $\mathbbm{T}^d$. The rotation $\theta$ is of order $N$ (i.e. $\theta^N =
{\ensuremath{\mathbbm{1}}}$) and is chosen to be an automorphism of $\Gamma$. Then the action of $S$ is not free, i.e. there are fixed points $X_g\in\mathbbm{R}^d$ with $g X_g = X_g$ for some $g \in S$. The space group element $g$ associated to the fixed point $X_g$ is called the constructing element, see figure \[fig:fixedpoints\]. The resulting orbifold is written as $\mathbbm{T}^d/{\ensuremath{\mathbbm{Z}_{N}}}$.
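For the simplest case, $\theta=-{\ensuremath{\mathbbm{1}}}$ acting on the square lattice ${\ensuremath{\mathbbm{Z}_{}}}^2$, the fixed point condition can be solved explicitly: $({\ensuremath{\mathbbm{1}}}-\theta)X_g = n_\alpha e_\alpha$ gives $X_g = n/2$ modulo the lattice. The following sketch enumerates the inequivalent solutions:

```python
from fractions import Fraction

# T^2/Z_2 with theta = -1 on the square lattice Z^2. A fixed point
# of g = (theta, (n1, n2)) solves (1 - theta) X = n, i.e. X = n / 2;
# inequivalent points are counted modulo the lattice.
fixed_points = set()
for n1 in range(-2, 3):
    for n2 in range(-2, 3):
        x = (Fraction(n1, 2) % 1, Fraction(n2, 2) % 1)
        fixed_points.add(x)

# The Z_2 orbifold of the two-torus has exactly four fixed points.
assert fixed_points == {(Fraction(0), Fraction(0)),
                        (Fraction(0), Fraction(1, 2)),
                        (Fraction(1, 2), Fraction(0)),
                        (Fraction(1, 2), Fraction(1, 2))}
```

These are the four corners of the fundamental domain, in agreement with the four twisted-sector states of the $\mathbbm{T}^2/{\ensuremath{\mathbbm{Z}_{2}}}$ entry in table \[tab:Survey\].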
(figure \[fig:fixedpoints\]: fixed points of the orbifold and their associated constructing elements)
Strings on orbifolds {#sec:appendixA2}
--------------------
Compactifying the heterotic string on six–dimensional orbifolds yields three different classes of closed strings: (i) untwisted strings with constructing element $\left({\ensuremath{\mathbbm{1}}}, 0\right)$ which would also close in uncompactified space, (ii) winding modes with constructing elements $\left({\ensuremath{\mathbbm{1}}}, n_\alpha
e_\alpha\right)$ which would also close on the torus and (iii) twisted strings, localized at the fixed points, with constructing elements $\left(\theta^k,
n_\alpha e_\alpha\right)$ with $k \neq 0$ which only close on the orbifold due to the $\theta$ rotation. The winding modes are massive with masses near the Planck scale. Since we are only interested in the low–energy effective action they are ignored in the following.
The geometrical action of the space group has to be amended by an action on the gauge degrees of freedom of the heterotic string in order to fulfill the stringy consistency conditions of modular invariance. In the standard approach this is achieved by so-called shifts and Wilson lines. Specifying these input parameters completely defines an orbifold compactification and allows one to compute the massless spectrum. An elegant way to obtain consistent orbifold models, for example MSSM–like models, to compute their massless spectra and to analyze the resulting four–dimensional effective theories is given by the public code “Orbifolder” [@Nilles:2011aj].
String selection rules {#sec:stringselectionrules}
----------------------
The CFT description allows one to compute scattering amplitudes of strings on orbifolds. In the four–dimensional effective theory these amplitudes enter as coupling strengths of allowed terms in the superpotential. Their computation is technically involved. Hence, as a first step one is only interested in the string selection rules, which determine whether a coupling is allowed or forbidden. In many cases the string selection rules can be interpreted as a symmetry of the four–dimensional effective theory. The (standard) string selection rules are:
1. [Gauge invariance]{}
2. [Space group selection rule: ]{} The space group selection rule reflects the geometrical possibility of orbifold strings to join. Consider $L$ strings with constructing elements $g_r = \left(\theta^{k^{(r)}}, n_\alpha^{(r)}e_\alpha\right)$. Then the coupling is allowed if $\prod_{r=1}^L g_r = {\ensuremath{\mathbbm{1}}}$, see figure \[fig:spacegroupselectionrule\].
(figure \[fig:spacegroupselectionrule\]: geometrical joining of orbifold strings, illustrating the space group selection rule)
3. [$R$ charge conservation: ]{} $R$ charge conservation is a discrete remnant of ten–dimensional Lorentz symmetry. It arises whenever the orbifold $\mathbbm{R}^6/S$ respects some additional rotational symmetry besides $\theta =
\text{diag}(\mathrm{e}^{2\pi{\mathrm{i}}\, v_1},\mathrm{e}^{2\pi{\mathrm{i}}\, v_2},
\mathrm{e}^{2\pi{\mathrm{i}}\, v_3})$. For example, for a factorized orbifold, i.e. an orbifold whose lattice $\Gamma$ is the direct product of three two–dimensional lattices $\Gamma = \Gamma_1\times\Gamma_2\times\Gamma_3$, a rotation in the sublattice $\Gamma_i$ by $\mathrm{e}^{2\pi{\mathrm{i}}\, v_i}$ is a symmetry of the theory. The rotation by $v_i$ is of order $N_i$ (i.e. $N_i v_i \in{\ensuremath{\mathbbm{Z}_{}}}$) and results in a ${\ensuremath{\mathbbm{Z}_{2 N_i}}}^R$ symmetry $$\sum_{r=1}^L -2R_r^i ~=~ 2 \mod 2N_i\;,$$ where $R_r^i = q_{\text{sh},r}^i - \tilde{N}^i_r + \tilde{N}^{\bar\imath}_r$ with the oscillator numbers $\tilde{N}^i_r$ and $\tilde{N}^{\bar\imath}_r$ (see e.g. [@Buchmuller:2006ik] for their definition), $q_{\text{sh},r}$ are the bosonic right–moving momenta and the factor $-2$ originates from the normalization such that fermions have their $R$ charge shifted by $-1$.
If the two–dimensional lattice $\Gamma_i$ has a higher symmetry than $N_i$ there is an additional string selection rule known as “rule 4”. For example, the ${\ensuremath{\mathrm{SU}(3)}}^3$ root lattice of a ${\ensuremath{\mathbbm{Z}_{3}}}$ orbifold allows for ${\ensuremath{\mathbbm{Z}_{6}}}$ sublattice rotations. If all strings involved in a given interaction sit at the same fixed point they feel the higher symmetry and the $R$ symmetry is enhanced to ${\ensuremath{\mathbbm{Z}_{4 N_i}}}^R$.
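The space group selection rule of item 2 can be checked by composing constructing elements explicitly. The sketch below does this for the $\mathbbm{T}^2/{\ensuremath{\mathbbm{Z}_{2}}}$ example, treating pure lattice translations as trivial and ignoring refinements of the rule:

```python
from fractions import Fraction

# Z_2 space group of T^2: elements g = (k, v), a rotation theta^k with
# theta = -1 together with a translation v, composed according to
# (k1, v1)(k2, v2) = ((k1 + k2) mod 2, v1 + theta^{k1} v2).
def compose(g1, g2):
    (k1, v1), (k2, v2) = g1, g2
    s = (-1) ** k1
    return ((k1 + k2) % 2, (v1[0] + s * v2[0], v1[1] + s * v2[1]))

def constructing_element(x):
    # g = (theta, (1 - theta) x) leaves the fixed point x invariant.
    return (1, (2 * x[0], 2 * x[1]))

half = Fraction(1, 2)
g = constructing_element((half, half))

# Two twisted strings at the same fixed point can join into an
# untwisted string: the product of their constructing elements is
# the identity, so the coupling passes the selection rule.
k, v = compose(g, g)
assert (k, v) == (0, (0, 0))
```

For strings at different fixed points the product is a non-trivial translation, which is where the finer ${\ensuremath{\mathbbm{Z}_{2}}}^3$ structure underlying the $D_4$ flavor symmetry comes in.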
[10]{}
T. Banks and N. Seiberg, Phys.Rev. **D83** (2011), 084019, `arXiv:1011.5120` \[hep-th\]. H. Ishimori, T. Kobayashi, H. Ohki, Y. Shimizu, H. Okada, et al., Prog.Theor.Phys.Suppl. **183** (2010), 1, `arXiv:1003.3552` \[hep-th\]. , editors J. Valle et al., Fortschritte der Physik (2012 to appear).
P. Fayet, Phys. Lett. **B69** (1977), 489. S. Dimopoulos, S. Raby, and F. Wilczek, Phys. Lett. **B112** (1982), 133. L. E. Ib[á]{}[ñ]{}ez and G. G. Ross, Nucl. Phys. **B368** (1992), 3. H. K. Dreiner, C. Luhn, and M. Thormeier, Phys. Rev. **D73** (2006), 075007, `hep-ph/0512163`. H. M. Lee, S. Raby, M. Ratz, G. G. Ross, R. Schieren, K. Schmidt-Hoberg, and P. K. Vaudrevange, Phys.Lett. **B694** (2011), 491, `arXiv:1009.0905` \[hep-ph\].
K.-S. Choi, I.-W. Kim, and J. E. Kim, JHEP **03** (2007), 116, `hep-ph/0612107`. K.-S. Choi, H. P. Nilles, S. Ramos-Sánchez, and P. K. Vaudrevange, Phys.Lett. **B675** (2009), 381, `arXiv:0902.3070` \[hep-th\].
R. Kappl, H. P. Nilles, S. Ramos-S[á]{}nchez, M. Ratz, K. Schmidt-Hoberg, and P. K. Vaudrevange, Phys. Rev. Lett. **102** (2009), 121602, `arXiv:0812.2120` \[hep-th\]. B. Petersen, M. Ratz, and R. Schieren, JHEP **08** (2009), 111, `arXiv:0907.4049` \[hep-ph\]. A. Adulpravitchai, A. Blum, and M. Lindner, JHEP **0909** (2009), 018, `arXiv:0907.2332` \[hep-ph\]. C. Luhn, JHEP **1103** (2011), 108, `arXiv:1101.2417` \[hep-ph\]. A. Merle and R. Zwicky, JHEP **1202** (2012), 128, `arXiv:1110.4891` \[hep-ph\]. G. Altarelli, F. Feruglio, and Y. Lin, Nucl.Phys. **B775** (2007), 31, `arXiv:hep-ph/0610165` \[hep-ph\]. T. Kobayashi, H. P. Nilles, F. Pl[ö]{}ger, S. Raby, and M. Ratz, Nucl. Phys. **B768** (2007), 135, `hep-ph/0611020`. R. Blumenhagen, B. K[ö]{}rs, D. L[ü]{}st, and S. Stieberger, Phys. Rept. **445** (2007), 1, `hep-th/0610327`. J. J. Heckman and C. Vafa, Nucl.Phys. **B837** (2010), 137, `arXiv:0811.2417` \[hep-th\]. O. Lebedev, H. P. Nilles, S. Raby, S. Ramos-S[á]{}nchez, M. Ratz, P. K. S. Vaudrevange, and A. Wingerter, Phys. Lett. **B645** (2007), 88, `hep-th/0611095`. O. Lebedev, H. P. Nilles, S. Ramos-Sánchez, M. Ratz, and P. K. S. Vaudrevange, Phys. Lett. **B668** (2008), 331, `arXiv:0807.4384` \[hep-th\]. L. B. Anderson, J. Gray, A. Lukas, and E. Palti, Phys.Rev. **D84** (2011), 106005, `arXiv:1106.4804` \[hep-th\]. L. B. Anderson, J. Gray, A. Lukas, and E. Palti, `arXiv:1202.1757` \[hep-th\]. L. E. Ib[á]{}[ñ]{}ez, H. P. Nilles, and F. Quevedo, Phys. Lett. **B187** (1987), 25. C. Vafa, Nucl. Phys. **B273** (1986), 592. L. J. Dixon, D. Friedan, E. J. Martinec, and S. H. Shenker, Nucl. Phys. **B282** (1987), 13. M. Dine, R. G. Leigh, and D. A. MacIntire, Phys.Rev.Lett. **69** (1992), 2030, `arXiv:hep-th/9205011` \[hep-th\]. T. Kobayashi and O. Lebedev, Phys. Lett. **B565** (2003), 193, `hep-th/0304212`. M.-C. Chen and K. Mahanthappa, Phys.Lett. **B681** (2009), 444, `arXiv:0904.1721` \[hep-ph\]. W. Buchm[ü]{}ller, K. Hamaguchi, O. 
Lebedev, and M. Ratz, Phys. Rev. Lett. **96** (2006), 121602, `hep-ph/0511035`. W. Buchm[ü]{}ller, K. Hamaguchi, O. Lebedev, and M. Ratz, Nucl. Phys. **B785** (2007), 149, `hep-th/0606187`. O. Lebedev, H. P. Nilles, S. Raby, S. Ramos-S[á]{}nchez, M. Ratz, P. K. S. Vaudrevange, and A. Wingerter, Phys. Rev. **D77** (2007), 046013, `arXiv:0708.2691 [hep-th]`. J. E. Kim, J.-H. Kim, and B. Kyae, JHEP **06** (2007), 034, `hep-ph/0702278`. P. Ko, T. Kobayashi, J.-h. Park, and S. Raby, Phys. Rev. **D76** (2007), 035005, `arXiv:0704.2807 [hep-ph]`. R. S. Chivukula and H. Georgi, Phys. Lett. **B188** (1987), 99. A. J. Buras, P. Gambino, M. Gorbahn, S. J[ä]{}ger, and L. Silvestrini, Phys. Lett. **B500** (2001), 161, `hep-ph/0007085`. G. D’Ambrosio, G. F. Giudice, G. Isidori, and A. Strumia, Nucl. Phys. **B645** (2002), 155, `hep-ph/0207036`. S. Krippendorf, H. P. Nilles, M. Ratz, and M. W. Winkler, `arXiv:1201.4857` \[hep-ph\]. S. F[ö]{}rste, H. P. Nilles, S. Ramos-S[á]{}nchez, and P. K. Vaudrevange, Phys.Lett. **B693** (2010), 386, `arXiv:1007.3915` \[hep-ph\]. H. M. Lee, S. Raby, M. Ratz, G. G. Ross, R. Schieren, et al., Nucl.Phys. **B850** (2011), 1, `arXiv:1102.3595` \[hep-ph\]. F. Br[ü]{}mmer, R. Kappl, M. Ratz, and K. Schmidt-Hoberg, JHEP **04** (2010), 006, `arXiv:1003.0084` \[hep-ph\]. G. F. Giudice and A. Masiero, Phys. Lett. **B206** (1988), 480. J. E. Kim and H. P. Nilles, Phys. Lett. **B138** (1984), 150. L. E. Ib[á]{}[ñ]{}ez and G. G. Ross, Phys. Lett. **B260** (1991), 291. T. Araki, Prog. Theor. Phys. **117** (2007), 1119, `hep-ph/0612306`. K. Fujikawa, Phys. Rev. Lett. **42** (1979), 1195. T. Araki et al., Nucl. Phys. **B805** (2008), 124, `arXiv:0805.0207` \[hep-th\]. T. Banks and M. Dine, Phys. Rev. **D45** (1992), 1424, `hep-th/9109045`. P. H. Frampton and T. W. Kephart, Int. J. Mod. Phys. **A10** (1995), 4689, `hep-ph/9409330`. C. Luhn and P. Ramond, JHEP **0807** (2008), 085, `arXiv:0805.1736` \[hep-ph\]. P. K. S. Vaudrevange, `arXiv:0812.3503` \[hep-th\]. 
S. Ramos-S[á]{}nchez, Fortsch.Phys. **10** (2009), 907, `arXiv:0812.3560` \[hep-th\], Ph.D. Thesis (Advisor: H.P. Nilles). H. P. Nilles, S. Ramos-S[á]{}nchez, P. K. Vaudrevange, and A. Wingerter, Comput.Phys.Commun. **183** (2012), 1363, `arXiv:1110.5229` \[hep-th\], web page http://projects.hepforge.org/orbifolder/.
[^1]: Non–perturbative effects generate couplings of the form $\exp (-{\mathrm{i}}a) \phi_1 \ldots \phi_n$, where $a$ denotes the GS axion and the $\phi_i$ some (matter) fields of the theory. Such terms are invariant under the full discrete group when one takes the shift transformation of the GS axion $a$ into account. But, when $a$ obtains a vacuum expectation value, the (‘anomalous’ part of the) discrete group is broken spontaneously.
---
abstract: 'The operator product expansion in four-dimensional superconformal field theory is discussed. It is demonstrated that the OPE takes a particularly simple form for certain classes of operators. These are chiral operators, principally of interest in theories with $N=1$ or $N=2$ supersymmetry, and analytic operators, of interest in $N=2$ and $N=4$. It is argued that the Green’s functions of such operators can be determined up to constants.'
---
CERN-TH/96-159\
King’s College/kcl-th-96-13\
hepth/9606
**P.S. Howe**
CERN
Geneva, Switzerland
and
**P.C. West**
Department of Mathematics
King’s College, London
CERN-TH/96-159\
A central rôle in two-dimensional conformal field theories is played by operator product expansions [@bpz]. Indeed, all the properties of these theories can be encoded in their OPE’s. The OPE of two primary fields yields not only primary fields, but also descendants of primary fields. However, strong constraints on the OPE follow from demanding that it be compatible with conformal symmetry. This procedure determines the space-time dependent coefficients with which the primary fields occur up to constants of proportionality. These constants, however, can only be determined by the details of the model, in effect what null states it possesses. Given these constants, the rest of the OPE, that is all the dependence on the descendants, is determined by conformal symmetry. In particular, in minimal models, the OPE’s close with only a finite number of primary fields, so that the correlation functions of such theories can in principle be calculated using the OPE and the two and three-point functions.
In four dimensions conformal field theories were studied in some detail in a general setting some time ago [@tmp], but at that time no non-trivial examples were known. However, it is now known that there are many supersymmetric gauge theories which are superconformal, certainly in perturbation theory and perhaps beyond. These theories are: the maximally supersymmetric ($N=4$) models [@sw], a class of $N=2$ [@hsw] models and certain $N=1$ theories [@pw]. It has further been conjectured that more supersymmetric theories may have non-trivial fixed points [@w]. It is therefore appropriate to reconsider four-dimensional operator product expansions in the supersymmetric context.
It has been observed that the chiral sector of superconformally invariant theories in four dimensions has certain similarities with such sectors in two-dimensional superconformal theories. In particular, the chiral and dilation weights of chiral operators are related if the theory is at a fixed point [@fiz]. It has also been pointed out that the so-called analytic sectors of $N=2$ and $N=4$ theories seem to have similar properties [@hw]. Moreover, it has been argued that, although these supersymmetric theories are only invariant under a finite dimensional superconformal group, their very special form allows one to solve, non-perturbatively, for large classes of their Green’s functions. In particular, one can determine the Green’s functions in any chiral or anti-chiral sector and it is likely that one can also do this in the analytic sector [@hw]. In this paper, we give the operator product expansions in these sectors and find close similarities to the corresponding two-dimensional results.
We begin by giving a discussion of operator product expansions and their conformal properties which applies in a general setting. For simplicity we shall take spacetime or superspace to be complex. Let us denote the complex, finite-dimensional (super)conformal group by $G$, for example, for four-dimensional $N$-extended supersymmetry $G=SL(4|N)$. The (super)conformal theories of interest to us are described by (super)fields which live on the (super)space $P\bsh G$ where $P$ is a parabolic subgroup of $G$. Which subgroups one should take in four dimensions can be found in references [@hh; @hw]. We will denote the coordinates of $P\bsh G$ by $X$.
We define primary fields to be fields that transform under an induced representation of $G$. To keep life simple, we shall suppose that the fields are one-dimensional, i.e. transform under a one-dimensional subgroup of $P$. In many cases such fields are the most interesting to consider. For an infinitesimal (super)conformal transformation we have $$\delta\f~=~V\f+ q\D\,\f\;,$$ where $q$ is a charge associated with the representation (related to the dilation weight of the field) and $V$ is the vector field generating the transformation on the coset space, $V(X)=\delta X{\partial\over{\partial X}}$, $\delta X$ being the change in $X$. The function $\D$ characterises the induced representation. We can choose coordinates such that the components of $V$ are polynomials of degree 2 in the components of $X$ and such that $\D$ is a polynomial of degree 1. Descendants are space-time or superspace derivatives of primary fields. These will not in general transform as induced representations. Indeed, under certain conformal transformations descendants mix into primary fields. One can also take descendants to be given by group generators acting on the primary fields, but this is equivalent to the above description.
Now consider a complete set of operators $\{\F_I\}$ comprising both primary fields $\{\f_i\}$ and their descendants. We shall assume that we can write an operator product expansion in the standard form $$\F_I(X_1)\,\F_J(X_2)=\sum_K f_{IJ}{}^K(X_1,X_2)\,\F_K(X_2). \tag{3}$$ Applying an infinitesimal (super)conformal transformation to this and considering only primary fields on the left-hand side we find $$\sum_K\big[(V_1+V_2+q_i\D_1+q_j\D_2)f_{ij}{}^K\big]\F_K=\sum_K f_{ij}{}^K\big(\delta\F_K(2)-V_2\F_K(2)\big), \tag{4}$$ where the subscripts 1 and 2 refer to the two points involved. Under a transformation for which $\D$ is $X$-independent, the terms proportional to the primary fields receive no contributions from the transformations of the descendants, and we get $$\big(V_1+V_2+(q_i+q_j-q_k)\D\big)f_{ij}{}^k(1,2)=0. \tag{5}$$ Hence, for these transformations, the coefficient $f_{ij}{}^k$ behaves as a two-point function with total $q$ weight $(q_i+q_j-q_k)$. For the ordinary conformal group these transformations are translations, dilations and Lorentz rotations, and they are sufficient to determine $f_{ij}{}^k$ up to a constant. In particular, if the fields under consideration are Lorentz scalars and the primary fields have dilation weights $d_i$ ($=q_i$ in this case), then $$f_{ij}{}^k={c_{ij}{}^k\over(x_{12}^2)^{{1\over2}(d_i+d_j-d_k)}}, \tag{6}$$ where the $c_{ij}{}^k$ are constants and $x_{12}=x_1-x_2$, with $x_1$ and $x_2$ the positions of the primary fields $\f_i$ and $\f_j$ in spacetime.
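As a quick consistency check (our own, not part of the original argument): writing the scalar coefficient as $f_{ij}{}^k=c_{ij}{}^k\,(x_{12}^2)^{-n}$ and using the dilation data $V(D)=x\cdot\partial$, $\D=1$, the constant-$\D$ condition fixes the exponent $n$:

```latex
\begin{aligned}
(x_1\cdot\partial_1+x_2\cdot\partial_2)\,(x_{12}^2)^{-n}
   &= -2n\,(x_{12}^2)^{-n-1}\,x_{12}\cdot(x_1-x_2)
    = -2n\,(x_{12}^2)^{-n},\\
\bigl(V(D)_1+V(D)_2+(d_i+d_j-d_k)\bigr)\,f_{ij}{}^k &= 0
   \quad\Longleftrightarrow\quad n=\tfrac12\,(d_i+d_j-d_k).
\end{aligned}
```

This is the familiar statement that translation, Lorentz and dilation invariance alone determine the scalar two-point structure up to normalisation.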
For the remaining transformations the function $\D$ is (super)space dependent. However, we can still examine only the primary field terms, provided we take into account the transformations of descendants which result in primary fields. This calculation determines the coefficients of the lowest level descendants.
Let us illustrate the procedure in the simplest situation, i.e. the ordinary conformal group in four dimensions. The only remaining transformations are the special conformal transformations with parameter $C_{\dot\beta\beta}$, for which $\D=x^{\beta\dot\beta}C_{\dot\beta\beta}$. Under this transformation only the lowest descendants, i.e. the set of fields $\{\partial_{\alpha\dot\alpha}\f_i\}$, transform into primary fields. Including these terms explicitly, the OPE of equation (3) becomes $$\f_i(x_1)\f_j(x_2)=\sum_k f_{ij}{}^k(x_1,x_2)\,\f_k(x_2)+\sum_k f_{ij}{}^{k;\alpha\dot\alpha}(x_1,x_2)\,\partial_{\alpha\dot\alpha}\f_k(x_2)+\ldots \tag{7}$$ where the dots denote contributions from higher-order descendants. Applying our previous argument, again leaving aside the special conformal transformations, we recover (5) and find that $$f_{ij}{}^{k;\alpha\dot\alpha}(x_1,x_2)={(x_{12})^{\alpha\dot\alpha}\over(x_{12}^2)^{{1\over2}(d_i+d_j-d_k)}}\;c_{ij}{}^{k(1)}, \tag{8}$$ where the $c_{ij}{}^{k(1)}$ are constants and $x_{12}^{\alpha\dot\alpha}=x_1^{\alpha\dot\alpha}-x_2^{\alpha\dot\alpha}$. Applying a special conformal transformation we then find $$c_{ij}{}^{k(1)}=c_{ij}{}^k\,{(d_i-d_j+d_k)\over2d_k}. \tag{9}$$
By carrying out all conformal transformations on each side of the OPE and comparing coefficients of the descendant fields we can determine all the descendant contributions in terms of the constants $c_{ij}^k$. Thus the situation is the same as in two dimensional conformal field theories.
We now apply the above procedure to chiral superfields in four-dimensional $N$-extended supersymmetry. The analysis can be adapted straightforwardly to other dimensions where chiral fields are available. Due to the chiral constraint, a chiral superfield can be viewed as a function of only $x^{\alpha\dot\alpha}$ and $\theta^{\alpha a}$, where $x$ is an appropriate chiral variable and $a=1,\ldots,N$. The operator product expansion of two chiral superfields $\f_i(X_1)$ and $\f_j(X_2)$ can be written as $$\begin{aligned}
\f_i(X_1)\f_j(X_2)&=\sum_k\Big\{f_{ij}{}^k(X_1,X_2)\,\f_k(X_2)+f_{ij}{}^{k;\alpha a}(X_1,X_2)\,(\partial_{\alpha a}\f_k)(X_2)\\
&\qquad+f_{ij}{}^{k;\alpha\dot\alpha}(X_1,X_2)\,(\partial_{\alpha\dot\alpha}\f_k)(X_2)+\ldots\Big\},\end{aligned}$$ where again the dots denote contributions from higher-order descendants and where we have used the shorthand notation $\partial_{\alpha\dot\alpha}={\partial\over\partial x^{\alpha\dot\alpha}}$ and $\partial_{\alpha a}={\partial\over\partial\theta^{\alpha a}}$.
We now give the superconformal transformations written in terms of the chiral variables. The vector fields which generate the translations $(P)$, dilations $(D)$ and special conformal transformations $(K)$ are $$\begin{aligned}
V(P)_{\alpha\dot\alpha}&=\partial_{\alpha\dot\alpha}\\
V(D)&=x^{\alpha\dot\alpha}\partial_{\alpha\dot\alpha}+{1\over2}\theta^{\alpha a}\partial_{\alpha a}\\
V(K)^{\alpha\dot\alpha}&=x^{\alpha\dot\beta}x^{\beta\dot\alpha}\partial_{\beta\dot\beta}+x^{\beta\dot\alpha}\theta^{\alpha a}\partial_{\beta a},\end{aligned}$$ and the associated $\D$'s are $$\begin{aligned}
\D(P)_{\alpha\dot\alpha}&=0\\
\D(D)&=1\\
\D(K)^{\alpha\dot\alpha}&=x^{\alpha\dot\alpha}.\end{aligned}$$ The vector field generating internal symmetry $(I)$ transformations ($SL(N)$) is $$V(I)_a{}^b=\theta^{\alpha b}\partial_{\alpha a}-{1\over N}\delta_a^b\,\theta^{\alpha c}\partial_{\alpha c},$$ and the function $\D(I)_a{}^b$ vanishes in this case. For $N\neq4$ we also have $R$-symmetry transformations generated by $$V(R)=\theta^{\alpha a}\partial_{\alpha a}\qquad\hbox{with}\qquad\D(R)={2N\over N-4}.$$ The $Q$-supersymmetry transformations are generated by $$\begin{aligned}
V(Q)_{\alpha a}&=\partial_{\alpha a}\\
V(Q)_{\dot\alpha}{}^a&=-\theta^{\alpha a}\partial_{\alpha\dot\alpha},\end{aligned}$$ and the $S$-supersymmetry generators are $$\begin{aligned}
V(S)_a{}^{\dot\alpha}&=x^{\alpha\dot\alpha}\partial_{\alpha a}\\
V(S)^{\alpha a}&=-x^{\beta\dot\beta}\theta^{\alpha a}\partial_{\beta\dot\beta}+\theta^{\beta b}\theta^{\alpha a}\partial_{\beta b},\end{aligned}$$ and only the last of these has a non-vanishing $\D$, given by $$\D(S)^{\alpha a}=-\theta^{\alpha a}.$$ There are also Lorentz transformations, which act in the obvious way on the vector and spinor coordinates.
The transformations for which $\D$ is constant can, according to our general arguments, be used to determine the superspace dependence of the coefficients of the primary chiral superfields. Translations and supersymmetry transformations imply that the coefficients are functions of $x_{12}^{\alpha\dot\alpha}\equiv x_1^{\alpha\dot\alpha}-x_2
^{\alpha\dot\alpha}$ and $\theta_{12}^{\alpha a}=\theta_1^{\alpha a}-\theta_2
^{\alpha a}$. $R$-symmetry implies that if $f_{ij}{}^k$ is to be non-zero it must be proportional to $\theta_{12}^{\alpha a}$ to the power $q_i+q_j-q_k$. (There are no chiral fields of interest in $N=4$ rigid supersymmetry, so this is always valid for the applications we have in mind.) Let us consider in detail the case $q_i+q_j=q_k$. Dilation and Lorentz symmetry imply that $$f_{ij}{}^k=c_{ij}{}^k,\qquad f_{ij}{}^{k;\alpha a}=\theta_{12}^{\alpha a}\,c_{ij}{}^{k(2)}\qquad\hbox{and}\qquad f_{ij}{}^{k;\alpha\dot\alpha}=(x_{12})^{\alpha\dot\alpha}\,c_{ij}{}^{k(3)}.$$ To fix the descendant coefficients we use special conformal transformations and special ($S$) supersymmetries. We find that the contribution of $\f_k$ and its descendants to the OPE is $$\f_i(X_1)\f_j(X_2)=c_{ij}{}^k\Big\{\f_k(X_2)+{q_i\over q_k}\,\theta_{12}^{\alpha a}(\partial_{\alpha a}\f_k)(X_2)+{q_i\over q_k}\,x_{12}^{\alpha\dot\alpha}(\partial_{\alpha\dot\alpha}\f_k)(X_2)\Big\}+\hbox{higher order descendants}.$$ This result is essentially identical to the analogous result in two-dimensional superconformal field theory.
We may also have contributions from primaries which are Lorentz scalars and which have $q_i+q_j-q_k=3$ for $N=1$, and $q_i+q_j-q_k=2$ for $N=2$. Such terms have leading contributions of the form $$f_{ij}{}^k=c_{ij}{}^k\,{\theta_{12}^2\over x_{12}^4}$$ for $N=1$, and $$f_{ij}{}^k=c_{ij}{}^k\,{\theta_{12}^4\over x_{12}^4}$$ for $N=2$. There are also primary fields with undotted spinor indices and internal indices. For example, in $N=1$, one can have a contribution to the OPE of Lorentz scalars $\f_i$ and $\f_j$ from a spin one-half field $\f_{k\a}$ with charge $q_k$ if $q_i+q_j-q_k={3\over2}$, for which the leading contribution would be $$f_{ij}{}^{k\alpha}=c_{ij}{}^k\,{\theta_{12}^\alpha\over x_{12}^2}.$$ However, for any pair of primary chiral fields, one always finds a finite number of primaries on the right-hand side of the OPE, determined by the charges and spinorial representations involved.
We now consider harmonic superfields [@gikos]. For the theories of most interest to us, i.e. the extended rigidly supersymmetric theories, superfields of this type occur in $N=4$ Yang-Mills theory and in the $N=2$ matter sector of $N=2$ theories. To be concrete we consider the former case but the formalism can be easily adapted to $N=2$. The $N=4$ harmonic superspace of interest to us is the extension of Minkowski superspace by the internal space $\bbF=S(U(2)\times U(2))\bsh SU(4)$, and the fields we wish to consider are analytic fields on this space, that is to say, fields which are analytic with respect to the internal space $\bbF$, and which are also Grassmann analytic ($G$-analytic). The latter means that they are annihilated by half of the superspace covariant derivatives, and therefore depend on only half of the odd coordinates, in a similar fashion to chiral fields. The difference is that the derivatives involve the coordinates of the internal space and this allows one to use a mixture of dotted and undotted spinor derivatives. These fields can be defined on a new superspace, analytic superspace, which is similar to chiral superspace. It has local coordinates $$X=\{x^{\alpha\dot\alpha},\ \lambda^{\alpha a^\prime},\
\pi^{a\dot\alpha},
\ y^{aa^\prime}\}$$ where $a$ and $a'$ can both take on two values. (Locally, the internal space is just like ordinary complex Minkowski space).
The operator product expansion of two analytic fields takes the form $$\begin{aligned}
\f_i(X_1)\f_j(X_2)&=\sum_k\Big\{f_{ij}{}^k(X_1,X_2)\,\f_k(X_2)+f_{ij}{}^{k,\alpha\dot\alpha}(X_1,X_2)\,(\partial_{\alpha\dot\alpha}\f_k)(X_2)\\
&\quad+f_{ij}{}^{k,\alpha a'}(X_1,X_2)\,(\partial_{\alpha a'}\f_k)(X_2)+f_{ij}{}^{k,a\dot\alpha}(X_1,X_2)\,(\partial_{a\dot\alpha}\f_k)(X_2)\\
&\quad+f_{ij}{}^{k,aa'}(X_1,X_2)\,(\partial_{aa'}\f_k)(X_2)+\hbox{higher order descendants}\Big\}.\end{aligned}$$
The superconformal transformations, when written in analytic coordinates, take a particularly simple form. The vector fields which generate them are, for translations, dilations, Lorentz transformations ($M$) and special conformal transformations, $$\begin{aligned}
V(P)_{\alpha\dot\alpha}&=\partial_{\alpha\dot\alpha}\\
V(D)&=x^{\alpha\dot\alpha}\partial_{\alpha\dot\alpha}+{1\over2}\lambda^{\alpha a'}\partial_{\alpha a'}+{1\over2}\pi^{a\dot\alpha}\partial_{a\dot\alpha}\\
V(M)_\alpha{}^\beta&=\big(x^{\beta\dot\gamma}\partial_{\alpha\dot\gamma}+\lambda^{\beta c'}\partial_{\alpha c'}\big)-\hbox{trace}\\
V(M)_{\dot\alpha}{}^{\dot\beta}&=\big(x^{\gamma\dot\beta}\partial_{\gamma\dot\alpha}+\pi^{c\dot\beta}\partial_{c\dot\alpha}\big)-\hbox{trace}\\
V(K)^{\alpha\dot\alpha}&=x^{\alpha\dot\beta}x^{\beta\dot\alpha}\partial_{\beta\dot\beta}+x^{\beta\dot\alpha}\lambda^{\alpha b'}\partial_{\beta b'}+\pi^{b\dot\alpha}x^{\alpha\dot\beta}\partial_{b\dot\beta}+\pi^{b\dot\alpha}\lambda^{\alpha b'}\partial_{bb'}.\end{aligned}$$ For internal symmetry transformations they are $$\begin{aligned}
V(I)_{aa'}&=\partial_{aa'}\\
V(I)&=y^{aa'}\partial_{aa'}+{1\over2}\lambda^{\alpha a'}\partial_{\alpha a'}+{1\over2}\pi^{a\dot\alpha}\partial_{a\dot\alpha}\\
V(I)_a{}^b&=\big(\pi^{b\dot\alpha}\partial_{a\dot\alpha}+y^{bc'}\partial_{ac'}\big)-\hbox{trace}\\
V(I)_{a'}{}^{b'}&=\big(\lambda^{\alpha b'}\partial_{\alpha a'}+y^{cb'}\partial_{ca'}\big)-\hbox{trace}\\
V(I)^{aa'}&=y^{ab'}y^{ba'}\partial_{bb'}+\lambda^{\alpha a'}y^{ab'}\partial_{\alpha b'}+y^{ba'}\pi^{a\dot\alpha}\partial_{b\dot\alpha}+\lambda^{\alpha a'}\pi^{a\dot\alpha}\partial_{\alpha\dot\alpha}.\end{aligned}$$ For $Q$-supersymmetry transformations we have $$\begin{aligned}
V(Q)_{\alpha a'}&=\partial_{\alpha a'}\\
V(Q)_{a\dot\alpha}&=\partial_{a\dot\alpha}\\
V(Q)_\alpha{}^a&=y^{ab'}\partial_{\alpha b'}+\pi^{a\dot\beta}\partial_{\alpha\dot\beta}\\
V(Q)_{\dot\alpha}{}^{a'}&=y^{ba'}\partial_{b\dot\alpha}-\lambda^{\beta a'}\partial_{\beta\dot\alpha},\end{aligned}$$ while for $S$-supersymmetry we have $$\begin{aligned}
V(S)_a{}^\alpha&=x^{\alpha\dot\beta}\partial_{a\dot\beta}+\lambda^{\alpha b'}\partial_{ab'}\\
V(S)_{a'}{}^{\dot\alpha}&=x^{\beta\dot\alpha}\partial_{\beta a'}-\pi^{b\dot\alpha}\partial_{ba'}\\
V(S)^{\dot\alpha a}&=x^{\beta\dot\alpha}y^{ab'}\partial_{\beta b'}-\pi^{b\dot\alpha}y^{ab'}\partial_{bb'}+x^{\beta\dot\alpha}\pi^{a\dot\beta}\partial_{\beta\dot\beta}-\pi^{b\dot\alpha}\pi^{a\dot\beta}\partial_{b\dot\beta}\\
V(S)^{\alpha a'}&=y^{ba'}x^{\alpha\dot\beta}\partial_{b\dot\beta}+y^{ba'}\lambda^{\alpha b'}\partial_{bb'}-\lambda^{\beta a'}x^{\alpha\dot\beta}\partial_{\beta\dot\beta}-\lambda^{\beta a'}\lambda^{\alpha b'}\partial_{\beta b'}.\end{aligned}$$ The non-zero $\D$'s are $$\begin{aligned}
\D(K)^{\alpha\dot\alpha}&=x^{\alpha\dot\alpha}\\
\D(I)&=-1\\
\D(I)^{aa'}&=-y^{aa'}\\
\D(S)^{\dot\alpha a}&=\pi^{a\dot\alpha}\\
\D(S)^{\alpha a'}&=-\lambda^{\alpha a'}.\end{aligned}$$ The transformation of an analytic field with charge $q$ takes the form given in equation (1); the charge is the charge of the field with respect to the internal $U(1)$, i.e. the $U(1)$ of the isotropy group of the internal space $\bbF$. In harmonic superspace it would correspond to a field satisfying $D_o\f=q\f$, where $D_o$ is the derivative on $SU(4)$ corresponding to this $U(1)$. In the case of $N=2$ there is also an $R$-symmetry transformation, generated by $$V(R)=\lambda^\alpha\partial_\alpha-\pi^{\dot\alpha}\partial_{\dot\alpha}.$$ (In $N=2$ analytic space the internal indices $a$ and $a'$ take only one value and so can be dropped.)
We shall now analyse the OPE for analytic fields using the method outlined above. One finds again that the primary coefficients $\{f_{ij}{}^k\}$ must obey the same equations as a two-point function with total charge ${1\over2}(q_i+q_j-q_k)$, i.e. $$\Big(V_1+V_2+{1\over2}(q_i+q_j-q_k)(\D_1+\D_2)\Big)f_{ij}{}^k=0.$$ Now the basic two-point function in $N=4$ is that of an Abelian Yang-Mills field strength $W$, which has charge $q=1$. It is given by $$\langle W(X_1)\,W(X_2)\rangle={\hat y_{12}^{\,2}\over x_{12}^{\,2}}=:g_{12},$$ where $$\hat y_{12}^{\,aa'}=y_{12}^{\,aa'}+2\,{\lambda_{12}^{\alpha a'}\,\pi_{12}^{a\dot\alpha}\,(x_{12})_{\alpha\dot\alpha}\over x_{12}^{\,2}}.$$ One then has $$f_{ij}{}^k=c_{ij}{}^k\,(g_{12})^{{1\over2}(q_i+q_j-q_k)}$$ for some constants $\{c_{ij}{}^k\}$.
We can determine the coefficients for the descendant fields in the same way as before, and so we arrive at the result $$\f_i(X_1)\f_j(X_2)=\sum_k c_{ij}{}^k\,(g_{12})^{{1\over2}(q_i+q_j-q_k)}\Big\{\f_k(X_2)+{(q_i-q_j+q_k)\over2q_k}\big(x_{12}^{\alpha\dot\alpha}\partial_{\alpha\dot\alpha}+\lambda_{12}^{\alpha a'}\partial_{\alpha a'}+\pi_{12}^{a\dot\alpha}\partial_{a\dot\alpha}+y_{12}^{aa'}\partial_{aa'}\big)\f_k(X_2)\Big\}+\ldots$$
In an $N=4$ Yang-Mills theory with gauge group $SU(M)$, for example, the basic local analytic operators are given by the gauge-invariant powers of the field, i.e. the operators $$A_m:={\rm tr}(W^m),\qquad m=2,\ldots,M-1.$$ The operator $A_m$ has charge $q=m$. In particular, $A_2$ is the supercurrent, which we shall denote by $T$. Its components include the energy-momentum tensor, the spacetime supersymmetry currents and the currents corresponding to the internal $SU(4)$ symmetry group. The OPE of two $T$'s is $$T(1)T(2)=c_o\,(g_{12})^2+c_2\,g_{12}\Big\{T(2)+{1\over2}\big(x_{12}^{\alpha\dot\alpha}\partial_{\alpha\dot\alpha}+\lambda_{12}^{\alpha a'}\partial_{\alpha a'}+\pi_{12}^{a\dot\alpha}\partial_{a\dot\alpha}+y_{12}^{aa'}\partial_{aa'}\big)T(2)\Big\}+\hbox{finite terms}.$$ For most four-dimensional theories the OPE of the energy-momentum tensor with itself does not close on itself. However, for $N=4$ Yang-Mills it does, and the result is strikingly similar to the two-dimensional case; indeed, we can rescale $T$ such that $c_2=1$, in which case it is tempting to interpret $c_o$ as the central charge. We remark that it is only in $N=4$ that this can happen, because in $N=1$ and $N=2$ the supercurrent is neither chiral nor analytic, and we believe that it is only these special types of superfields which have such simple OPEs. For a discussion of the OPE in $N=1$ supersymmetric theories we refer the reader to [@gj].
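A small arithmetic cross-check (ours, under the assumption that the first-level descendant coefficient takes the general form $(q_i-q_j+q_k)/2q_k$): the supercurrent case $q_i=q_j=q_k=2$ gives

```latex
\frac{q_i-q_j+q_k}{2q_k}\bigg|_{q_i=q_j=q_k=2}=\frac{2}{4}=\frac12,
```

consistent with the coefficient multiplying the first-order descendant terms of $T(2)$ in the $T(1)T(2)$ OPE.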
There may be operators other than the $\{A_m\}$ and their descendants appearing in the analytic OPE. For example, given a gauge-invariant scalar superfield on super Minkowski space one can always construct a gauge-invariant analytic field on harmonic superspace by applying enough spinorial derivatives. A similar situation can arise with chiral fields in $N=1$: if $\f$ is chiral then so is $\bar D^2\bar\f$, but the latter is not in general primary unless $\f$ has weight ${1\over3}$. Clearly, analytic operators obtained in this way will only be able to contribute to the analytic OPE if they are primary. The lowest-dimensional analytic operator of this type that one can construct in $N=4$ has naïve dimension 6, so that, even if it is primary, it cannot contribute to the OPE of two supercurrents.
We conclude with a consequence of the OPE for analytic fields in either $N=2$ or $N=4$. From a formal point of view the spacetime coordinate $x$ and the internal coordinate $y$ appear in a very symmetrical manner. Indeed, as we have remarked earlier, in $N=4$ the internal space is locally the same as (complex) Minkowski space. However, from a physical point of view spacetime and the internal space are completely different. In particular, the singularities which appear in the OPE as $x_1$ approaches $x_2$ are due to the usual difficulties encountered in defining local products of operators in quantum field theory. On the other hand, the rôle of $y$ is simply to act as a device to help us exploit the internal symmetries of the theory. The internal space is compact, and no internal points need or should be removed from the domain of definition of Green’s functions of many operators. Therefore singularities in the internal variables are completely spurious and must cancel. One way of seeing this is to note that any analytic operator can be reexpressed in terms of a polynomial in $y$ with coefficients which are fields on ordinary super Minkowski space. If one examines the right-hand side of the analytic OPE above, one sees that the absence of singularities in $y$ requires that ${1\over2}(q_i+q_j-q_k)$ be an integer, and furthermore that there can only be a finite number of primary fields occurring because otherwise one will introduce poles in $y$ for sufficiently large values of $q_k$. Thus the situation is similar in some respects to that obtaining in two-dimensional minimal models. Given that the analytic OPE is valid, and that analyticity imposes finiteness of the number of primaries occurring in any given OPE, it is tempting to conclude that any Green’s function of analytic operators can in principle be computed knowing the three-point functions and the OPE. Any such Green’s function depends only on a few arbitrary constants, i.e. 
the $\{c_{ij}{}^k\}$ and the constants in the three-point functions. In other words, the analytic OPE for $N=4$ (and for $N=2$) suggests that this sector of these theories is solvable in the full quantum theory. We note, however, that this result depends on some assumptions, principally the form of the OPE for analytic fields and the assumption that analyticity is maintained in the quantum theory. The latter seems natural given that the theories we are interested in are superconformal. In a future paper [@hw2] we shall give a more detailed discussion of the Green's functions using analyticity and superconformal invariants.
[99]{}
A.A. Belavin, A.M. Polyakov and A.B. Zamolodchikov, Nucl. Phys. B241 (1984) 333.
I. Todorov, M. Mintchev and V. Petkova, [*Conformal Invariance in Quantum Field Theory*]{}, Accademia Nazionale dei Lincei, Scuola Normale Superiore (1979), and references therein.
M. Sohnius and P. West, Phys. Lett. B100 (1981) 45; S. Mandelstam, Nucl. Phys. B213 (1983) 149; P.S. Howe, K.S. Stelle and P.K. Townsend, Nucl. Phys. B214 (1983) 519; Nucl. Phys. B236 (1984) 125.
L. Brink, O. Lindgren and B. Nilsson, Nucl. Phys. B212 (1983) 401.
P. Howe, K. Stelle and P. West, Phys. Lett. 124B (1983) 55.
A. Parkes and P. West, Phys. Lett. 138B (1984) 99; D.I. Kazakov, Phys. Lett. B179 (1986) 352; Mod. Phys. Lett. A2 (1987) 663; O. Piguet and K. Sibold, Helv. Phys. Acta 63 (1990) 71.
P. Argyres, M. Plesser, N. Seiberg and E. Witten, Nucl. Phys. B461 (1996) 71.
B. Conlong and P. West, J. Phys. A26 (1993) 3325.
P. Howe and P. West, to be published in Int. J. Mod. Phys.
P. Howe and G. Hartwell, Class. Quant. Grav. 12 (1995) 1823.
A. Galperin, E. Ivanov, S. Kalitzin, V. Ogievetsky and E. Sokatchev, Class. Quant. Grav. 1 (1984) 469; Class. Quant. Grav. 2 (1985) 155.
D. Anselmi, M. Grisaru and A. Johansen, hep-th/9601023.
P. Howe and P. West, in preparation.
---
abstract: |
Let $G=(V,E)$ be a graph and $p$ be a positive integer. A subset $S\subseteq V$ is called a $p$-dominating set if each vertex not in $S$ has at least $p$ neighbors in $S$. The $p$-domination number $\g_p(G)$ is the size of a smallest $p$-dominating set of $G$. The $p$-reinforcement number $r_p(G)$ is the smallest number of edges whose addition to $G$ results in a graph $G'$ with $\g_p(G')<
  \g_p(G)$. In this paper, we give an original study of the $p$-reinforcement number, determine $r_p(G)$ for some classes of graphs such as paths, cycles and complete $t$-partite graphs, and establish some upper bounds on $r_p(G)$. In particular, we show that the decision problem for $r_p(G)$ is NP-hard for a general graph $G$ and any fixed integer $p\geq 2$.
author:
- |
[ You Lu$^a$ Fu-Tao Hu$^b$]{} Jun-Ming Xu$^b$[^1]\
[$^a$Department of Applied Mathematics,]{}\
[Northwestern Polytechnical University,]{}\
[Xi’an Shanxi 710072, P. R. China]{}\
[Email: luyou@nwpu.edu.cn]{}\
\
[$^b$Department of Mathematics,]{}\
[University of Science and Technology of China,]{}\
[Wentsun Wu Key Laboratory of CAS,]{}\
[Hefei, Anhui, 230026, P. R. China]{}\
[Email: hufu@mail.ustc.edu.cn; xujm@ustc.edu.cn]{}\
title: 'On the $p$-reinforcement and the complexity[^2]'
---
[**Keywords:**]{} domination, $p$-domination, $p$-reinforcement, NP-hard\
\
[**AMS Subject Classification (2000):**]{} 05C69
Introduction
============
For notation and graph-theoretical terminology not defined here we follow [@x03]. Specifically, let $G=(V,E)$ be an undirected graph without loops or multiple edges, where $V=V(G)$ is the vertex-set and $E=E(G)\ne\emptyset$ is the edge-set.
For $x\in V$, the [*open neighborhood*]{}, the [*closed neighborhood*]{} and the [*degree*]{} of $x$ are denoted by $N_{G}(x)=\{y\in V : xy\in E\}$, $N_{G}[x]=N_{G}(x)\cup \{x\}$ and $deg_G(x)=|N_G(x)|$, respectively. $\delta(G)=\min\{deg_G(x): x\in V\}$ and $\Delta(G)=\max\{deg_G(x): x\in V\}$ are the minimum degree and the maximum degree of $G$, respectively. For any $X\subseteq V$, let $N_G[X]=\cup_{x\in X}N_G[x]$.
For a subset $D\subseteq V$, let $\overline D=V\setminus D$. The notation $G^c$ denotes the complement of $G$, that is, $G^c$ is the graph with vertex-set $V(G)$ and edge-set $\{xy:\ xy\notin E(G)\ {\rm for}\ x,y\in V(G)\}$. For $B\subseteq E(G^c)$, we use $G+B$ to denote the graph with vertex-set $V$ and edge-set $E\cup B$. For convenience, we write $G+xy$ for $G+\{xy\}$ when $xy\in E(G^c)$.
A nonempty subset $D\subseteq V$ is called a [*dominating set*]{} of $G$ if $|N_G(x)\cap D|\geq 1$ for each $x\in \overline D$. The [*domination number*]{} $\gamma(G)$ of $G$ is the minimum cardinality of a dominating set of $G$. Domination is a classical concept in graph theory; the early literature on domination and related topics is surveyed in detail in the two books by Haynes, Hedetniemi and Slater [@hs981; @hs982].
In 1985, Fink and Jacobson [@fj85] introduced a generalization of domination in graphs. Let $p$ be a positive integer. A subset $D\subseteq V$ is a [*$p$-dominating set*]{} of $G$ if $|N_G(x)\cap D|\geq p$ for each $x\in \overline D$. The [*$p$-domination number*]{} $\gamma_p(G)$ is the minimum cardinality of a $p$-dominating set of $G$. A $p$-dominating set of cardinality $\gamma_p(G)$ is called a $\gamma_p$-set of $G$. For $S,T\subseteq V$, we say that $S$ can [*$p$-dominate*]{} $T$ in $G$ if $|N_G(x)\cap S|\geq p$ for every $x\in T\setminus S$. Clearly, a $1$-dominating set is a classical dominating set, and so $\gamma_1(G)=\gamma(G)$. The $p$-domination has been investigated by many authors (see, for example, [@bcf05; @bcv06; @cfhv11; @cr90; @f85]); very recently, Chellali [*et al.*]{} [@cfhv11] gave an excellent survey of this topic. The following are two simple observations.
\[obs1.1\] If $G$ is a graph with $|V(G)|\geq p$, then $\g_p(G)\geq p$.
\[obs1.2\] Every $p$-dominating set of a graph contains all vertices of degree at most $p-1$.
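These definitions translate directly into a brute-force computation on small graphs. The following sketch is our own illustration (the helper names `is_p_dominating` and `gamma_p` are ours, not the paper's):

```python
from itertools import combinations

def is_p_dominating(adj, S, p):
    """S is p-dominating: every vertex outside S has >= p neighbours in S."""
    return all(len(adj[v] & S) >= p for v in adj if v not in S)

def gamma_p(adj, p):
    """p-domination number by exhaustive search (small graphs only)."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            if is_p_dominating(adj, set(S), p):
                return k
    return len(vertices)

# Example: the cycle C5 as an adjacency-set dictionary.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(gamma_p(c5, 2))  # prints 3
```

With $p=3$ this also illustrates Observation \[obs1.2\]: every vertex of $C_5$ has degree $2<p$, so the only $3$-dominating set is $V$ itself.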
Clearly, the addition of extra edges to a graph can decrease its domination number. In 1990, Kok and Mynhardt [@km90] first investigated this phenomenon and proposed the concept of the reinforcement number. The [*reinforcement number*]{} $r(G)$ of a graph $G$ is defined as the smallest number of edges whose addition to $G$ results in a graph $G'$ with $\g(G')<\g(G)$; by convention, $r(G)=0$ if $\g(G)=1$.
The reinforcement number has received much research attention (see, for example, [@bgh08; @dhtv98; @hwx09]), and its many variations have also been well described and studied in graph theory, including total reinforcement [@hrr11; @ses07], independence reinforcement [@zls03], fractional reinforcement [@csm03; @dl97] and so on. In particular, Blair [*et al.*]{} [@bgh08], Hu and Xu [@hx10], independently, showed that the problem determining $r(G)$ for a general graph $G$ is NP-hard.
Motivated by the work of Kok and Mynhardt [@km90], in this paper, we introduce the $p$-reinforcement number, which is a natural extension of the reinforcement number. The [*$p$-reinforcement number*]{} $r_p(G)$ of a graph $G$ is the smallest number of edges of $G^c$ that have to be added to $G$ in order to reduce $\gamma_p(G)$, that is $$r_p(G)=\min\{|B|: B\subseteq E(G^c)\ {\rm with}\ \g_p(G+B)< \g_p(G)\}.$$
It is clear that $r_1(G)=r(G)$. In view of Observation \[obs1.1\], we adopt the convention that $r_p(G)=0$ if $\g_p(G)\leq p$. Thus $r_p(G)$ is well-defined for any graph $G$ and integer $p\geq 1$. In this paper, we always assume $\g_p(G)> p$ when we consider the $p$-reinforcement number of a graph $G$.
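Read computationally, the definition of $r_p(G)$ is a search over subsets of $E(G^c)$. A minimal (exponential-time) sketch of this search, our own code with hypothetical helper names:

```python
from itertools import combinations

def is_p_dominating(adj, S, p):
    return all(len(adj[v] & S) >= p for v in adj if v not in S)

def gamma_p(adj, p):
    vs = list(adj)
    for k in range(1, len(vs) + 1):
        for S in combinations(vs, k):
            if is_p_dominating(adj, set(S), p):
                return k
    return len(vs)

def r_p_bruteforce(adj, p):
    """Smallest |B|, B a set of complement edges, with gamma_p(G+B) < gamma_p(G)."""
    base = gamma_p(adj, p)
    if base <= p:                      # the paper's convention
        return 0
    non_edges = [(u, v) for u, v in combinations(adj, 2) if v not in adj[u]]
    for k in range(1, len(non_edges) + 1):
        for B in combinations(non_edges, k):
            adj2 = {v: set(nbrs) for v, nbrs in adj.items()}
            for u, v in B:
                adj2[u].add(v)
                adj2[v].add(u)
            if gamma_p(adj2, p) < base:
                return k
    return 0

# Example: the path P6 has gamma_2 = 4; joining its two ends yields C6 with gamma_2 = 3.
p6 = {i: {j for j in (i - 1, i + 1) if 0 <= j < 6} for i in range(6)}
print(r_p_bruteforce(p6, 2))  # prints 1
```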
The rest of this paper is organized as follows. In Section 2 we present an equivalent parameter for calculating the $p$-reinforcement number of a graph. As applications, we determine the values of the $p$-reinforcement numbers for special classes of graphs such as paths, cycles and complete $t$-partite graphs in Section 3, and show that the decision problem on $p$-reinforcement is NP-hard for a general graph and a fixed integer $p\geq 2$ in Section 4. Finally, we establish some upper bounds on the $p$-reinforcement number of a graph $G$ in terms of other parameters of $G$ in Section 5.
Preliminary
===========
Let $G$ be a graph with $\g(G)> 1$ and let $B\subseteq E(G^c)$ with $|B|=r(G)$ be such that $\g(G+B)<\g(G)$. Let $X$ be a $\g$-set of $G+B$. Then $|B|\geq |V(G)\setminus N_G[X]|$. On the other hand, given any set $X\subseteq V(G)$, we can always choose a subset $B\subseteq E(G^c)$ with $|B|=|V(G)\setminus N_G[X]|$ such that $X$ dominates $G+B$. Based on this simple observation, in order to calculate $r(G)$, Kok and Mynhardt [@km90] proposed the parameter $$\label{e2.1}
\eta(G)=\min\{|V(G)\setminus N_G[X]|:\ X\subseteq V(G), |X|<\g(G)\},$$ and showed that $r(G)=\eta(G)$. We can refine this technique to deal with the $p$-reinforcement number $r_p(G)$.
Let $G$ be a graph with $\g_p(G)> p$. For any $X\subseteq
V(G)$, let $$\label{e2.2}
X^*=\{x\in \overline X: |N_G(x)\cap X|< p\}.$$ Let $B\subseteq E(G^c)$ with $|B|=r_p(G)$ such that $\g_p(G+B)<\g_p(G)$, and let $X$ be a $\g_p$-set of $G+B$. Then $$|B|\geq \sum_{x\in X^*}(p-|N_G(x)\cap X|).$$ On the other hand, given any set $X\subseteq V(G)$ with $|X|\geq p$, we can always choose a subset $B\subseteq E(G^c)$ with $$|B|=\sum_{x\in X^*}(p-|N_G(x)\cap X|)$$ such that $X$ can $p$-dominate $G+B$. Motivated by this observation, we introduce the following notations. For a subset $X\subseteq V(G)$, $$\begin{aligned}
\eta_p(x,X,G)&=& \left\{ \begin{array}{ll}
p-|N_G(x)\cap X| & \mbox{if }x\in X^*\\
0 & \mbox{otherwise}
\end{array}
\right. \mbox{ for $x\in V(G)$,} \label{e2.3}\\
\eta_p(S, X, G)&=&\sum_{x\in S}\eta_p(x,X,G) \mbox{\ \ for $S\subseteq V(G)$, and } \label{e2.4} \\
\eta_p(G)&=&\min\{\eta_p(V(G),X,G) : |X|<\g_p(G)\} \label{e2.5}.
\end{aligned}$$
A subset $X\subseteq V(G)$ is called an *$\eta_p$-set* of $G$ if $\eta_p(G)=\eta_p(V(G),X,G)$. Clearly, for any two subsets $S', S\subseteq V(G)$ and two subsets $X', X\subseteq V(G)$, $$\begin{array}{ll}
\eta_p(S', X, G)\leq \eta_p(S, X, G) & {\rm if}\ S'\subseteq S,\\
\eta_p(S, X, G)\leq \eta_p(S, X', G) & {\rm if}\ |X'|\leq |X|.
\end{array}$$ Thus, we have the following simple observation.
\[obs2.1\] If $X$ is an $\eta_p$-set of a graph $G$, then $|X|=\g_p(G)-1$.
The following result shows that computing $r_p(G)$ can be reduced to computing $\eta_p(G)$ for a graph $G$ with $\g_p(G)\geq p+1$.
\[thm2.2\] For any graph $G$ and positive integer $p$, $r_p(G)=\eta_p(G)$ if $\g_p(G)> p$.
Let $X$ be an $\eta_p$-set of $G$. Then $|X|=\g_p(G)-1$ by Observation \[obs2.1\]. Let $Y=\{y\in V(G): \eta_p(y,X,G)>0\}$. Then $Y=X^*$ is contained in $\overline X$, where $X^*$ is defined in (\[e2.2\]). Thus, $\eta_p(G)=\eta_p(X^*,X,G)$. We construct a new graph $G'$ from $G$ by adding, for each $y\in X^*$, $\eta_p(y,X,G)$ edges of $G^c$ joining $y$ to $\eta_p(y,X,G)$ vertices of $X$ not already adjacent to $y$ (this is possible since $|X|=\g_p(G)-1\geq p$). Clearly, $X$ is a $p$-dominating set of $G'$, that is, $\g_p(G')\leq |X|$. Let $B=E(G')-E(G)$. Then $$\g_p(G)=|X|+1>|X|\geq \g_p(G')=\g_p(G+B),$$ which implies $r_p(G)\leq |B|$. It follows that $$\label{e2.6}
r_p(G)\leq |B|=\sum_{y\in X^*}\eta_p(y,X,G)=\eta_p(X^*,X,G)=\eta_p(G).$$
On the other hand, let $B$ be a subset of $E(G^c)$ such that $|B|=r_p(G)$ and $\g_p(G+B)=\g_p(G)-1$. Let $G'=G+B$ and let $X'$ be a $\g_p$-set of $G'$. For every $xy\in B$, $X'$ cannot $p$-dominate the graph $G'-xy$, by the minimality of $B$. This means that exactly one of $x$ and $y$ is in $X'$. Without loss of generality, assume $y\in \overline {X'}$. Since $X'$ cannot $p$-dominate $y$ in $G'-xy$, and hence not in $G$, we have $|N_G(y)\cap X'|<p$. Let $Z$ be the set of all end-vertices of edges in $B$ and let $Y=\overline{X'}\cap Z$. Since $X'$ is a $\g_p$-set of $G'$, $|N_{G'}(u)\cap X'|\geq p$ for any $u\in \overline{X'}$; in other words, any $u\in \overline{X'}$ with $|N_{G}(u)\cap X'|<p$ must be in $Y$. It follows that $$\label{e2.7}
\sum_{u\in \overline{X'}}\eta_p(u,X',G)=\sum_{y\in Y}(p-|N_G(y)\cap X'|)=|B|.$$ By (\[e2.7\]), we immediately have that $$\eta_p(G)\leq \eta_p(V(G),X',G)=\sum_{u\in \overline{X'}}\eta_p(u,X',G)=|B|=r_p(G).$$Combining this with (\[e2.6\]), we obtain $r_p(G)=\eta_p(G)$, and so the theorem follows.
Note that when $p=1$, $X^*$ defined in (\[e2.2\]) is $V(G)\setminus N_G[X]$. This fact means that $\eta(G)$ defined in (\[e2.1\]) is a special case of $p=1$ in (\[e2.5\]), that is, $\eta_1(G)=\eta(G)$. Thus, the following corollary holds immediately.
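Theorem \[thm2.2\] turns the edge-subset search into a vertex-subset search: by Observation \[obs2.1\] it suffices to scan the sets $X$ with $|X|=\g_p(G)-1$ and minimise the total deficiency $\eta_p(V(G),X,G)$. A sketch of this computation (our own code, hypothetical names; `gamma_p` is the brute-force helper from the earlier illustration, repeated here so the block is self-contained):

```python
from itertools import combinations

def gamma_p(adj, p):
    vs = list(adj)
    for k in range(1, len(vs) + 1):
        for S in combinations(vs, k):
            if all(len(adj[v] & set(S)) >= p for v in adj if v not in S):
                return k
    return len(vs)

def eta_p(adj, p):
    """eta_p(G): by Observation 2.1 the minimum is attained at |X| = gamma_p(G) - 1."""
    g = gamma_p(adj, p)
    vs = list(adj)
    return min(
        sum(max(0, p - len(adj[v] & set(X))) for v in vs if v not in X)
        for X in combinations(vs, g - 1)
    )

# Example: the cycle C6 with p = 2; Theorem 2.2 gives r_2(C6) = eta_2(C6).
c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(eta_p(c6, 2))  # prints 4
```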
\[cor2.1\][(Kok and Mynhardt [@km90])]{.nodecor} $r(G)=\eta(G)$ if $\g(G)>1$.
Using Observation \[obs1.2\] and Theorem \[thm2.2\], the following corollary is obvious.
\[cor2.2\] Let $p\geq 1$ be an integer and $G$ be a graph with $\g_p(G)> p$. If $\Delta(G)< p$, then $$r_p(G)=p-\Delta(G).$$
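For instance (our own check, with the hypothetical brute-force helpers used earlier): $C_5$ has $\Delta=2$ and $\g_3(C_5)=5>3$, so the corollary predicts $r_3(C_5)=3-2=1$:

```python
from itertools import combinations

def gamma_p(adj, p):
    vs = list(adj)
    for k in range(1, len(vs) + 1):
        for S in combinations(vs, k):
            if all(len(adj[v] & set(S)) >= p for v in adj if v not in S):
                return k
    return len(vs)

def r_p_bruteforce(adj, p):
    base = gamma_p(adj, p)
    if base <= p:
        return 0
    non_edges = [(u, v) for u, v in combinations(adj, 2) if v not in adj[u]]
    for k in range(1, len(non_edges) + 1):
        for B in combinations(non_edges, k):
            adj2 = {v: set(nb) for v, nb in adj.items()}
            for u, v in B:
                adj2[u].add(v)
                adj2[v].add(u)
            if gamma_p(adj2, p) < base:
                return k
    return 0

c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert gamma_p(c5, 3) == 5
assert r_p_bruteforce(c5, 3) == 3 - 2  # p - Delta(G)
```

One added edge raises some vertex to degree $3$, after which the remaining four vertices form a $3$-dominating set.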
Some Exact Values
=================
In this section we will use Theorem \[thm2.2\] to calculate the $p$-reinforcement numbers for some classes of graphs.
We first determine the $p$-reinforcement numbers for paths and cycles. Let $P_n$ and $C_n$ denote, respectively, a path and a cycle with $n$ vertices. When $p=1$, Kok and Mynhardt [@km90] proved that $r(P_n)=r(C_n)=i$ if $n=3k+i \geq 4$, where $i\in \{1,2,3\}$. We will give the exact values of $r_p(P_n)$ and $r_p(C_n)$ for $p\geq 2$. The following observation is simple but useful.
\[obs3.1\] For integer $p\geq 2$, $$\g_p(P_n)=\left\{
\begin{array}{rl}
\lfloor\frac{n}{2}\rfloor+1 & \mbox{\ \ if\ \ $p=2$}\\
n & \mbox{\ \ if\ \ $p\geq 3$}
\end{array}
\right.
\mbox{ and\ \ }
\g_p(C_n)=\left\{
\begin{array}{rl}
\lceil\frac{n}{2}\rceil & \mbox{\ \ if\ \ $p=2$}\\
n & \mbox{\ \ if\ \ $p\geq 3$.}
\end{array}
\right.$$
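The closed forms in Observation \[obs3.1\] can be spot-checked by exhaustive search (our own verification; `gamma_p` as in the earlier sketches):

```python
from itertools import combinations

def gamma_p(adj, p):
    vs = list(adj)
    for k in range(1, len(vs) + 1):
        for S in combinations(vs, k):
            if all(len(adj[v] & set(S)) >= p for v in adj if v not in S):
                return k
    return len(vs)

for n in range(5, 10):
    path = {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}
    cycle = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    assert gamma_p(path, 2) == n // 2 + 1       # floor(n/2) + 1
    assert gamma_p(cycle, 2) == (n + 1) // 2    # ceil(n/2)
    assert gamma_p(path, 3) == n and gamma_p(cycle, 3) == n
print("Observation 3.1 verified for n = 5..9")
```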
\[thm3.2\] Let $p\geq 2$ be an integer. If $\g_p(P_n)>p$ then $$r_p(P_n)=\left\{
\begin{array}{ll}
2 & \mbox{\ \ if\ \ $p=2$ and $n$ is odd}\\
1 & \mbox{\ \ if\ \ $p=2$ and $n$ is even}\\
p-2 & \mbox{\ \ if\ \ $p\geq 3$}.
\end{array}
\right.$$
Let $P_n=x_1x_2\cdots x_n$ and $X$ be an $\eta_p$-set of $P_n$. By Theorem \[thm2.2\] and $\g_p(P_n)> p$, $r_p(P_n)=\eta_p(P_n)=\eta_p(V(P_n),X,P_n)\geq 1$. For $p\geq 3$, it is easy to see that $r_p(P_n)=p-2$ by Corollary \[cor2.2\]. Assume that $p=2$ below.
If $n$ is even, then by Observation \[obs3.1\], $\g_2(P_n)-\g_2(C_n)=1$; hence adding the single edge $x_1x_n$ (which produces $C_n$) reduces $\g_2$, so $r_2(P_n)\leq 1$. Since $r_2(P_n)\geq 1$, we conclude that $r_2(P_n)=1$.
If $n$ is odd, then $\g_2(P_n)=\frac{n+1}{2}$ by Observation \[obs3.1\], and so $n\geq 5$ since $\g_2(P_n)>2$. Let $$X'=\bigcup_{i=1}^{\frac{n-1}{2}}\{x_{2i}\}.$$ Clearly, $|X'|=\frac{n-1}{2}=\g_2(P_n)-1$. So $$\begin{aligned}
\eta_2(V(P_n),X,P_n)\leq \eta_2(V(P_n),X',P_n)
= \eta_2(x_1, X',P_n)+\eta_2(x_n,X',P_n)
= 2.\end{aligned}$$
Suppose that $\eta_2(V(P_n),X,P_n)=1$. Then $X$ can $2$-dominate either $V(P_n)\setminus \{x_1\}$ or $V(P_n)\setminus \{x_{n}\}$. In both cases, we have $$|X|\geq \g_2(P_{n-1})=\left\lfloor\frac{n-1}{2}\right\rfloor+1=\frac{n-1}{2}+1,$$ which contradicts $|X|=\frac{n-1}{2}$. Hence $r_2(P_n)=\eta_2(V(P_n),X,P_n)=2$.
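Theorem \[thm3.2\] can be confirmed against the brute-force definition for small $n$ (our own verification, hypothetical helper names):

```python
from itertools import combinations

def gamma_p(adj, p):
    vs = list(adj)
    for k in range(1, len(vs) + 1):
        for S in combinations(vs, k):
            if all(len(adj[v] & set(S)) >= p for v in adj if v not in S):
                return k
    return len(vs)

def r_p_bruteforce(adj, p):
    base = gamma_p(adj, p)
    if base <= p:
        return 0
    non_edges = [(u, v) for u, v in combinations(adj, 2) if v not in adj[u]]
    for k in range(1, len(non_edges) + 1):
        for B in combinations(non_edges, k):
            adj2 = {v: set(nb) for v, nb in adj.items()}
            for u, v in B:
                adj2[u].add(v)
                adj2[v].add(u)
            if gamma_p(adj2, p) < base:
                return k
    return 0

for n in (5, 6, 7, 8):   # gamma_2(P_n) > 2 holds for all of these
    path = {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}
    assert r_p_bruteforce(path, 2) == (2 if n % 2 else 1)
```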
\[3.3\] Let $p\geq 2$ be an integer. If $\g_p(C_n)>p$ then $$r_p(C_n)=\left\{
\begin{array}{ll}
2 & \mbox{\ \ if\ \ $p=2$ and $n$ is odd}\\
4 & \mbox{\ \ if\ \ $p=2$ and $n$ is even}\\
p-2 & \mbox{\ \ if\ \ $p\geq 3$}.
\end{array}
\right.$$
Let $C_n=x_1x_2\cdots x_nx_1$. If $p\geq 3$ then the result holds obviously by Corollary \[cor2.2\]. In the following, we only need to calculate the values of $r_p(C_n)$ for $p=2$. Let $X$ be an $\eta_2$-set of $C_n$. Then $r_2(C_n)=\eta_2(C_n)=\eta_2(V(C_n),X,C_n)$ by Theorem \[thm2.2\]. Note that $n\geq 5$ since $\g_2(C_n)=\lceil\frac{n}{2}\rceil>2$.
If $n$ is odd, then let $$X'=\bigcup_{i=1}^{\frac{n-1}{2}}\{x_{2i-1}\}.$$ Clearly, $|X'|=\frac{n-1}{2}=\g_2(C_n)-1$ by Observation \[obs3.1\], and $\eta_2(V(C_n),X',C_n)=\eta_2(x_{n-1},X',C_n)+\eta_2(x_n,X',C_n)=2$. So $$r_2(C_n)=\eta_2(V(C_n),X,C_n)\leq \eta_2(V(C_n),X',C_n)=2.$$ Since $X$ is not a $2$-dominating set of $C_n$, there must be two adjacent vertices, denoted by $x_i$ and $x_{i+1}$, of $C_n$ not in $X$. This fact means that $\eta_2(x_i,X,C_n)\geq 1$ and $\eta_2(x_{i+1},X,C_n)\geq 1$. So $$r_2(C_n)=\eta_2(V(C_n),X,C_n)\geq \eta_2(x_i,X,C_n)+\eta_2(x_{i+1},X,C_n)\geq 2.$$ Hence $r_2(C_n)=2$.
If $n$ is even, then $n\geq 6$. Deleting $X$ and all vertices $2$-dominated by $X$ from $C_n$, we obtain a graph, denoted by $H$, whose components are paths. Denote the components of $H$ by $H_1,\cdots, H_h$, where $h\geq 1$. In the case that $h=1$ and the length of $H_1$ is equal to one, $X$ $2$-dominates a subgraph of $C_n$ that is isomorphic to $P_{n-2}$. By Observation \[obs3.1\], $$|X|\geq \g_2(P_{n-2})=\lfloor\frac{n-2}{2}\rfloor+1=\frac{n}{2},$$ which contradicts $|X|=\g_2(C_n)-1=\lceil\frac{n}{2}\rceil-1=\frac{n}{2}-1$. In all other cases, we find that $$r_2(C_n)=\eta_2(V(C_n),X,C_n)\geq 4.$$ Let $$X''=\bigcup_{i=1}^{\frac{n}{2}-1}\{x_{2i-1}\}.$$ It is easy to check that $|X''|=\frac{n}{2}-1=\g_2(C_n)-1$ and $\eta_2(V(C_n),X'',C_n)=4$. So $$r_2(C_n)=\eta_2(V(C_n),X,C_n)\leq\eta_2(V(C_n),X'',C_n)=4.$$ Hence $r_2(C_n)=4$, and the theorem holds.
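For small orders, both of the above results can be checked by brute force. The sketch below is our own naive verification, not part of the proofs: it computes $\g_p$ by enumerating vertex subsets and $r_p$ by trying all sets of added edges, which is feasible only for very small $n$.

```python
from itertools import combinations

def gamma_p(n, edges, p):
    """gamma_p(G): smallest |X| such that every vertex outside X has >= p neighbours in X."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    for k in range(n + 1):
        for X in map(set, combinations(range(n), k)):
            if all(len(adj[v] & X) >= p for v in range(n) if v not in X):
                return k

def r_p(n, edges, p):
    """r_p(G): fewest added edges that decrease gamma_p (0 if gamma_p <= p)."""
    g = gamma_p(n, edges, p)
    if g <= p:
        return 0
    present = {frozenset(e) for e in edges}
    non_edges = [e for e in map(frozenset, combinations(range(n), 2))
                 if e not in present]
    for k in range(1, len(non_edges) + 1):
        for F in combinations(non_edges, k):
            if gamma_p(n, edges + [tuple(e) for e in F], p) < g:
                return k

path = lambda n: [(i, i + 1) for i in range(n - 1)]
cycle = lambda n: path(n) + [(n - 1, 0)]

print(r_p(6, path(6), 2), r_p(5, path(5), 2))    # 1 (n even), 2 (n odd)
print(r_p(5, cycle(5), 2), r_p(6, cycle(6), 2))  # 2 (n odd), 4 (n even)
print(r_p(6, cycle(6), 3))                       # p - 2 = 1 for p >= 3
```

The outputs match the values stated in the path and cycle theorems above.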
Next we consider the $p$-reinforcement number of a complete $t$-partite graph $K_{n_1,\cdots,n_t}$. To state our results, we need some notation. For any subset $X=\{n_{i_1},\cdots,n_{i_r}\}$ of $\{n_1, \cdots, n_t\}$, define $$|X|=r \mbox{\ \ and\ \ } f(X)=\sum_{j=1}^rn_{i_j}.$$ For convenience, let $|X|=0$ and $f(X)=0$ if $X=\emptyset$. Let $$\mathscr{X}=\{X : X \mbox{ is a subset of } \{n_1, \cdots, n_t\}
\mbox{ with $f(X)\geq \g_p(G)$}\}$$ and, for every $X\in \mathscr{X}$, define $$f^*(X)=\max\{f(Y) : Y \mbox{ is a subset of $X$ with $|Y|=|X|-1$ and
$f(Y)< p$}\}.$$
\[thm3.4\] For any integer $p\geq 1$ and a complete $t$-partite graph $G=K_{n_1, \cdots, n_t}$ with $t\geq 2$ and $\g_p(G)> p$, $$r_p(G)=\min\{(p-f^*(X))(f(X)-\g_p(G)+1) : X\in \mathscr{X}\}.$$
Let $N=\{n_1,\cdots,n_t\}$ and $V(G)=V_1\cup \cdots \cup V_t$ be the vertex-set of $G$ such that $|V_i|=n_i$ for each $i=1,\cdots,t$. Let $$m=\min\{(p-f^*(X))(f(X)-\g_p(G)+1) : X\in \mathscr{X}\}.$$
We first prove that $r_p(G)\leq m$. Let $X\in \mathscr{X}$ (without loss of generality, assume $X=\{n_1,\cdots,n_k,n_{k+1}\}$ for some $0\leq k\leq t-1$) be such that $$\begin{aligned}
f^*(X)=n_1+\cdots+n_k \mbox{ and } (p-f^*(X))(f(X)-\g_p(G)+1)=m.\end{aligned}$$ Since $X\in \mathscr{X}$, we know that $n_{k+1}=f(X)-f^*(X)\geq\g_p(G)-f^*(X)$. So we can pick a vertex-subset $V_{k+1}'$ from $V_{k+1}$ such that $|V_{k+1}'|=\g_p(G)-f^*(X)-1$. Let $$D=V_1\cup \cdots \cup V_k\cup V_{k+1}'.$$ Clearly, $|D|=\g_p(G)-1.$ Since $\g_p(G)> p$, $|D|\geq p$ and so $D$ can $p$-dominate $\cup_{i=k+2}^tV_i$. Hence by the definition of $\eta_p(V(G),D,G)$, $$\begin{aligned}
\eta_p(V(G),D,G)&=&\eta_p(V(G)\setminus D, D,G)\\
&=&\sum_{v\in V_{k+1}\setminus V_{k+1}'}\eta_p(v,D,G)+\sum_{i=k+2}^t\eta_p(V_i,D,G)\\
&=& |V_{k+1}\setminus V_{k+1}'|(p-f^*(X))+0\\
&=&(p-f^*(X))[n_{k+1}-(\g_p(G)-f^*(X)-1)]\\
&=&(p-f^*(X))(f(X)-\g_p(G)+1)\\
&=& m.\end{aligned}$$ By Theorem \[thm2.2\], we have $r_p(G)=\eta_p(G)\leq \eta_p(V(G),D,G)=m.$
On the other hand, we will show that $r_p(G)\geq m$. For any subset $M$ of $N$, we use $I(M)$ to denote the subindex-sets of all elements in $M$, that is, $$I(M)=\{i : n_i\in M\}.$$
Let $S$ be an $\eta_p$-set of $G$ and let $$\begin{aligned}
&& Y=\{n_i : |V_i\cap S|=|V_i| \mbox{ for $1 \leq i\leq t$}\}, \mbox{ and}\\
&& A=\{n_i : 0< |V_i\cap S|<|V_i| \mbox{ for $1 \leq i\leq t$}\}.\end{aligned}$$ Thus $$\label{e5}
f(Y\cup A)=f(Y)+f(A)=\sum_{i\in I(Y)}|V_i|+\sum_{i\in I(A)}|V_i|\\
\geq |S|=\g_p(G)-1$$ by Observation \[obs2.1\]. Since $\cup_{i\in I(Y)}V_i\ (\subseteq
S)$ cannot $p$-dominate $G$, $$\label{e6}
f(Y)=\sum_{i\in I(Y)}n_i=|\cup_{i\in I(Y)}V_i|< p.$$ Hence, by (\[e5\]) and $\g_p(G)> p$, $$f(A)\geq \g_p(G)-1-f(Y)> \g_p(G)-p-1\geq 0,$$ which implies that $|A|\geq 1$.
**Claim.** $|A|=1$.
**Proof of Claim.** Suppose that $|A|\geq 2$. Then we can choose $i$ and $j$ from $I(A)$ such that $i\neq j$. By the definition of $A$, we have $0< |V_i\cap S|< |V_i|$ and $0< |V_j\cap
S|< |V_j|$. Therefore, we can pick two vertices $x$ and $y$ from $V_i\cap S$ and $V_j\setminus S$, respectively. Let $$S'=(S\setminus \{x\})\cup \{y\}.$$ Obviously, $|S'|=|S|=\g_p(G)-1$, $|V_i \cap S'|=|V_i\cap S|-1$ and $|V_j \cap S'|=|V_j\cap S|+1$.
Note that $G$ is a complete $t$-partite graph. For any $v\in V(G)$, we can easily find the value of $\eta_p(v,S',G)-\eta_p(v,S,G)$ by the definitions of $\eta_p(v,S',G)$ and $\eta_p(v,S,G)$ as follows: $$\eta_p(v,S',G)-\eta_p(v,S,G)=
\left\{
\begin{array}{rl}
(p-|S|+|V_i\cap S|-1)-0 & \ \ \ \mbox{if }v=x\\
-1 & \ \ \ \mbox{if }v\in V_i\setminus S\\
0-(p-|S|+|V_j\cap S|) & \ \ \ \mbox{if }v=y\\
1 & \ \ \ \mbox{if }v\in (V_j\setminus S)\setminus \{y\}\\
0 & \ \ \ \mbox{otherwise.}
\end{array}
\right.$$ Since $S$ is an $\eta_p$-set of $G$ and $|S'|=|S|$, we have $$\begin{aligned}
0&\leq& \eta_p(V(G),S',G)-\eta_p(V(G),S,G)\\
&=& \sum_{v\in V(G)}(\eta_p(v,S',G)-\eta_p(v,S,G))\\
&=& (p-|S|+|V_i\cap S|-1)-|V_i\setminus S|-(p-|S|+|V_j\cap S|)+|(V_j\setminus S)\setminus \{y\}|\\
&=& (|V_i\cap S|-|V_i\setminus S|)-(|V_j\cap S|-|V_j\setminus S|)-2.\end{aligned}$$ This means that $$(|V_i\cap S|-|V_i\setminus S|)\geq(|V_j\cap S|-|V_j\setminus S|)+2.$$ However, by the symmetry of $V_i$ and $V_j$, a similar argument also yields $$(|V_j\cap S|-|V_j\setminus S|)\geq(|V_i\cap S|-|V_i\setminus S|)+2.$$ This is a contradiction, and so the claim holds. $\Box$
By **Claim**, we can assume that $I(A)=\{h\}$. From the definitions of $Y$ and $A$, we have $|Y\cup A|=|Y|+1$ and $$f(Y\cup A)=\sum_{i\in I(Y)}|V_i|+|V_h|\geq \sum_{i\in I(Y)}|V_i|+(|V_h\cap S|+1)= |S|+1=\g_p(G).$$ It follows that $Y \cup A \in \mathscr{X}$. Thus, by (\[e6\]) and the definition of $f^*(Y\cup A)$, we have $f(Y)\leq f^*(Y\cup A)$. Since $\g_p(G)>p$, $|S|=\g_p(G)-1\geq p$, and so $S$ $p$-dominates $V(G)\setminus (\cup_{i\in I(Y\cup A)}V_i)$. Therefore, by Theorem \[thm2.2\], $$\begin{aligned}
r_p(G)=\eta_p(G)=\eta_p(V(G),S,G)&=&\eta_p(V(G)\setminus S,S,G)\\
&=&\sum_{v\in V_h\setminus S}\eta_p(v,S,G)\\
&=& (p-f(Y))|V_h\setminus S|\\
&=& (p-f(Y))[|V_h|-(|S|-f(Y))]\\
&=& (p-f(Y))(f(Y\cup A)-\g_p(G)+1)\\
&\geq&(p-f^*(Y\cup A))(f(Y\cup A)-\g_p(G)+1)\\
&\geq& m.\end{aligned}$$ This completes the proof of the theorem.
For example, let $G=K_{2,2,10,17}$ and $p=11$. Then $\g_{11}(G)=12$, and so $$\mathscr{X}=\{\{17\},\{2,10\},\{2,17\},\{10,17\},\{2,2,10\},\{2,2,17\},\{2,10,17\},
\{2,2,10,17\}\}.$$ By the definition of $f^*$, for any $X\in \mathscr{X}$, we have $$f^*(X)=\left\{
\begin{array}{rl}
0 & \mbox{ if $X=\{17\},\{2,10,17\}$ or $\{2,2,10,17\}$};\\
2 & \mbox{ if $X=\{2,17\}$};\\
4 & \mbox{ if $X=\{2,2,10\}$ or $\{2,2,17\}$};\\
10 & \mbox{ if $X=\{2,10\}$ or $\{10,17\}$}.
\end{array}
\right.$$ Hence $$\begin{aligned}
r_{11}(G)&=&\min\{(11-f^*(X))(f(X)-\g_{11}(G)+1): X\in \mathscr{X}\}\\
&=& \min\{(11-f^*(X))(f(X)-11): X\in \mathscr{X}\}\\
&=& (11-f^*(\{2,10\}))(f(\{2,10\})-11)\\
&=& 1.\end{aligned}$$
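The minimization in Theorem \[thm3.4\] is easy to automate. The following sketch (helper names are our own) reproduces the worked example for $G=K_{2,2,10,17}$ and $p=11$, taking $\g_{11}(G)=12$ as given in the text:

```python
from itertools import combinations

def r_p_multipartite(parts, p, gamma):
    """Evaluate the formula of Theorem [thm3.4] for K_{n_1,...,n_t}.

    parts: the part sizes n_1,...,n_t; gamma: gamma_p(G), supplied externally.
    """
    best = None
    for r in range(1, len(parts) + 1):
        for idx in combinations(range(len(parts)), r):
            X = [parts[i] for i in idx]
            if sum(X) < gamma:           # X must lie in script-X: f(X) >= gamma_p(G)
                continue
            # f*(X): largest f(Y) over (|X|-1)-subsets Y of X with f(Y) < p
            fstar = max((sum(Y) for Y in combinations(X, r - 1) if sum(Y) < p),
                        default=0)
            val = (p - fstar) * (sum(X) - gamma + 1)
            best = val if best is None else min(best, val)
    return best

print(r_p_multipartite([2, 2, 10, 17], 11, 12))  # 1, attained at X = {2, 10}
```

Running over all eight members of $\mathscr{X}$ reproduces the table of $f^*$ values above and the minimum $r_{11}(G)=1$.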
Complexity
==========
Blair et al. [@bgh08] and Hu and Xu [@hx10] independently showed that the $1$-reinforcement problem is NP-hard. Thus the general $p$-reinforcement problem, with $p$ part of the input, is also NP-hard, since the $1$-reinforcement problem is a special case of it.
For each fixed $p$, $p$-dominating set is polynomial-time computable (see Downey and Fellows [@df95; @df97] for definitions and discussion). However, the $p$-reinforcement problem is hard even for fixed values of the parameter $p$. In this section, we consider the following decision problem.
**$p$-Reinforcement**
*Instance*: A graph $G$ and a fixed integer $p\geq 2$.
*Question*: Is $r_p(G)\leq 1$?
We will prove that **$p$-Reinforcement** ($p\geq 2$) is also NP-hard by describing a polynomial transformation from the following NP-hard problem (see [@gj79]).
$$\begin{aligned}
&&\mbox{\textbf{3-Satisfiability (3SAT)}}\\
&&\mbox{\emph{Instance}: A set $U =\{u_1,\ldots, u_n\}$ of variables and a collection $\mathscr{C} = \{C_1,\ldots,C_m\}$}\\
&&\mbox{\hspace{1.75cm} of clauses over $U$ such that $|C_i | = 3$ for $i = 1, 2,\ldots,m$. } \\
&&\mbox{\hspace{1.75cm} Furthermore, every literal is used in at least one clause.}\\
&&\mbox{\emph{Question}: Is there a satisfying truth assignment for $\mathscr{C}$?}\end{aligned}$$
\[t2\] For a fixed integer $p\geq 2$, **$p$-Reinforcement** is NP-hard.
Let $U =\{u_1,\ldots, u_n\}$ and $\mathscr{C} =\{C_1,\ldots,C_m\}$ be an arbitrary instance of **3SAT**. We will show the NP-hardness of **$p$-Reinforcement** by reducing **3SAT** to it in polynomial time. To this end, we construct a graph $G$ as follows:
a.
: For each variable $u_i\in U$, associate a graph $H_i$, where $H_i$ can be obtained from a complete graph $K_{2p+2}$ with vertex-set $\{u_i,\overline u_i\}\cup(\cup_{j=1}^p\{v_{i_j},\overline v_{i_j}\})$ by deleting the edge-subset $\cup_{j=1}^{p-1}\{u_i \overline v_{i_j},\overline u_i v_{i_j}\}$;
b.
: For each clause $C_j\in \mathscr{C}$, create a single vertex $c_j$ and join $c_j$ to the vertex $u_i$ (resp. $\overline{u}_i$) in $H_i$ if and only if the literal $u_i$ (resp. $\overline{u}_i$) appears in clause $C_j$ for any $i\in \{1,\ldots,n\}$;
c.
: Add a complete graph $T\ (\cong K_p)$ and join all of its vertices to each $c_j$.
For convenience, let $X_i=\cup_{j=1}^p\{v_{i_j}\}$ and $\overline
X_i=\cup_{j=1}^p\{\overline v_{i_j}\}$. Then $V(H_i)=\{u_i,\overline
u_i\}\cup X_i\cup \overline X_i$. Use $H_0$ to denote the subgraph induced by $\{c_1,\cdots,c_m\}\cup V(T)$.
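To make the construction concrete, here is a sketch that assembles $G$ from a 3SAT instance. The vertex labels and the clause encoding (positive/negative integers for the literals $u_i$/$\overline u_i$) are our own conventions, not from the text.

```python
from itertools import combinations

def build_reduction(clauses, n_vars, p):
    """Build the graph G of the reduction: one gadget H_i per variable,
    one vertex c_j per clause, and a complete graph T on p vertices."""
    edges = set()
    link = lambda a, b: edges.add(frozenset((a, b)))
    for i in range(1, n_vars + 1):
        Hi = ([f"u{i}", f"~u{i}"]
              + [f"v{i}_{j}" for j in range(1, p + 1)]
              + [f"~v{i}_{j}" for j in range(1, p + 1)])
        for a, b in combinations(Hi, 2):      # H_i starts as K_{2p+2} ...
            link(a, b)
        for j in range(1, p):                 # ... minus u_i ~v_ij and ~u_i v_ij, j < p
            edges.discard(frozenset((f"u{i}", f"~v{i}_{j}")))
            edges.discard(frozenset((f"~u{i}", f"v{i}_{j}")))
    T = [f"t{k}" for k in range(1, p + 1)]
    for a, b in combinations(T, 2):           # T is a complete graph K_p
        link(a, b)
    for j, C in enumerate(clauses, 1):
        for lit in C:                         # c_j -- literal vertices of C_j
            link(f"c{j}", f"u{lit}" if lit > 0 else f"~u{-lit}")
        for t in T:                           # c_j -- every vertex of T
            link(f"c{j}", t)
    return {v for e in edges for v in e}, edges

# Instance with every literal used at least once, as 3SAT requires
V, E = build_reduction([[1, -2, 3], [-1, 2, 3], [-3, 1, 2]], 3, 2)
print(len(V), len(E))   # 23 55: (2p+2)n + m + p vertices
```

The vertex count $(2p+2)n+m+p$ is linear in the instance size, illustrating that the reduction is polynomial.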
It is clear that the construction of $G$ can be accomplished in polynomial time. To complete the proof of the theorem, we only need to prove that $\mathscr{C}$ is satisfiable if and only if $r_p(G)=1$. We first prove the following two claims.
**Claim 1.** [*Let $D$ be a $\g_p$-set of $G$. Then $|D|=p(n+1)$, moreover, $|V(H_i)\cap D|=p$ and $|\{u_i,\overline
u_i\}\cap D|\leq 1$ for each $i\in \{1,2,\ldots,n\}$.*]{}
[**Proof of Claim 1.**]{} Suppose there is some $i\in
\{1,2,\cdots,n\}$ such that $|V(H_i)\cap D|<p$. Then there must be a vertex, say $x$, of $V(H_i)\setminus D$ such that $N_G(x)\subseteq
V(H_i)$. Hence $|N_G(x)\cap D|\leq |V(H_i)\cap D|< p$, which contradicts that $D$ is a $\g_p$-set of $G$. Thus $|V(H_i)\cap D|\geq p$ for each $i\in \{1,\cdots,n\}$; the same bound holds for $i=0$, since all neighbors of each vertex of $T$ lie in $V(H_0)$. Hence $$\label{eq4.1}
\g_p(G)=|D|=\sum_{i=0}^n|V(H_i)\cap D|\geq p(n+1).$$ On the other hand, let $$D'=\bigcup_{i=1}^n[(X_i- \{v_{i_p}\})\cup \{\overline u_i\}]\cup V(T).$$ Clearly, $|D'|=p(n+1)$ and $D'$ is a $p$-dominating set of $G$. Hence by (\[eq4.1\]), $$p(n+1)\leq \sum_{i=0}^n|V(H_i)\cap D|= \g_p(G)\leq |D'|= p(n+1),$$ which implies that $\g_p(G)=p(n+1)$ and $|V(H_i)\cap D|=p$ for each $0\leq i\leq n$. Furthermore, if $|\{u_i,\overline u_i\}\cap D|=2$ then $|(X_i\cup \overline X_i)\cap D|=p-2$. So we can choose a vertex from $X_i\cup \overline X_i$ that is not $p$-dominated by $D$. This is impossible since $D$ is a $\g_p$-set of $G$, and so $|\{u_i,\overline u_i\}\cap D|\leq 1$. The claim holds. $\square$
**Claim 2.**
*If there is an edge $e=xy\in G^c$ such that $\g_p(G+e)<\g_p(G)$, then any $\g_p$-set $D_e$ of $G+e$ satisfies the following properties.*
$(i)$
: $|V(H_i) \cap D_e|=p$ and $|\{u_i,\overline u_i\}\cap D_e|\leq 1$ for each $i\in \{1,\cdots,n\}$;
$(ii)$
: $\{c_1,\cdots,c_m\}\cap D_e=\emptyset$, and so $|V(T)\cap D_e|=p-1$;
$(iii)$
: One of $x$ and $y$ belongs to $V(T)\setminus D_e$ and the other belongs to $H\cap D_e$, where $H=\cup_{i=1}^nV(H_i)$.
**Proof of Claim 2.** Because $D_e$ is a $\g_p$-set of $G+e$ and $\g_p(G+e)<\g_p(G)$, one of $x$ and $y$ is not in $D_e$ and the other is in $D_e$. Without loss of generality, say $x\notin D_e$ and $y\in D_e$. It is clear that $|N_G(x)\cap D_e|=p-1$. Since $x$ is the unique vertex not $p$-dominated by $D_e$, we have $$\label{eq4.2}
\eta_p(V(G),D_e,G)=\eta_p(x,D_e,G)=p-(p-1)=1.$$ Let $$D=D_e\cup \{x\}.$$ Then $D$ is a $p$-dominating set of $G$ and $|D|=|D_e|+1=\g_p(G+e)+1\leq\g_p(G)$. That is, $D$ is a $\g_p$-set of $G$. By Claim 1, $$\label{eq4.3}
|V(H_i)\cap D|=p \mbox{ for each $i=0,1,\cdots,n$},$$ and $|\{u_i,\overline u_i\}\cap D_e|\leq |\{u_i,\overline u_i\}\cap D|\leq 1$ for $1\leq i\leq n$.
Suppose that there exists some $i\in \{1,\cdots,n\}$ such that $|V(H_i)\cap D_e|\neq p$. Then by (\[eq4.3\]), $x\in V(H_i)$ and $|V(H_i)\cap D_e|=p-1$. Thus every vertex in $(X_i\cup \overline
X_i)\setminus (D_e\cup\{x\})$ is dominated by at most $p-1$ vertices of $D_e$. Hence by $|X_i\cup \overline X_i|=2p$, $$\eta_p(V(G),D_e,G)\geq\eta_p(X_i\cup \overline X_i,D_e,G)
\geq |(X_i\cup \overline X_i)\setminus D_e|-1\geq 2p-(p-1)-1>1,$$ which contradicts (\[eq4.2\]). Hence $(i)$ holds.
Suppose that there is some $j\in \{1,\cdots,m\}$ such that $c_j\in
D_e$. By $(i)$ and (\[eq4.3\]), $x\in V(H_0)$ and so $|V(H_0)\cap
D_e|=|V(H_0)\cap D|-1=p-1$. Hence $|V(T)\cap D_e|\leq p-2$ by $V(H_0)=\{c_1,\cdots,c_m\}\cup V(T)$. Since each vertex of $T\
(\cong K_p)$ has exactly $p-1$ neighbors in $D_e$, $$\eta_p(V(G),D_e,G)\geq \eta_p(V(T),D_e,G)=|V(T)\setminus D_e|=p-|V(T)\cap D_e|\geq 2.$$ This contradicts (\[eq4.2\]). Thus $\{c_1,\cdots,c_m\}\cap
D_e=\emptyset$, and so $|V(T)\cap D_e|=|V(H_0)\cap D_e|=p-1$. Hence $(ii)$ holds.
By $(ii)$, $T$ has a unique vertex, say $z$, not in $D_e$. From $|N_G(z) \cap D_e|=|V(H_0)\cap D_e|=p-1$, the vertex $z$ is not $p$-dominated by $D_e$. However, $x$ is the unique vertex of $G$ not $p$-dominated by $D_e$ by (\[eq4.2\]). Thus $z=x$, and so $x=z\in V(T)\setminus D_e$. By the construction of $G$ and $xy\in
G^c$, it is clear that $y\in (\cup_{i=1}^nV(H_i))\cap D_e$. Hence $(iii)$ holds. $\Box$
We now show that $\mathscr{C}$ is satisfiable if and only if $r_p(G)=1$.
If $\mathscr{C}$ is satisfiable, then $\mathscr{C}$ has a satisfying truth assignment $t: U\rightarrow \{T,F\}$. According to this satisfying assignment, we can choose a subset $S$ from $V(G)$ as follows: $$S=S_0\cup S_1\cup\cdots\cup S_n,$$ where $S_0$ consists of $p-1$ vertices of $T$ and $$S_i=\left\{
\begin{array}{ll}
\{u_i\}\cup (\overline X_i- \{\overline v_{i_p}\}) & \mbox{ if $t(u_i)=T$}\\
\{\overline u_i\}\cup (X_i- \{v_{i_p}\}) & \mbox{ if $t(u_i)=F$}
\end{array}
\mbox{ for each $i\in \{1,\cdots,n\}$.}
\right.$$ It can be easily verified that $|S|=p(n+1)-1=\g_p(G)-1$ and that $\cup_{i=1}^nV(H_i)$ is $p$-dominated by $S$. Since $t$ is a satisfying truth assignment for $\mathscr{C}$, each clause $C_j\in \mathscr{C}$ contains at least one true literal. That is, the corresponding vertex $c_j$ has at least one neighbor in $\{u_1,\bar u_1,\cdots,u_n,\bar u_n\}\cap S$ by the definitions of $G$ and $S$, and so every $c_j\in \{c_1,\cdots,c_m\}$ has at least $p$ neighbors in $S$ since $S_0\subseteq N_G(c_j)$. Note that the unique vertex in $V(T)\setminus S_0$ has exactly $p-1$ neighbors in $S$. By Theorem \[thm2.2\] and $|S|=\g_p(G)-1$, $$r_p(G)=\eta_p(G)\leq \eta_p(V(G),S,G)=\eta_p(V(T)\setminus S_0,S,G)=p-(p-1)=1.$$ Furthermore, we have $r_p(G)=1$ since $\g_p(G)>p$ by Claim 1.
Conversely, assume $r_p(G)= 1$. That is, there exists an edge $e=xy$ in $G^c$ such that $\g_p(G+e)<\g_p(G)$. Let $D_e$ be a $\g_p$-set of $G+e$. Define $t: U\to \{T,F\}$ by $$\label{eq4.4}
t(u_i)=\left\{
\begin{array}{ll}
T \ & \mbox{ if vertex}\ u_i\in D_e \\
F \ & \mbox{ if vertex}\ u_i\notin D_e
\end{array}
\right.
\ \mbox{for }i=1,\cdots,n.$$
We will show that $t$ is a satisfying truth assignment for $\mathscr{C}$. Let $C_j$ be an arbitrary clause in $\mathscr{C}$. By $(ii)$ and $(iii)$ of Claim 2, the corresponding vertex $c_j$ is not in $D_e$ and $|N_G(c_j)\cap D_e|\geq p$ since $c_j\notin \{x,y\}$. Then there must be some $i\in \{1,\cdots,n\}$ such that $$\label{eq4.5}
|\{u_i,\overline u_i\}\cap N_G(c_j)\cap D_e|=1,$$ since $T$ contains exactly $p-1$ vertices of $D_e$ by $(i)$ and $(ii)$ of Claim 2. If $u_i\in N_G(c_j)\cap D_e$, then $u_i\in C_j$ and $t(u_i)=T$ by the construction of $G$ and (\[eq4.4\]). If $\overline u_i\in N_G(c_j)\cap D_e$, then the literal $\overline u_i$ belongs to $C_j$ by the construction of $G$. Note that $u_i\notin D_e$, since $\overline u_i\in D_e$ and by $(i)$ of Claim 2. This means that $t(u_i)=F$ by (\[eq4.4\]), and hence the literal $\overline u_i$ is true under $t$. The arbitrariness of $C_j$ with $1\le j\le m$ shows that all the clauses in $\mathscr{C}$ are satisfied by $t$. That is, $\mathscr{C}$ is satisfiable.
The theorem follows.
Upper Bounds
============
For a graph $G$ and $p=1$, Kok and Mynhardt [@km90] provided an upper bound for $r(G)$ in terms of the smallest private neighborhood of a vertex in some $\g$-set of $G$. Let $X\subseteq V(G)$ and $x\in X$. The [*private neighborhood*]{} of $x$ with respect to $X$ is defined as the set $$\label{e4.1}
PN(x,X,G)=N_G[x]\setminus N_G[X\setminus\{x\}].$$ Set $$\mu(X,G)=\min\{|PN(x,X,G)|:\ x\in X\}$$ and $$\label{e4.2}
\mbox{$\mu(G)=\min\{\mu(X,G):\ X$ is a $\g$-set of $G\}$.}$$ Using this parameter, Kok and Mynhardt [@km90] showed that $r(G)\leq \mu(G)$ if $\g(G)\geq 2$, with equality if $r(G)=1$. We generalize this result to arbitrary positive integers $p$.
In order to state our results, we need some notation. Let $X\subseteq V(G)$ and $x\in
X$. A vertex $y\in \overline X$ is called a [*$p$-private neighbor*]{} of $x$ with respect to $X$ if $xy\in E(G)$ and $|N_G(y)\cap X|=p$. The [*$p$-private neighborhood*]{} of $x$ with respect to $X$ is defined as $$\label{e4.3}
PN_p(x,X,G)=\{y:\ y \mbox{ is a $p$-private neighbor of $x$ with respect to $X$}\}.$$ Let $$\begin{aligned}
\mu_p(x,X,G)&=&|PN_p(x,X,G)|+\max\{0,p-|N_G(x)\cap X|\},\label{e4.4}\\
\mu_p(X,G)&=&\min\{\mu_p(x,X,G) : x\in X\}, \mbox{\ \ and}\label{e4.5}\\
\mu_p(G)&=&\min\{\mu_p(X,G):\ X \mbox{ is a $\g_p$-set of $G$}\}\label{e4.6}.\end{aligned}$$
\[thm4.1\] For any graph $G$ and positive integer $p$, $$r_p(G)\leq \mu_p(G)$$ with equality if $r_p(G)=1$.
If $\g_p(G)\leq p$, then $r_p(G)=0\leq \mu_p(G)$ by our convention. Assume that $\g_p(G)\geq p+1$ below. Let $X$ be a $\g_p$-set of $G$ and $x\in X$ such that $$\mu_p(G)=\mu_p(X,G)=\mu_p(x,X,G).$$ Since $|X|=\g_p(G)\geq p+1$, we can choose a vertex, say $u_y$, from $X\setminus N_G(y)$ for each $y\in PN_p(x,X,G)$, and a subset $X'$ with $|X'|=\max\{0,p-|N_G(x)\cap
X|\}$ from $X\setminus N_G[x]$. Let $$G'=G+\{yu_y :\ y\in PN_p(x,X,G)\}+\{xv :\ v\in X'\}.$$ Obviously, $X\setminus\{x\}$ is a $p$-dominating set of $G'$, which implies that $$r_p(G)\leq |PN_p(x,X,G)|+|X'|=\mu_p(x,X,G)=\mu_p(G).$$
Assume $r_p(G)=1$. Then $\g_p(G)\geq p+1$ and there exists an edge $xy\in E(G^c)$ such that $\g_p(G+xy)=\g_p(G)-1$. Let $G'=G+xy$ and $X$ be a $\g_p$-set of $G'$. Without loss of generality, assume that $x\in X$ and $y\in \overline X$. Clearly, $y$ is a $p$-private neighbor of $x$ with respect to $X$ in $G$ and $X\cup \{y\}$ is a $\g_p$-set of $G$, which implies $$\mbox{$PN_p(y,X\cup \{y\},G)=\emptyset$ and $p-|N_G(y)\cap (X\cup \{y\})|=1$,}$$ that is, $\mu_p(y,X\cup
\{y\},G)=1$. It follows that $$r_p(G)\leq \mu_p(G)\leq \mu_p(X\cup \{y\},G)\leq \mu_p(y,X\cup
\{y\},G)=1.$$ Thus, $r_p(G)=\mu_p(G)=1$. The theorem follows.
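The quantities in (\[e4.3\])–(\[e4.6\]) are directly computable on small graphs. The sketch below is our own naive enumeration, evaluating $\mu_p(G)$ over all $\g_p$-sets; for $P_5$ with $p=2$ it returns $3$, consistent with Theorem \[thm4.1\] since $r_2(P_5)=2$ by the result on paths above.

```python
from itertools import combinations

def mu_p(n, edges, p):
    """mu_p(G) of (e4.6), minimised over all gamma_p-sets X and all x in X."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    # collect all minimum p-dominating sets
    for k in range(n + 1):
        gsets = [set(X) for X in combinations(range(n), k)
                 if all(len(adj[v] & set(X)) >= p for v in range(n) if v not in X)]
        if gsets:
            break
    best = None
    for X in gsets:
        for x in X:
            # p-private neighbours of x: y outside X, adjacent to x, |N(y) & X| = p
            pn = sum(1 for y in adj[x] - X if len(adj[y] & X) == p)
            val = pn + max(0, p - len(adj[x] & X))
            best = val if best is None else min(best, val)
    return best

print(mu_p(5, [(i, i + 1) for i in range(4)], 2))  # 3 >= r_2(P_5) = 2
```

Here $P_5$ has the unique $\g_2$-set $\{x_1,x_3,x_5\}$, and the minimum of $\mu_2(x,X,G)$ is attained at an endpoint.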
Note that $|PN_p(x,X,G)|\leq deg_G(x)$ for any $X\subseteq V(G)$ and $x\in X$. By Theorem \[thm4.1\], we obtain the following corollary immediately.
\[cor4.1\] For any graph $G$ with maximum degree $\Delta(G)$ and positive integer $p$, $r_p(G)\leq \Delta(G)+p$.
\[cor4.2\] Let $p$ be a positive integer and $G$ be a graph with minimum degree $\delta(G)$. If $\delta(G)< p$, then $r_p(G)\leq \delta(G)+p$.
Let $X$ be a $\g_p$-set of $G$ and $x\in V(G)$ with degree $\delta(G)$. Since $deg_G(x)=\delta(G)<p$, $x\in X$ by Observation \[obs1.2\]. Note that $|PN_p(x,X,G)|\leq
deg_G(x)=\delta(G)$ and $p-|N_G(x)\cap X|\leq p$. By Theorem \[thm4.1\], $$\begin{aligned}
r_p(G)&\leq& \mu_p(G)\\
&\leq& \mu_p(x,X,G)\\
&=&|PN_p(x,X,G)|+\max\{0,p-|N_G(x)\cap X|\}\\
&\leq& \delta(G)+p.
\end{aligned}$$ The corollary follows.
Consider $p=1$. Let $X\subseteq V(G)$ and $x\in X$. If $x$ is not an isolated vertex of the induced subgraph $G[X]$, then $PN(x,X,G)$ defined in (\[e4.1\]) does not contain $x$ and $\max\{0,1-|N_G(x)\cap X|\}=0$ in (\[e4.4\]). Otherwise, $PN(x,X,G)$ contains $x$ and $\max\{0,1-|N_G(x)\cap X|\}=1$. Notice that $PN_1(x,X,G)$ defined in (\[e4.3\]) does not contain $x$. Hence, by (\[e4.4\]), $$\mu_1(x,X,G)=|PN_1(x,X,G)|+\max\{0, 1-|N_G(x)\cap X|\}=|PN(x,X,G)|.$$ This fact means that $\mu(G)$ defined in (\[e4.2\]) coincides with the case $p=1$ of (\[e4.6\]), that is, $\mu_1(G)=\mu(G)$. Thus, by Theorem \[thm4.1\], the following corollary holds immediately.
\[cor4.3\][(Kok and Mynhardt [@km90])]{.nodecor} For any graph $G$ with $\g(G)\geq 2$, $r(G)\leq\mu(G)$, with equality if $r(G)=1$.
[99]{}
M. Blidia, M. Chellali and O. Favaron, Independence and 2-domination in trees. *Austral. J. Combin.* 33 (2005) 317-327.
M. Blidia, M. Chellali and L. Volkmann, Some bounds on the $p$-domination number in trees. *Discrete Math.* 306 (2006) 2031-2037.
J.R.S. Blair, W. Goddard, S.T. Hedetniemi, S. Horton, P. Jones and G. Kubicki, On domination and reinforcement numbers in trees. *Discrete Math.* 308 (2008) 1165-1175.
M. Chellali, O. Favaron, A. Hansberg and L. Volkmann, $k$-domination and $k$-independence in graphs: A survey. *Graphs $\&$ Combin.* doi 10.1007/s00373-011-1040-3.
Y. Caro and Y. Roditty, A note on the $k$-domination number of a graph. *Internat. J. Math. Math. Sci.* 13 (1990) 205-206.
X. Chen, L. Sun and D. Ma, Bondage and reinforcement number of $\g_f$ for complete multipartite graph. *J. Beijing Inst. Technol.* 12 (2003) 89-91.
J. E. Dunbar, T. W. Haynes, U. Teschner and L. Volkmann, Bondage, insensitivity, and reinforcement. Domination in Graphs: Advanced Topics (T. W. Haynes, S. T. Hedetniemi, P. J. Slater eds.), 471-489, Monogr. Textbooks Pure Appl. Math., 209, Marcel Dekker, New York, (1998).
G.S. Domke and R.C. Laskar, The bondage and reinforcement numbers of $\g_f$ for some graphs. *Discrete Math.* 167/168 (1997) 249-259.
R.G. Downey and M.R. Fellows, Fixed-parameter tractability and completeness I: Basic results. *SIAM J. Comput.* 24 (1995) 873-921.
R.G. Downey and M.R. Fellows, Fixed-parameter tractability and completeness II: On completeness for $W[1]$. *Theoret. Comput. Sci.* 54 (3) (1997) 465-474.
O. Favaron, On a conjecture of Fink and Jacobson concerning $k$-domination and $k$-dependence. *J. Combin. Theory Ser. B* 39 (1985) 101-102.
J. F. Fink and M. S. Jacobson, $n$-domination in graphs. Graph Theory with Applications to Algorithms and Computer Science (Y. Alavi, A. J. Schwenk eds), 283-300, Wiley, New York, (1985).
M.R. Garey and D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco, (1979).
T. W. Haynes, S. T. Hedetniemi and P. J. Slater, Fundamentals of Domination in Graphs, Marcel Dekker, New York, (1998).
T. W. Haynes, S. T. Hedetniemi and P. J. Slater, Domination in Graphs: Advanced Topics, Marcel Dekker, New York, (1998).
M.A. Henning, N.J. Rad and J. Raczek, A note on total reinforcement in graphs. *Discrete Appl. Math.* 159 (2011) 1443-1446.
F.-T. Hu and J.-M. Xu, On the complexity of the bondage and reinforcement problems. *J. Complexity* (2011), doi:10.1016/j.jco.2011.11.001.
J. Huang, J.W. Wang and J.-M. Xu, Reinforcement number of digraphs. *Discrete Appl. Math.* 157 (2009) 1938-1946.
J. Kok and C.M. Mynhardt, Reinforcement in graphs. *Congr. Numer.* 79 (1990) 225-231.
N. Sridharan, M.D. Elias and V.S.A. Subramanian, Total reinforcement number of a graph. *AKCE Int. J. Graph Comb.* 4 (2) (2007) 192-202.
J.-M. Xu, Theory and Application of Graphs. Kluwer Academic Publishers, Dordrecht/Boston/London, 2003.
J.H. Zhang, H.L. Liu and L. Sun, Independence bondage and reinforcement number of some graphs. *Trans. Beijing Inst. Technol.* 23 (2003) 140-142.
[^1]: Corresponding author: xujm@ustc.edu.cn (J.-M. Xu)
[^2]: The work was supported by NNSF of China (No.10711233) and the Fundamental Research Fund of NPU (No. JC201150)
---
abstract: 'We review our new method, which might be the most direct and efficient way of approaching the continuum physics from Hamiltonian lattice gauge theory. It consists of solving the eigenvalue equation with a truncation scheme preserving the continuum limit. The efficiency has been confirmed by the observations of the scaling behaviors for the long wavelength vacuum wave functions and mass gaps in (2+1)-dimensional models and the (1+1)-dimensional $\sigma$ model even at very low truncation orders. Most of these results show rapid convergence to the available Monte Carlo data, ensuring the reliability of our method.'
address:
- |
CCAST (World Laboratory), P.O. Box 8730, Beijing 100080, China,\
and Department of Physics, Zhongshan University, Guangzhou 510275, China
- |
HLRZ, Forschungszentrum, D-52425 J[ü]{}lich, Germany,\
and Deutsches Elektronen-Synchrotron DESY, D-22603 Hamburg, Germany
- 'School of Physics, University of New South Wales, Kensington New South Wales 2033, Australia '
author:
- 'Shuo-Hong Guo, Qi-Zhou Chen, Xiyang Fang, Jinming Liu, Xiang-Qian Luo, and Weihong Zheng'
title: Truncated eigenvalue equation and long wavelength behavior of lattice gauge theory
---
INTRODUCTION
============
The main purpose of our work is to approach the scaling region and extract physical results by analytic calculations. For a lattice calculation, a basic requirement is that for weak enough coupling, the dimensionless quantities should satisfy the scaling law predicted by the renormalization group equation. For $\rm{SU(N_c)}$ gauge theories in 3 dimensions, superrenormalizability and dimensional analysis tell us that the dimensionless masses $aM$ should scale as $$\begin{aligned}
{aM \over g^2} \to {M \over e^2}.
\label{s2p1}\end{aligned}$$ For (2+1)-dimensional compact [U(1)]{} and (3+1)-dimensional non-abelian gauge theories, $aM$ should scale exponentially as $$\begin{aligned}
{aM} \to exp(-b/g^2).
\label{s3p1}\end{aligned}$$ If the calculated $M$ data converge to a stable value, we can get an estimate for the mass.
There have been various analytic methods available in the literature (for a review see [@Guo]). The main difficulty of the conventional methods (e.g. strong coupling expansion) is that they converge very slowly, and very high order $1/g^2$ calculations are required to extend the results to the intermediate coupling region. Unfortunately, high order calculations are difficult in practice.
Recently, we proposed a new method [@GCL; @CGZF; @QCD3; @MASS] for Hamiltonian lattice gauge theory. This method consists of solving the eigenvalue equation with a suitable truncation scheme preserving the continuum limit. Even at low order truncation, clear scaling windows for the physical quantities in most cases have been established, and the results are in perfect agreement with the available Monte Carlo data. Here we review only the work on $\rm{U(1)}_3$, $\rm{SU(2)}_3$ and 2 dimensional $\sigma$ model, while that for [SU(3)]{} has been summarized in [@Luo].
THE METHOD
==========
The Schr[ö]{}dinger equation $H \vert \Omega \rangle = \epsilon_{\Omega} \vert \Omega \rangle$ on the Hamiltonian lattice for the ground state $$\begin{aligned}
\vert \Omega \rangle = exp \lbrack R(U) \rbrack \vert 0 \rangle
\label{b1}\end{aligned}$$ and vacuum energy $\epsilon_{\Omega}$ can be reformulated as $$\begin{aligned}
\sum_{l} \lbrace [E_l,[E_l,R(U)]]+[E_l,R(U)][E_l,R(U)] \rbrace\end{aligned}$$ $$\begin{aligned}
- {2 \over g^4} \sum_{p} tr(U_p+U_{p}^{\dagger})
={2a \over g^2} \epsilon_{\Omega}.
\label{schr}\end{aligned}$$ To solve this equation, let us write $R(U)$ in order of graphs $G_{n,i}$, i.e., $ R(U)=\sum_{n} R_{n}(U)=\sum_{n,i} C_{n,i} G_{n,i}(U)$. Substituting it to (\[schr\]), we have the $N$th order truncated eigenvalue equation $$\begin{aligned}
\sum_{l} \lbrace [E_l,[E_l,\sum_{n=1}^{N} R_{n}(U)]]\end{aligned}$$ $$\begin{aligned}
+\sum_{n_1+n_2 \le N}[E_l,R_{n_1}(U)][E_l,R_{n_2}(U)] \rbrace\end{aligned}$$ $$\begin{aligned}
- {2 \over g^4} \sum_{p} tr(U_p+U_{p}^{\dagger})
={2a \over g^2} \epsilon_{\Omega}.
\label{b2}\end{aligned}$$ By setting the coefficients of the graphs $G_{n,i}$ in this equation to zero, we obtain a set of non-linear algebraic equations, from which the $C_{n,i}$ are determined. A similar method applies to the eigenvalue equation for the mass and its wave function [@MASS]. Therefore, solving lattice field theory is reduced to solving algebraic equations.
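At each truncation order, this final step amounts to solving a small non-linear system for the coefficients. Purely as an illustration of that step (the quadratic system below is an artificial stand-in, not the actual $C_{n,i}$ equations), such systems can be solved by Newton iteration:

```python
def newton2(F, J, x0, tol=1e-12, itmax=50):
    """Newton's method for a 2-variable system F(x, y) = 0 with Jacobian J."""
    x, y = x0
    for _ in range(itmax):
        f, g = F(x, y)
        if abs(f) < tol and abs(g) < tol:
            break
        a, b, c, d = J(x, y)        # Jacobian entries (df/dx, df/dy, dg/dx, dg/dy)
        det = a * d - b * c
        x, y = x - (d * f - b * g) / det, y - (a * g - c * f) / det
    return x, y

# Artificial quadratic system with root (1, 1), standing in for coefficient equations
F = lambda x, y: (x * x + y - 2.0, x + y * y - 2.0)
J = lambda x, y: (2 * x, 1.0, 1.0, 2 * y)
print(newton2(F, J, (0.5, 0.7)))    # approaches the root (1, 1)
```

The genuine equations couple many $C_{n,i}$ at each order $N$ and must be re-solved for each value of the coupling $g$.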
The lowest order graph is quite simple: $R_1(U)=C_{1,1}(U_p +h.c.)$. The first term in (\[b2\]) doesn’t generate new graphs, but the second term does, i.e. $$\begin{aligned}
[E_l,G_{n_1}(U)] \in R_{n_1}(U) + lower ~ orders,\end{aligned}$$ $$\begin{aligned}
[E_l,G_{n_1}(U)][E_l,G_{n_2}(U)] \in R_{n_1+n_2}(U)\end{aligned}$$ $$\begin{aligned}
+lower ~ orders.\end{aligned}$$
Two questions arise:
1\) Should all the new graphs generated by the second term in (\[b2\]) be taken as independent graphs of order $n_1+n_2$? For abelian gauge theories, the answer is yes. For non-abelian gauge theories, because of the uni-modular conditions [@QCD3; @MASS; @Luo; @GCFC], there is a mixing problem not only for graphs of the same order, but also for graphs of different orders. The classification of independent graphs is particularly complicated for [SU(3)]{}.
2\) For $n_1+n_2 > N$, should we keep the lower order graphs in $[E_l,G_{n_1}(U)][E_l,G_{n_2}(U)]$? To preserve the correct continuum limit, at $N$th order truncation one should drop all these graphs. This is the essential feature of our truncation scheme, which distinguishes it from the scheme in [@Green].
There have also been some other truncation schemes proposed in [@SW; @Bishop]. One of their major problems is the violation of the long wavelength structure or continuum limit of the equation, and consequently the violation of the scaling law (\[s2p1\]) or (\[s3p1\]) for the physical quantities.
Let us see further why equation (\[b2\]) should be truncated in the way suggested in point 2). The continuum limit of a graph $G_{n,i}(U)$ is $$\begin{aligned}
G_{n,i}(U)=e^2 a^4[A_{n,i} ~ tr ({\cal F}^2)
+a^2 B_{n,i} ~ tr ({\cal D} {\cal F})^2+...]
\label{small_a}\end{aligned}$$ with ${\cal F}$ the field strength tensor and ${\cal D}$ the covariant derivative. It has been proven in general [@GCL] that in the continuum limit the second term of (\[b2\]) should behave as $$\begin{aligned}
[E_l,G_{n_1}(U)][E_l,G_{n_2}(U)]
\propto e^2 a^6 ~Tr({\cal D} {\cal F}_{\mu,\nu})^2.\end{aligned}$$ To preserve this correct limit, when the equation (\[b2\]) is truncated to the $Nth$ order, all the graphs created by $[E_l,R_{n_1}(U)][E_l,R_{n_2}(U)]$ for $n_1+n_2 \le N$ must be considered. On the other hand, all the graphs created by this term for $n_1+n_2 > N$ should be dropped, even there are lower order graphs. Otherwise the partial sum of the lower order graphs would make this term behave in a considerably different (wrong) way.
RESULTS
=======
Once the coefficients $C_{n,i}$ are obtained by solving (\[b2\]), we can use (\[b1\]) and (\[small\_a\]) to compute the parameters $\mu_0$ and $\mu_2$ in the vacuum wave function for the long wavelength configurations $U$ [@Arisue] $$\begin{aligned}
\vert \Omega \rangle=exp \lbrack - {\mu_0} \int d^{D-1}x ~ tr {\cal F}^2
- {\mu_2} \int d^{D-1}x ~tr ({\cal D} {\cal F})^2 \rbrack.
\label{a1}\end{aligned}$$
The results for $\mu_0$ and $\mu_2$ in $3$-dimensional [SU(2)]{} gauge theory are shown in Fig. 1. Impressively, nice scaling behavior is obtained even at $N=3$, and the data for $4/g^2 > 4$ are in good agreement with the Monte Carlo measurements [@Arisue]. The order $N=4$ data are also included in this figure. Although there are no big differences between the results at $N=3$ and $N=4$, higher order calculations seem necessary to ensure that the results converge to the correct values.
Figure 2 shows our results for the mass gap in $\rm{SU(2)}_3$ at different truncation orders. For comparison, the results from the truncation method of Llewellyn Smith and Watson (LS-W) [@SW] and from the $14$th order series expansion [@Zheng] are also included. Again, even at $N=3$, the scaling law is satisfied and our results are in excellent agreement with the continuum limit of the Monte Carlo data [@Teper].
As mentioned above, for non-abelian gauge theories the uni-modular conditions lead to different possible choices of independent graphs. In [SU(2)]{}, because $trU_p^{\dagger}=trU_p$, all the disconnected graphs can be transformed into connected ones, which are then used as an independent set of graphs. Our results in Figs. 1 and 2 are from such a choice, while a comparison between different choices (connected, disconnected [@GCL] and inverse) has been made in [@GCFC]. Of course, one is free to choose an arbitrary set of independent graphs. The criterion for a good choice is that it converges to the continuum limit more rapidly than the others at low truncation order. This has been clearly demonstrated for $\rm{SU(3)}$ [@Luo; @ASSYM].
The most intriguing scaling law is the exponential scaling (\[s3p1\]). Before investigating [QCD]{} in 3+1 dimensions, we would like to test our method in the (2+1)-dimensional compact [U(1)]{} model, which shares many properties with the realistic theory. Here there is no ambiguity induced by the uni-modular conditions of $\rm{SU(N_c)}$ theories. Because the abelian nature of the group greatly simplifies the calculations, we can easily write a program at arbitrary orders.
The results for $\rm{U(1)}_3$ are shown in Figs. 3 and 4 respectively. At relatively low truncation orders (compared to the 16th order series expansion [@Zheng]), a scaling window is seen around $1/g^2=1$, and this window becomes wider with the order $N$. Most impressively, there is an obvious tendency to converge to the scaling curve (dashed line). This implies that the obtained physical quantities approach their correct values. Fitting the $N=6$ data in the scaling region, we get [@U1]
$$\begin{aligned}
{\mu_{0} \over ag} =3.1120 \times 10^{-2} exp(2.54(3)/g^2),\end{aligned}$$
$$\begin{aligned}
(M_{A} a g)^2 =365(73) exp( -5.0(2)/g^2),
\end{aligned}$$
or $\mu_0 M_A=0.59(5)$.
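The exponential fits quoted above become, after taking logarithms, linear least-squares problems in $1/g^2$. A minimal sketch of such a fit (Python; the data points here are synthetic, generated from the quoted best-fit form itself, and the sampled coupling range is an assumption for illustration):

```python
import numpy as np

# Log-linear least-squares fit of the scaling form (M_A a g)^2 = c*exp(-b/g^2).
# The "data" below are synthetic, generated from the quoted fit; in a real
# analysis they would be the measured mass-gap values in the scaling window.
b_true, c_true = 5.0, 365.0
inv_g2 = np.linspace(0.8, 1.4, 7)        # assumed sampling of 1/g^2
m2 = c_true * np.exp(-b_true * inv_g2)

# Taking logs, log(m2) = log(c) - b/g^2 is linear in 1/g^2.
slope, intercept = np.polyfit(inv_g2, np.log(m2), 1)
b_fit, c_fit = -slope, np.exp(intercept)
print(b_fit, c_fit)   # recovers b = 5.0, c = 365.0
```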
The non-linear (1+1)-dimensional $\sigma$ model is another interesting application of our method. According to the theoretical expectation,
$$\begin{aligned}
M a \propto {1 \over g^2} exp(-{2 \pi \over g^2} -{g^2 \over 8 \pi}).
\end{aligned}$$
Calculations at $N=5,6,7,8$ have been carried out. Preliminary results are quite encouraging, and they are in reasonable agreement with the Monte Carlo data [@Ha]. To reach the asymptotic scaling region, a higher order calculation seems necessary.
SUMMARY
=======
The reason for the success of our method is that the truncated eigenvalue equation preserves the continuum limit. The results for (2+1)-dimensional models and (1+1)-dimensional $\sigma$ models are presented to support the efficiency and reliability of our method. In conclusion, the eigenvalue equation with a proper truncation scheme may be the most direct and efficient way for extracting the continuum physics.
The members at Zhongshan are supported by the Inst. of High Education, and XQL is sponsored by DESY. We thank H. Arisue, P. Cai, C. Hamer, D. Sch[ü]{}tte and A. Sequ[í]{} for discussions.
[9]{} S.H. Guo, Advances in Science of China, Phys. [**2**]{} (1987) 27.
S.H. Guo, Q.Z. Chen, and L. Li, Phys. Rev. D[**49**]{} (1994) 507.
Q.Z. Chen, S.H. Guo, W.H. Zheng, and X.Y. Fang, Phys. Rev. D[**50**]{} (1994) 3564.
Q.Z. Chen, X.Q. Luo, and S.H. Guo, Phys. Lett. B[**341**]{} (1995) 349.
Q.Z. Chen, X.Q. Luo, S.H. Guo and X.Y. Fang, Phys. Lett. B[**348**]{} (1995) 560.
Q.Z. Chen, S.H. Guo, X.Q. Luo, and A.J Sequ[í]{}-Santonja, these proceedings.
J. Greensite, Nucl. Phys. B[**166**]{} (1980) 113.
C.H. Llewellyn Smith and N.J. Watson, Phys. Lett. B[**302**]{} (1993) 463.
R. Bishop, A. Kendall, L. Wong, and Y. Xian, Phys. Rev. D[**48**]{} (1993) 887.
H. Arisue, Phys. Lett. B[**280**]{} (1992) 85.
C. Hamer, J. Oitmaa, and W.H. Zheng, Phys. Rev. D[**45**]{} (1992) 4652.
M. Teper, Nucl. Phys. B(Proc. Suppl.) [**30**]{} (1993) 529.
S.H. Guo, Q.Z. Chen, X.Y. Fang and P.F. Cai, Zhongshan University Preprint.
Q.Z. Chen, X.Q. Luo, and S.H. Guo, [**HLRZ-95-10**]{}.
X.Y. Fang, J.M. Liu, and S.H. Guo, Zhongshan University Preprint.
A. Hasenfratz and A. Margaritis, Phys. Lett. [**B148**]{} (1984) 129.
---
abstract: 'The deceleration parameter $q$ as a diagnostic of the cosmological accelerating expansion is investigated. By expanding the luminosity distance to the fourth order of redshift and of the so-called $y$-redshift in two redshift bins and fitting the SNIa data (Union2), the marginalized likelihood distribution of the current deceleration parameter shows that the cosmic acceleration is still increasing, but there might be a tendency for the cosmic acceleration to slow down in the near future. We also fit the Hubble evolution data together with the SNIa data by expanding the Hubble parameter to the third order, showing that a presently decelerating expansion is excluded within the $2\sigma$ error. Further exploration of this problem is carried out with a non-parametrization method, by directly reconstructing the deceleration parameter from the distance modulus of SNIa, which depends neither on the validity of general relativity nor on the content of the universe or any assumption regarding cosmological parameters. More accurate observational datasets and more effective methods are still needed to give a clear answer to whether the cosmic acceleration will keep increasing or not.'
address: 'Key Laboratory of Frontiers in Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, P.O. Box 2735, Beijing 100190, China'
author:
- 'Rong-Gen Cai [^1], Zhong-Liang Tuo [^2]'
title: '**Detecting the cosmic acceleration with current data**'
---
INTRODUCTION {#sec1}
============
More than ten years have passed since two independent groups discovered, using the Type Ia Supernova (SNIa) data [@SNIa], that our universe has entered a stage of accelerating expansion. During the past years, many observations have supported this result, such as the large scale structure [@BAO], the cosmic microwave background (CMB) radiation [@CMB], and so on. All of these data strongly suggest that we live in an accelerating universe, with the deceleration parameter $q<0$ at low redshift, and an exotic component named dark energy has been introduced to interpret this phenomenon [@DE]. As more and more observational data are released, the cosmic expansion history can be more accurately determined [@history; @Daly].
Besides the eternal acceleration scenario, such as the well-known $\Lambda$CDM model, some authors have proposed that a matter dominated decelerating expansion will resume soon after the acceleration starts, because the vacuum energy’s anti-gravitational properties will be reversed [@Barrow]. Using the CPL parametrization [@CPL], Shafieloo et al. [@slowing] recently found that the cosmic acceleration might be slowing down. This question is interesting, since it challenges the $\Lambda$CDM model, which fits most of the current data very well and predicts an eternal cosmic acceleration. But this result is far from a final verdict. First, different parametrizations of the dark energy equation of state have different influences on the inferred evolution of the universe [@difference1], and the CPL ansatz might be unable to fit the data simultaneously at low and high redshifts. Second, some tensions might also exist in the SNIa datasets, which may lead to different results in a joint analysis [@Li; @tension], with one suggesting a decreasing acceleration and others just the opposite. In order to avoid these disadvantages, one can consider the so-called cosmographic approach [@cosmographic], directly parameterizing the deceleration parameter, for instance with the linear expansion $q(z)=q_0+q_1z$ [@qz1] or the two-parameter model $q(z)=\frac{1}{2}+\frac{q_1z+q_2}{(1+z)^2}$ [@qz2]. Some authors have found that the universe might have already entered a decelerating expansion era by using only low redshift (for example $z<0.1$ or $z<0.2$) SNIa data [@Wu2010; @Guimaraes].
In this paper, we focus on the deceleration parameter as a diagnostic of the cosmological accelerating expansion. Following the approach introduced in [@Guimaraes], we investigate the deceleration parameter by expanding the luminosity distance to the fourth order of redshift and of the so-called $y$-redshift [@yredshift], and fitting the latest SNIa sample (Union2) released by the Supernova Cosmology Project (SCP) Collaboration [@Union2], which contains 557 data points, includes recent large samples from other surveys, and uses SALT2 for SNIa light-curve fitting. But instead of cutting off the data at low redshift, we use the sub-sample with $z<1.0$ throughout the paper, in order to circumvent the convergence issues discussed in [@convergence] and to make our discussion physically meaningful, while applying another technique to reduce the accumulation of systematic error; we also introduce some physical constraint conditions to make our constraints more robust [@latest]. To make an overall comparison, the Hubble parameter $H(z)$ is also expanded to the third order of the redshift and the $y$-redshift and fitted to the Hubble evolution data [@Hubble; @Hubble2]. This constitutes the second part of the paper. In Sec. 3, we propose a non-parametrization method to reconstruct the deceleration parameters at different redshifts directly from the SNIa data; this method is independent of the calibration of the SNIa data. Sec. 4 contains our conclusion.
COSMOGRAPHIC EVALUATION {#sec2}
=======================
Methodology
-----------
We adopt the assumption that the universe has a flat Friedmann-Robertson-Walker (FRW) metric, so that the scale factor can be approximated by the first several terms in a Taylor series, i.e., as a fifth order polynomial [@expansion]. The coefficients in this expansion are considered as the parameters of the theory. Since taking one more term greatly reduces the constraining power on the parameters (see Table 1 in the first paper of [@xia]), while a four-parameter series gives a better approximation to the Hubble parameter and the distance modulus than other expansions [@latest], we take the first four terms only in our analysis.
Accordingly, the luminosity distance $d_L(z)$ can be expanded to the fourth order of redshift [@cosmographic; @Guimaraes; @expansion], which is a generalized form of the Hubble law, $$\begin{aligned}
d_{L}(z)&=&\frac{1}{H_{0}}[z+\frac{1}{2}(1-q_{0})z^{2}-\frac{1}{6}(1-q_{0}-3q_{0}^{2}+j_{0})z^3{\nonumber}\\
&&+\frac{1}{24}(2-2q_0-15q_0^{2}-15q_0^{3}+10q_0j_0+5j_0+s_0)z^4]+\mathcal{O}(z^5)\,,
\label{eq:dlz}\end{aligned}$$ where the coefficients are expressed with $H_0, q_0, j_0$, and $s_0$, the present values of the Hubble, deceleration, jerk and snap parameters, respectively. This kinematic method can give valuable information regarding the expansion rate of the universe, and thus provides us with a testing ground for all cosmological solutions. To circumvent the convergence issues discussed in [@convergence], we choose the redshift cutoff $z<1.0$, with 537 SNIa events involved.
A so-called $y$-redshift is introduced in [@convergence], where $y=\frac{z}{1+z}$. In this case, the luminosity distance can be expanded as $$\begin{aligned}
d_{L}(y)&=&\frac{1}{H_{0}}[y+\frac{1}{2}(3-q_{0})y^{2}+\frac{1}{6}(11-5q_{0}+3q_{0}^{2}-j_{0})y^3{\nonumber}\\
&&+\frac{1}{24}(50-26q_0+21q_0^{2}-15q_0^{3}+10q_0j_0-7j_0+s_0)y^4]+\mathcal{O}(y^5)\,.
\label{eq:dly}\end{aligned}$$ This expansion converges faster than the expansion in the redshift $z$, and the systematic error in this case is much smaller than in the former case. This expansion has been extensively studied in [@Guimaraes].
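To see how the truncated expansions behave in practice, one can evaluate them against an exact luminosity distance. The sketch below (Python; the flat $\Lambda CDM$ fiducial with $\Omega_{m0}=0.268$ is an assumption for illustration, with distances in units of $c/H_0$) implements Eqs. (\[eq:dlz\]) and (\[eq:dly\]):

```python
import numpy as np

# Fourth-order cosmographic luminosity distances, Eqs. (dlz) and (dly),
# in units of c/H0.
def dl_z(z, q0, j0, s0):
    return (z + 0.5*(1 - q0)*z**2
            - (1 - q0 - 3*q0**2 + j0)*z**3/6
            + (2 - 2*q0 - 15*q0**2 - 15*q0**3 + 10*q0*j0 + 5*j0 + s0)*z**4/24)

def dl_y(z, q0, j0, s0):
    y = z/(1 + z)
    return (y + 0.5*(3 - q0)*y**2
            + (11 - 5*q0 + 3*q0**2 - j0)*y**3/6
            + (50 - 26*q0 + 21*q0**2 - 15*q0**3 + 10*q0*j0 - 7*j0 + s0)*y**4/24)

# Exact comparison for an assumed flat LambdaCDM fiducial, for which
# q0 = 3*Om/2 - 1, j0 = 1, s0 = 1 - 9*Om/2.
Om = 0.268
q0, j0, s0 = 1.5*Om - 1, 1.0, 1 - 4.5*Om

def dl_exact(z, n=20001):
    zz = np.linspace(0.0, z, n)
    f = 1.0/np.sqrt(Om*(1 + zz)**3 + 1 - Om)
    comoving = (zz[1] - zz[0])*(f.sum() - 0.5*(f[0] + f[-1]))  # trapezoid rule
    return (1 + z)*comoving

print(dl_exact(0.5), dl_z(0.5, q0, j0, s0), dl_y(0.5, q0, j0, s0))
```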
In principle, the luminosity distance $d_L$ at low redshift $(z<1)$ can be exactly expressed by the infinite power series expansion in $z$ or $y$, yet with infinitely many parameters involved. In practice, we choose the first four terms of the Taylor series as an approximation, which introduces a truncation error into our analysis. This is considered as the systematic error. The systematic error in (\[eq:dly\]) is relatively large compared with the Union2 data. For example, when $y=1/2$, the systematic error in (\[eq:dly\]) is about $3.1\%$, while it is $1.0\%$ in the SNIa data. In order to make the systematic error smaller than that of the Union2 data, one would have to cut off the data with $z\geq0.398$. Here we employ another method to deal with this issue, rather than striking off high-$z$ SNIa data. First, we separate the data ($z<1.0$) equally into two redshift bins with the dividing point $z_{cut}=0.274$, to guarantee that there are enough data in each bin. Then we employ Eqs. (\[eq:dlz\]) and (\[eq:dly\]) in each redshift bin, respectively. Note that for the $z>z_{cut}$ case, the luminosity distance can be expanded around $z=\frac{1}{2}$, which separates the data ($0.274<z<1$) equally into two parts. This method will reduce the systematic error and give a better constraint. Accordingly, the luminosity distance for $z>z_{cut}$ can be expressed as $$\begin{aligned}
d_L(z>z_{cut})&=&\frac{1}{H_0}[\frac{1}{24}(2-2q_0-15q_0^2-15q_0^3+10q_0j_0+5j_0+s_0)(z-\frac{1}{2})^4{\nonumber}\\
&&-\frac{1}{6}(1-q_0-3q_0^2+j_0-\frac{1}{2}(2-2q_0-15q_0^2-15q_0^3+10q_0j_0+5j_0+s_0)){\nonumber}\\
&&(z-\frac{1}{2})^3+(\frac{1}{16}(2-2q_0-15q_0^2-15q_0^3+10q_0j_0+5j_0+s_0)-\frac{1}{4}{\nonumber}\\
&&(1-q_0-3q_0^2+j_0)+\frac{1}{2}(1-q_0))(z-\frac{1}{2})^2+(1+\frac{1}{2}(1-q_0){\nonumber}\\
&&-\frac{1}{8}(1-q_0-3q_0^2+j_0)+\frac{1}{48}(2-2q_0-15q_0^2-15q_0^3+10q_0j_0+5j_0+s_0)){\nonumber}\\
&&(z-\frac{1}{2})]+d_L(\frac{1}{2})\,.
\label{eq:dlz2}\end{aligned}$$ Using the $y$-redshift, the divide is $y_{cut}=0.215$ and the luminosity distance for the $y>y_{cut}$ case can be expanded around the point $y=\frac{1}{3}$, $$\begin{aligned}
d_L(y>y_{cut})&=&\frac{1}{H_0}[\frac{1}{24}(50-26q_0+21q_0^2-15q_0^3+10q_0j_0-7j_0+s_0)(y-\frac{1}{3})^4{\nonumber}\\
&&+\frac{1}{6}(\frac{1}{3}(50-26q_0+21q_0^2-15q_0^3+10q_0j_0-7j_0+s_0){\nonumber}\\
&&+11-5q_0+3q_0^2-j_0)(y-\frac{1}{3})^3+\frac{1}{2}(\frac{1}{3}(11-5q_0+3q_0^2-j_0){\nonumber}\\
&&+\frac{1}{18}(50-26q_0+21q_0^2-15q_0^3+10q_0j_0-7j_0+s_0)+3-q_0)(y-\frac{1}{3})^2{\nonumber}\\
&&+(1+\frac{1}{3}(3-q_0)+\frac{1}{18}(11-5q_0+3q_0^2-j_0)+\frac{1}{162}{\nonumber}\\
&&(50-26q_0+21q_0^2-15q_0^3+10q_0j_0-7j_0+s_0))(y-\frac{1}{3})]+d_L(\frac{1}{3})\,.
\label{eq:dly2}\end{aligned}$$ The systematic error of Eq. (\[eq:dly2\]) is about 0.411%, which is comparable with that of the most accurately measured SNIa data, whose error is 0.212%.
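Since Eq. (\[eq:dlz2\]) is nothing but the quartic polynomial of Eq. (\[eq:dlz\]) re-expanded about $z=1/2$, the two forms must coincide identically. A quick numerical consistency check (Python; the parameter values are the best-fit redshift-expansion numbers reported below, and the overall factor $1/H_0$ is scaled out):

```python
# Consistency check: Eq. (dlz2) is the quartic of Eq. (dlz) re-expanded
# about z = 1/2, so the two expressions must agree to machine precision.
def dl4(z, q0, j0, s0):
    A3 = 1 - q0 - 3*q0**2 + j0
    A4 = 2 - 2*q0 - 15*q0**2 - 15*q0**3 + 10*q0*j0 + 5*j0 + s0
    return z + 0.5*(1 - q0)*z**2 - A3*z**3/6 + A4*z**4/24

def dl4_shifted(z, q0, j0, s0):
    A3 = 1 - q0 - 3*q0**2 + j0
    A4 = 2 - 2*q0 - 15*q0**2 - 15*q0**3 + 10*q0*j0 + 5*j0 + s0
    u = z - 0.5
    return (A4/24*u**4 - (A3 - A4/2)/6*u**3
            + (A4/16 - A3/4 + 0.5*(1 - q0))*u**2
            + (1 + 0.5*(1 - q0) - A3/8 + A4/48)*u
            + dl4(0.5, q0, j0, s0))

q0, j0, s0 = -0.357, -1.826, -1.878   # best-fit redshift-expansion values
maxdiff = max(abs(dl4(z, q0, j0, s0) - dl4_shifted(z, q0, j0, s0))
              for z in (0.3, 0.5, 0.8, 1.0))
print(maxdiff)   # at machine-precision level: the two forms are the same polynomial
```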
The Hubble parameter $H(z)$ can also be expressed with $H_0, q_0,
j_0, s_0$ by expanding it to the third order. Since $q(z)$ is related to the second order derivative of $d_L(z)$, and to the first order derivative of $H(z)$, this expansion is worthwhile for a full analysis. Using the redshift and the $y$-redshift, the Hubble parameter can be expanded as $$E(z)\equiv\frac{H(z)}{H_0}=1+(1+q_0)z+\frac{1}{2}(j_0-q_0^2)z^2+\frac{1}{6}(3q_0^2+3q_0^3-3j_0-4q_0j_0-s_0)z^3\,,$$ and $$E(y)\equiv\frac{H(y)}{H_0}=1+(1+q_0)y+\frac{1}{2}(2+2q_0+j_0-q_0^2)y^2+\frac{1}{6}(6+6q_0-3q_0^2+3q_0^3+3j_0
-4q_0j_0-s_0)y^3\,.$$ Similarly, in order to reduce the systematic error, we expand the Hubble parameter in the regime of redshift $z>z_{cut}$ as $$\begin{aligned}
\frac{H(z>z_{cut})}{H_0}&=&[1+q_0+\frac{1}{2}(j_0-q_0^2)+\frac{1}{8}(3q_0^2+3q_0^3-3j_0-4q_0j_0-s_0)](z-\frac{1}{2}){\nonumber}\\
&&+\frac{1}{2}[j_0-q_0^2+\frac{1}{2}(3q_0^2+3q_0^3-3j_0-4q_0j_0-s_0)](z-\frac{1}{2})^2{\nonumber}\\
&&+\frac{1}{6}(3q_0^2+3q_0^3-3j_0-4q_0j_0-s_0)(z-\frac{1}{2})^3+\frac{H(\frac{1}{2})}{H_0}\,,\end{aligned}$$ and $$\begin{aligned}
\frac{H(y>y_{cut})}{H_0}&=&[1+q_0+\frac{1}{3}(2+2q_0+j_0-q_0^2)+\frac{1}{18}(6+6q_0-3q_0^2+3q_0^3+3j_0-4q_0j_0{\nonumber}\\
&&-s_0)](y-\frac{1}{3})+\frac{1}{2}[2+2q_0+j_0-q_0^2+\frac{1}{3}(6+6q_0-3q_0^2+3q_0^3+3j_0-4q_0j_0{\nonumber}\\
&&-s_0)](y-\frac{1}{3})^2+\frac{1}{6}(6+6q_0-3q_0^2+3q_0^3+3j_0-4q_0j_0-s_0)(y-\frac{1}{3})^3{\nonumber}\\
&&+\frac{H(\frac{1}{3})}{H_0}\,.\end{aligned}$$
Datasets
--------
The dataset we use is the most recently released Union2 SNIa dataset [@Union2], which contains 537 events for $z<1.0$. We fit the SNIa data by minimizing the $\chi^2$ of the distance modulus; the $\chi_{sn}^{2}$ for SNIa is obtained by comparing the theoretical distance modulus $\mu_{th}(z)=5\log_{10}[d_L(z)]+\mu_{0}$, where $\mu_{0}=42.384-5\log_{10}h$, with the observed $\mu_{ob}$ of the supernovae:
$$\chi_{sn}^{2}=\sum_{i}^{537}\frac{[\mu_{th}(z_{i})-\mu_{ob}(z_{i})]^{2}}{\sigma^{2}(z_{i})}$$ To reduce the effect of $\mu_{0}$, we expand $\chi_{sn}^{2}$ with respect to $\mu_{0}$ [@Nesseris:2005ur] : $$\chi_{sn}^{2}=A+2B\mu_{0}+C\mu_{0}^{2}\label{eq:expand}$$ where $$\begin{aligned}
A & = & \sum_{i}\frac{[\mu_{th}(z_{i};\mu_{0}=0)-\mu_{ob}(z_{i})]^{2}}{\sigma^{2}(z_{i})},\\
B & = & \sum_{i}\frac{\mu_{th}(z_{i};\mu_{0}=0)-\mu_{ob}(z_{i})}{\sigma^{2}(z_{i})},\\
C & = & \sum_{i}\frac{1}{\sigma^{2}(z_{i})}.\end{aligned}$$ Eq. (\[eq:expand\]) has its minimum at $$\widetilde{\chi}_{sn}^{2}=\chi_{sn,min}^{2}=A-B^{2}/C$$ which is independent of $\mu_{0}$. In fact, this is equivalent to performing a uniform marginalization over $\mu_{0}$; the difference between $\widetilde{\chi}_{sn}^{2}$ and the marginalized $\chi_{sn}^{2}$ is just a constant [@Nesseris:2005ur]. We will adopt $\widetilde{\chi}_{sn}^{2}$ as the goodness of fit between the theoretical model and the SNIa data.
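The analytic marginalization over $\mu_0$ above is straightforward to implement. A minimal sketch (Python, with mock moduli standing in for the Union2 points) that also verifies the key property, invariance of $\widetilde{\chi}_{sn}^{2}$ under a constant shift of the theoretical moduli:

```python
import numpy as np

def chi2_sn_marginalized(mu_th0, mu_ob, sigma):
    """chi^2 for SNIa with the offset mu_0 analytically marginalized:
    returns A - B^2/C, where mu_th0 is the theoretical modulus at mu_0 = 0."""
    d = mu_th0 - mu_ob
    A = np.sum(d**2/sigma**2)
    B = np.sum(d/sigma**2)
    C = np.sum(1.0/sigma**2)
    return A - B**2/C

# Mock moduli stand in for the data; the marginalized chi^2 must be
# invariant under any constant shift of the theoretical moduli.
rng = np.random.default_rng(0)
mu_ob = rng.normal(40.0, 0.3, size=100)
sigma = np.full(100, 0.3)
mu_th0 = mu_ob + rng.normal(0.0, 0.1, size=100)
c1 = chi2_sn_marginalized(mu_th0, mu_ob, sigma)
c2 = chi2_sn_marginalized(mu_th0 + 5.0, mu_ob, sigma)
print(c1, c2)   # equal: the shift is absorbed by the marginalization
```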
In order to discriminate the evolution behaviors of different models, another dataset is also employed: the recently compiled Hubble evolution data [@Hubble], containing 7 data points at redshift $z<1.0$. We add three more points at $z=0.24, 0.34, 0.43$, obtained by taking the BAO scale as a standard ruler in the radial direction [@Hubble2], as well as $H(z=0)=74.2\pm3.6(km/(s\cdot
Mpc))$ observed by the Hubble Space Telescope (HST) [@HST]. The $\chi_{H}^{2}$ is defined as
$$\chi_{H}^{2}=\sum_{i=1}^{11}\frac{[H(z_{i})-H_{ob}(z_{i})]^{2}}{\sigma_{i}^{2}}.$$
To make use of the data, we perform a uniform marginalization over the nuisance parameter $H_0$ and get
$$\widetilde{\chi}_{H}^{2}=\chi_{H,min}^{2}=a-c^{2}/b,$$ where $$\begin{aligned}
a & = & \sum_{i}\frac{H_{ob}(z_{i})^{2}}{\sigma^{2}(z_{i})},\\
b & = & \sum_{i}\frac{[H_{th}(z_{i})/H_0]^{2}}{\sigma^{2}(z_{i})},\\
c & = &
\sum_{i}\frac{H_{ob}(z_{i})H_{th}(z_{i})/H_0}{\sigma^{2}(z_{i})}.\end{aligned}$$
Results
-------
The analysis is performed by using a Monte Carlo Markov Chain in the multidimensional parameter space to derive the likelihood. It is natural to employ some physically obvious restrictions to make the estimation of the parameters more robust [@latest]. We impose the positiveness conditions on the Hubble parameter and the luminosity distance when we move randomly in the parameter space, and select the points satisfying these conditions as our chains. Note that these conditions are independent of the content of the universe and of any requirement on the matter density fraction. In this sense they are robust. The priors are: $$\begin{aligned}
d_{L}(z)>0,\\
H(z)>0.\end{aligned}$$
We first investigate the constraint on the model parameters using the SNIa data, and add the Hubble evolution data for comparison; then we analyze the evolution behavior of the deceleration parameter. The best-fitting values and errors of the parameters for $d_L(z)$, $d_L(y)$ and $d_L^{(H)}(z)$, $d_L^{(H)}(y)$ (where the upper index $H$ denotes the case with the Hubble evolution data added) are summarized in Table \[tab:fit\_result\], together with the best-fitting values of the $\Lambda
CDM$ parameters. For the $\Lambda CDM$ model, we use the same dataset, and the best-fitting value of the current matter density fraction is $\Omega_{m0}=0.268$; thus $q_0=\frac{3}{2}\Omega_{m0}-1=-0.598$. The marginalized likelihood distributions of $q_0$ are shown in Figures \[fig:likelihood\] and \[fig:likelihoodh\], respectively.
If we use the SNIa data only, we find that the best-fitting values of the deceleration parameter are both negative but larger than that of the $\Lambda CDM$ model, and we cannot exclude the case $q_0>0$ even within the $1\sigma$ error. What is more, the present value of the acceleration rate shows a slight tendency to decrease in the case of the $y$-redshift expansion, compared with the case of the redshift expansion. Note that even though the $y$-redshift expansion is systematically more accurate than the redshift expansion, the figure of merit (FoM) of $q_0$, $j_0$ and $s_0$ is smaller: it is 1.10 for the redshift expansion and 0.02 for the $y$-redshift expansion. Compared to the redshift expansion, the $y$-redshift expansion gives a weaker constraint on the snap parameter. Recently, Xia et al. [@xia] investigated the fifth order parameter $c_0$ using a similar cosmographic approach and found that the constraint on it is much weaker.
With the Hubble evolution data added, the constraints become stronger, and the FoM for the redshift and $y$-redshift expansions are 3.07 and 0.52, respectively. It is worth noting that by employing the FoM to compare the constraining ability of the two methods, one often assumes that the parameters’ log-likelihood surface can be quadratically approximated around the maximum, and that the likelihood for the data is Gaussian. The FoM may therefore significantly mis-estimate the size of the errors if these assumptions do not hold. On the other hand, the Bayesian Evidence, $E$, provides a good method to compare different models [@Jeffreys] which does not depend on those assumptions. The ratio of Evidences for two models, $B_{12}\equiv E(M_1)/E(M_2)$, also known as the Bayes factor, provides a measure with which to discriminate between the models. We calculate the Bayesian Evidence ratio of the redshift and $y$-redshift expansion methods, and find that for the case of SNIa data only, $\ln B=-0.83$, while for the case with Hubble data added, $\ln B=-0.37$. It appears that the $y$-redshift expansion is a bit better. According to the Jeffreys grades, however, this means that the two expansion methods are comparable in fitting the Union2 dataset. We see from the results that the redshift expansion approach excludes the case $q_0>0$ at about the 95.4% confidence level, which tells us that the present universe is still in the stage of accelerating expansion. The constraint on the snap parameter is much tighter in the case of the redshift expansion, and the constraining ability is improved to some degree with our method compared with some existing studies [@cosmographic; @xia].
parameter $q_0$ $j_0$ $s_0$ $\chi^2$
---------------- ---------------------------------------------- -------------------------------------------------- ---------------------------------------------------- -----------
$d_L(z)$ $-0.357^{-0.533,\,-0.58}_{+0.557,\,+0.57}$ $-1.826^{-4.792,\,-5.293}_{+5.848,\,+5.975}$ $-1.878^{-9.468,\,-13.905}_{+20.484,\,+21.703}$ $524.117$
$d_L(y)$ $-0.150_{+0.915,\,+1.456}^{-0.560,\,-0.795}$ $-6.564_{+11.119,\,+14.520}^{-21.400,\,-26.131}$ $-51.042_{+93.324,\,+125.622}^{-79.602,\,-90.412}$ $524.086$
$d_L^{(H)}(z)$ $-0.403_{+0.318,\,+0.352}^{-0.338,\,-0.431}$ $-1.230_{+3.218,\,+3.676}^{-3.468,\,-4.374}$ $-2.393_{+6.926,\,+7.891}^{-4.631,\,-9.281}$ $527.044$
$d_L^{(H)}(y)$ $-0.255_{+0.269,\,+0.460}^{-0.248,\,-0.445}$ $-7.631_{+5.110,\,+8.528}^{-6.185,\,-8.842}$ $-65.741_{+35.462,\,+72.343}^{-51.988,\,-80.440}$ $528.844$
$\Lambda CDM$ $-0.598$ $1$ $-0.206$ $525.765$
: \[tab:fit\_result\]The best-fitting values with $1\sigma$ and $2\sigma$ errors of $q_0, j_0$ and $s_0$, together with the best-fitting values of the $\Lambda CDM$ model fitted to the same SNIa data.
![\[fig:likelihood\]1D marginalized distribution probability for the present deceleration parameter from the SNIa data. Curves are scaled to $max[P(q_0)]=1.$](likelihood.pdf){width="65.00000%" height="45.00000%"}
![\[fig:likelihoodh\]1D marginalized distribution probability for the present deceleration parameter from the SNIa data and the Hubble evolution data. Curves are scaled to $max[P(q_0)]=1.$](likelihoodh.pdf){width="65.00000%" height="45.00000%"}
It is of interest to investigate the evolution behavior of the deceleration parameter and compare it with the $\Lambda CDM$ model, since different behaviors may tell us something about the universe. Another change in the sign of the cosmic acceleration, if any, would also be interesting. Using the best-fitting parameters, the deceleration parameter can be expanded in the redshift and the $y$-redshift, respectively, as $$q(z)=q_0-(q_0+2q_0^2-j_0)z+\frac{1}{2}(2q_0+8q_0^2+8q_0^3-7q_0j_0-4j_0-s_0)z^2,$$ and $$q(y)=q_0-(q_0+2q_0^2-j_0)y+\frac{1}{2}(4q_0+8q_0^3-7q_0j_0-2j_0-s_0)y^2.$$ We note that the quadratic form is essential for the detection of a transient acceleration phase, since in this scenario $q$ should be positive in the past ($z>2$), change its sign to $q<0$ at moderate redshift ($z_t\sim0.3-1$), and eventually return to the phase of $q>0$. We plot the evolution behaviors of $q$ compared with the $\Lambda CDM$ model in Figure \[fig:q\]. Because of the weak constraint from $d_{L}(y)$, the $1\sigma$ area is too large to give any conclusive information, therefore we do not show it here.
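For the redshift expansion, the sign changes of $q(z)$ are simply the real roots of the quadratic above. A short sketch (Python, using the best-fit values $q_0=-0.357$, $j_0=-1.826$, $s_0=-1.878$ quoted in Table \[tab:fit\_result\]) locates them:

```python
import numpy as np

# q(z) from the redshift expansion above, with the quoted best-fit values.
q0, j0, s0 = -0.357, -1.826, -1.878
b = -(q0 + 2*q0**2 - j0)
a = 0.5*(2*q0 + 8*q0**2 + 8*q0**3 - 7*q0*j0 - 4*j0 - s0)

def q(z):
    return q0 + b*z + a*z**2

# The sign changes of q(z) are the real roots of a*z^2 + b*z + q0:
# the positive root is the past onset of acceleration, the negative one
# the extrapolated future return to deceleration.
disc = b*b - 4*a*q0
roots = np.sort(np.array([(-b - np.sqrt(disc))/(2*a),
                          (-b + np.sqrt(disc))/(2*a)]))
print(roots)
```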
![\[fig:q\]Evolution behaviors of the deceleration parameter. The red one is the best-fitting curve of $q(z)$ and the green line is the evolution of $q(z)$ derived from the $\Lambda CDM$ model. The blue area delimits the 68.3% confidence region for the $q(z)$ reconstruction.](q.pdf){width="80.00000%" height="100.00000%"}
The best-fitting curves in the figures show remarkably different evolution behaviors of the deceleration parameter from the $\Lambda
CDM$ model, all of which indicate a transient acceleration. The $\Lambda CDM$ model predicts that the universe began to accelerate at $z=0.35$, while in our analysis the redshift expansion predicts an earlier transition and the $y$-redshift expansion predicts a similar transition time to the $\Lambda CDM$ model. However, all cases indicate a transient acceleration phase and predict a decelerated stage in the near future $z<0$. This picture is even clearer in the case of the $y$-redshift expansion if both datasets are used, which shows a considerable probability of positive deceleration today and in the future at the 68.3% confidence level, indicating that the $\Lambda CDM$ model would be excluded in the future. The behavior of transient acceleration is also predicted or allowed by several dynamical models [@slowing; @model]. Due to the large uncertainties at high redshifts and in the future, however, we are still not able to confirm the existence of a transient acceleration phase, nor the future deceleration, even at the $1\sigma$ confidence level.
Next, we plot the curves of $E(z)$ for the four cosmographic models and the standard $\Lambda CDM$ model using the best-fitting parameters, compared with the Hubble evolution data, in Figure \[fig:hubble\]. We see from the figure that the curves are all nicely consistent with the data, including the $\Lambda CDM$ model. The two curves fitted with the Hubble evolution data included are much closer to that of the $\Lambda CDM$ model than the other two. Note that 6 of the 11 Hubble evolution data points have large error bars, with $\sigma(obs)>10\%$, and the worst is $\sigma(obs)=61.86\%$. So, better measured data are urgently needed to make a definite discrimination among the different possible behaviors of our universe.
![\[fig:hubble\]Best-fitting curves of $E(z)$ derived from the four cosmographic approaches. The red (or dashed red) line denotes the redshift (or $y$-redshift) expansion using SNIa data only. The green (or dashed green) line denotes the redshift (or $y$-redshift) expansion constrained with the Hubble data involved. The black one is the evolution of $E(z)$ derived from the $\Lambda CDM$ model.](hubble.pdf){width="65.00000%" height="45.00000%"}
Non-parametrization approach {#sec3}
============================
Further exploration of the deceleration parameter is also carried out with a non-parametrization method, by reconstructing the deceleration parameter from the distance modulus of SNIa directly, which depends neither on the validity of general relativity nor on the content of the universe or any assumption regarding cosmological parameters. From a mathematical point of view, the determination of the deceleration parameter is more difficult than that of the Hubble parameter, since the Hubble parameter is related to the first order derivative of the comoving distance, while the deceleration parameter is related to the second order derivative. Because of that, the error in reconstructing the deceleration parameter is also larger. To deal with this issue, Ref. [@Daly] generated $2000$ artificial data sets mimicking what is expected from SNIa measurements by the SNAP satellite (see [@SNAP] or the web site [^3]) and then applied a quadratic expansion to express the dimensionless coordinate distance $y(z)=H_0r(z)$ in each redshift bin, from which the deceleration parameters can be obtained.
There are two sources of error we try to tackle in the implementation. First, the deviation from the true value of any individual data point is assumed to be normally distributed, which is a good approximation only for relatively large samples. Second, any finite dataset, especially a small one, will contain large sample variance. In short, a smaller number of data points produces noisier results, while a larger number leads to more convincing results at the expense of resolution due to larger redshift bins. We try to balance these effects by choosing a relatively large number of SNIa data in each redshift bin, so as to ensure that there are both enough data and not too large bins, and also to reduce the effect of sample variance.
In our analysis, we focus on the Union2 dataset instead of mock datasets and separate the 557 SNIa data points into four redshift bins, each containing 139 data points (149 data points in the fourth bin). In each redshift bin, we reconstruct one data point using the data within that bin, so we get four reconstructed data points in total, as shown in Table \[tab:mu\], where the first column is the averaged redshift in each bin and the third column is the unbiased estimate of the error. We will not make use of the expression of the distance modulus of SNIa, which we believe contains some uncertainties in the determination of the calibration of the data. Instead, we apply the following expression,
$$\mu _{i+1}-\mu_{i}=5\lg\frac{d_L(z_{i+1})}{d_L(z_{i})},$$
where $i$ runs from 1 to 3 in this case. This expression is reliable provided the distance modulus depends only on the logarithm of the luminosity distance. In order to detect the deceleration parameter, and to simplify the numerical treatment, we expand the comoving distance to the second order of redshift in each redshift bin $z_i<z<z_{i+1}$,
$$r(z_{i+1})=r(z_i)+\frac{1}{H(z_i)}[z_{i+1}-z_i-\frac{1+q_i}{2(1+z_i)}(z_{i+1}-z_i)^2],$$
where the Hubble parameter can also be expressed with the deceleration parameter, $$H(z_{i+1})=H(z_i)[1+\frac{1+q_i}{1+z_i}(z_{i+1}-z_i)].$$ Then the luminosity distance can be obtained using the relation $d_L(z)=r(z)(1+z)$,
$$d_L(z_{i+1})=\frac{1+z_{i+1}}{H(z_i)}[z_{i+1}-z_i-\frac{1+q_i}{2(1+z_i)}(z_{i+1}-z_i)^2]+
\frac{1+z_{i+1}}{1+z_i}d_L(z_i).$$
For the beginning of the iteration process, we choose $q_0\approx
q_1$. In the following, we will see that this approximation is reasonable since the redshift separation between them is small, $\delta z=0.089$. We use the best-fitting value of the Hubble parameter given by the $WMAP7$ group, $H_0=70.4~km/(s\cdot
Mpc)$ [@CMB]. We then get 3 deceleration parameters, shown in Table \[tab:q\], where we choose the average redshift as the first column. Since $q(z)$ involves the second order derivative of the distance modulus, it is much harder to constrain accurately, which causes the error bars to blow up in proportion to $(\delta z)^{-5/2}$ [@error; @Tegmark], similar to the case of the equation of state of dark energy $w(z)$. We plot the evolution of the reconstructed deceleration parameters in Figure \[fig:qplot\]. As a comparison, the four best-fitting curves of the cosmographic approach are also shown. It is clear that except for the $\Lambda CDM$ model, all results contain a transient acceleration solution, though the present value of the deceleration parameter is still negative. We cannot discriminate their behaviors at low redshift ($z<0.4$), but the higher redshift behaviors are distinct ($z>0.6$); thus more accurately measured high-$z$ data are urgently needed. On the other hand, we still need a well behaved method to confirm whether there is indeed a transient acceleration phase.
$z$ $\mu$ $\sigma_\mu$
----------- ----------- --------------
$0.10286$ $35.3573$ $0.1867$
$0.17760$ $39.4708$ $0.1616$
$0.40451$ $41.7502$ $0.2926$
$0.78596$ $43.4330$ $0.3439$
: \[tab:mu\]Reconstructed distance modulus of Union2 SNIa data, where the first column is the averaged redshift in each bin and the third column is the unbiased estimate of the error.
$z$ $q(z)$ $\sigma_{+}$ $\sigma_{-}$
--------- ----------- -------------- --------------
$0.089$ $-0.0834$ $+2.0285$ $-2.2051$
$0.291$ $-2.7717$ $+3.1068$ $-1.7184$
$0.595$ $2.2651$ $+1.3969$ $-0.9106$
: \[tab:q\]Reconstructed deceleration parameters of Union2 SNIa data, where the last two columns are the estimates of the errors in each redshift bin, which are also directly constructed from the SNIa data by considering the full error of each data point.
![\[fig:qplot\]Evolution of the deceleration parameter reconstructed from the Union2 SNIa data with $1\sigma$ error bars. Others are the best-fitting curves of the cosmographic approach as before, in company with that of the $\Lambda CDM$ model.](qz.pdf){width="65.00000%" height="45.00000%"}
CONCLUSION {#sec4}
==========
To summarize, we have investigated the deceleration parameter in detail, both with the cosmographic approach and with the non-parametric method. These approaches depend neither on the validity of general relativity nor on the content of the universe. To make the results robust, we focus on reducing the systematic error throughout the paper.
The cosmographic approach to detecting the deceleration parameter is to expand the luminosity distance to fourth or even higher order in redshift or in the so-called $y$-redshift, and then to fit the datasets. But this method contains large uncertainties if we use the high-$z$ data. Some authors cut off the high-$z$ data in the fitting process to avoid this problem, but we used another technique to reduce the systematic error: by expanding the luminosity distance in two redshift bins which contain the same number of SNIa data, the error can be reduced. Since the deceleration parameter is related to the second order derivative of the luminosity distance and the first order derivative of the Hubble parameter, it is also necessary to expand the Hubble parameter to third order, as we have done for the luminosity distance, thus improving the constraining power. The result reveals that the universe may transit from decelerating expansion to a transient accelerating expansion during $0.3<z<1.0$, and the best-fitting evolution behaviors tell us that the universe may return to a decelerating expansion stage in the future even though it is still accelerating at present. Comparison with the Hubble data shows that these results are indistinguishable from those of the $\Lambda CDM$ model, which calls for more accurate observations to reach a conclusive verdict.
The non-parametric method is implemented by reconstructing the deceleration parameters directly from the SNIa data. We simply use the logarithmic form of the distance modulus with respect to the luminosity distance, thus avoiding the uncertainties in the calibration of the data. To reduce the systematic error, we place the same number of SNIa data in each redshift bin, which guarantees enough data per bin without making the bins too wide. The result also indicates a transient accelerating expansion stage of our universe. Since the deceleration parameter is much more sensitive to the accuracy of the SNIa data than the Hubble parameter, it is not an easy task to constrain the result tightly, which calls for further improvement in both observational data and fitting methods.
ACKNOWLEDGEMENTS {#acknowledgements .unnumbered}
================
ZLT is really grateful to Bin Hu, Qiping Su and Hongbo Zhang for their useful suggestions. RGC thanks the organizers and participants for various discussions during the workshop “Dark Energy and Fundamental Theory" supported by the Special Fund for Theoretical Physics from the National Natural Science Foundation of China with grant No. 10947203. This work was supported in part by the National Natural Science Foundation of China (No. 10821504, No. 10975168, No.11035008 and No.11075098), the Ministry of Science and Technology of China under Grant No. 2010CB833004, and a grant from the Chinese Academy of Sciences.
[99]{} A. G. Riess *et al.* [\[]{}Supernova Search Team Collaboration[\]]{}, Astron. J. **116**, 1009 (1998) [\[]{}arXiv:astro-ph/9805201[\]]{}; S. Perlmutter *et al.* [\[]{}Supernova Cosmology Project Collaboration[\]]{}, Astrophys. J. **517**, 565 (1999) [\[]{}arXiv:astro-ph/9812133[\]]{}. D. J. Eisenstein [*et al.*]{} \[SDSS Collaboration\], Astrophys. J. [**633**]{}, 560 (2005) \[arXiv:astro-ph/0501171\]; U. Seljak [*et al.*]{} \[SDSS Collaboration\], Phys. Rev. D [**71**]{}, 103515 (2005) \[arXiv:astro-ph/0407372\]. E. Komatsu [*et al.*]{} \[WMAP Collaboration\], Astrophys. J. Suppl. [**192**]{}, 18 (2011) \[arXiv:1001.4538 \[astro-ph.CO\]\]. D. N. Spergel [*et al.*]{} \[ WMAP Collaboration \], Astrophys. J. Suppl. [**170**]{}, 377 (2007). \[astro-ph/0603449\]. V. Sahni and A. A. Starobinsky, Int. J. Mod. Phys. D [**9**]{}, 373 (2000) \[arXiv:astro-ph/9904398\];
S. M. Carroll, Living Rev. Rel. [**4**]{}, 1 (2001) \[arXiv:astro-ph/0004075\]; P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. [**75**]{}, 559 (2003); T. Padmanabhan, Phys. Rept. [**380**]{}, 235 (2003); E. J. Copeland, M. Sami and S. Tsujikawa, Int. J. Mod. Phys. D [**15**]{}, 1753 (2006); M. Li, X. D. Li, S. Wang and Y. Wang, Commun. Theor. Phys. [**56**]{} (2011) 525 \[arXiv:1103.5870 \[astro-ph.CO\]\].
Y. Wang and M. Tegmark, Phys. Rev. D [**71**]{}, 103513 (2005) \[arXiv:astro-ph/0501351\].
R. A. Daly and S. G. Djorgovski, Astrophys. J. [**597**]{}, 9 (2003) \[arXiv:astro-ph/0305197\]; R. A. Daly and S. G. Djorgovski, Astrophys. J. [**612**]{}, 652 (2004) \[arXiv:astro-ph/0403664\]. J. Barrow, R. Bean and J. Magueijo, Mon. Not. Roy. Astron. Soc. [**316**]{}, L41 (2000) \[arXiv:astro-ph/0004321\].
M. Chevallier and D. Polarski, Int. J. Mod. Phys. D [**10**]{}, 213 (2001) \[arXiv:gr-qc/0009008\]. R. R. Caldwell and E. V. Linder, Phys. Rev. Lett. [**95**]{}, 141301 (2005) \[arXiv:astro-ph/0505494\]. A. Shafieloo, V. Sahni and A. A. Starobinsky, Phys. Rev. D [**80**]{}, 101301 (2009) \[arXiv:0903.5141 \[astro-ph.CO\]\].
Y. G. Gong, B. Wang, R. G. Cai, J. Cosmol. Astropart. Phys. [**04**]{}, 019 (2010); Z. Li, P. Wu and H. Yu, Phys. Lett. B [**695**]{}, 1 (2011) \[arXiv:1011.1982 \[gr-qc\]\].
S. Nesseris and L. Perivolaropoulos, JCAP [**0702**]{}, 025 (2007) \[arXiv:astro-ph/0612653\]. M. V. John, Astrophys. J. [**614**]{}, 1 (2004); M. Visser, General Relativity and Gravitation [**37**]{}, 1541 (2005); C. Cattoen, M. Visser, Phys. Rev. D [**78**]{}, 063501 (2008); M. Visser, Class. Quant. Grav. [**21**]{}, 2603 (2004); M. V. John, Astrophys. Space Sci. [**330**]{}, 7 (2010).
A. C. C. Guimaraes, J. V. Cunha and J. A. S. Lima, JCAP [**0910**]{}, 010 (2009) \[arXiv:0904.3550 \[astro-ph.CO\]\];
J. V. Cunha, Phys. Rev. D [**79**]{}, 047301 (2009); Y. Gong, A. Wang, Phys. Rev. D [**75**]{}, 043520 (2007).
A. C. C. Guimaraes and J. A. S. Lima, Class. Quant. Grav. [**28**]{}, 125026 (2011) \[arXiv:1005.2986 \[astro-ph.CO\]\].
P. Wu and H. W. Yu, arXiv:1012.3032 \[astro-ph.CO\]. C. Cattoen and M. Visser, Class. Quant. Grav. [**24**]{}, 5985 (2007) \[arXiv:0710.1887 \[gr-qc\]\].
R. Amanullah [*et al.*]{}, Astrophys. J. [**716**]{}, 712 (2010) \[arXiv:1004.1711 \[astro-ph.CO\]\]. C. Cattoën, M. Visser, Classical and Quantum Gravity [**24**]{}, 5985 (2007). S. Capozziello, R. Lazkoz and V. Salzano, arXiv:1104.3096 \[astro-ph.CO\]. D. Stern, [*et al.*]{}, J. Cosmol. Astropart. Phys. [**02**]{}, 008 (2010). E. Gaztanaga, A. Cabre and L. Hui, Mon. Not. Roy. Astron. Soc. [**399**]{}, 1663 (2009) \[arXiv:0807.3551 \[astro-ph\]\].
S. Weinberg, Gravitation and Cosmology, John Wiley & Sons, New York (1972). V. Vitagliano, J. Q. Xia, S. Liberati, M. Viel, JCAP [**3**]{}, 5 (2010); J. Q. Xia, V. Vitagliano, S. Liberati and M. Viel, arXiv:1103.0378. A. G. Riess, [*et al.*]{}, Astrophys. J. [**699**]{}, 539 (2009). Jeffreys H., Theory of Probability, Oxford University Press, Oxford (1961). S. Nesseris and L. Perivolaropoulos, Phys. Rev. D [**72**]{}, 123519 (2005).
G. Aldering [*et al.*]{} \[SNAP Collaboration\], arXiv:astro-ph/0209550. F. C. Carvalho, J. S. Alcaniz, J. A. S. Lima and R. Silva, Phys. Rev. Lett. [**97**]{}, 081301 (2006) \[arXiv:astro-ph/0608439\]. M. Tegmark, Phys. Rev. D, [**66**]{}, 103507 (2002). M. Tegmark, D. J. Eisenstein, W. Hu, R. G. Kron, \[astro-ph/9805117\].
[^1]: Email: cairg@itp.ac.cn
[^2]: Email: tuozhl@itp.ac.cn
[^3]: http://snap.lbl.gov
---
abstract: 'The dynamics of the one-dimensional spin-1/2 quantum XXZ model with random fields is investigated by the recurrence relations method. When the fields satisfy the bimodal distribution, the system shows a crossover between a collective-mode behavior and a central-peak one with increasing field when the anisotropy parameter $\Delta $ is small; a disordered behavior replaces the crossover as $\Delta $ increases. For the cases of Gaussian and double-Gaussian distributions, when the standard deviation is small, the results are similar to those of the bimodal distribution; when the standard deviation is large enough, the system shows only a disordered behavior regardless of $\Delta $.'
author:
- 'Yin-Yang Shen$^{a}$'
- 'Xiao-Juan Yuan$^{a,b}$'
- 'Xiang-Mu Kong$^{a}$'
title: 'Effects of random fields on the dynamics of the one-dimensional quantum XXZ model'
---
INTRODUCTION
============
Quantum spin systems are of considerable interest because they provide a ground for studying quantum many-particle phenomena and offer the possibility to compare theoretical and experimental results. The dynamical properties of quantum spin systems have received much attention and a variety of results have been obtained over the years. Among the systems studied, the one-dimensional (1-D) spin-1/2 XXZ chain is one of the nontrivial models and has been used to describe several quasi-1-D compounds such as CsCoBr$_{\text{3}}$ [@3; @3aa; @5], CsCoCl$_{\text{3}}$ [@4; @5], Cs$_{\text{2}}$CoCl$_{\text{4}}$ [@1; @2] and TiCoCl$_{\text{3}}$ [@6]. Concerning the dynamics of this model, the spin correlation function has been studied by approximation and numerical methods [@A; @B; @C; @D; @E; @F; @G; @Bohm; @Lee; @Lee2]. Strong numerical evidence was found for a change of the bulk-spin autocorrelation function at $T=\infty $ from Gaussian decay to exponential decay as the anisotropy parameter $\Delta $ increases from $0$, and from exponential decay to power-law decay as $\Delta $ approaches 1, replaced by a more rapid decay upon further increase of $\Delta $ [@Bohm]. The dynamics of the equivalent-neighbor XXZ model was studied in much detail at $T=0$ and $T=\infty $ using different calculational techniques, and the same long-time asymptotic behavior was found for the correlation function [@Lee; @Lee2].
More efforts have been concentrated on the dynamics of random quantum spin systems in the past decades, since such systems can describe more realistic materials such as ferroelectric crystals [@yingyong1; @yy1; @yy2; @yy3] and spin glasses [@yingyong2]. Florencio and Sá Barreto studied the random transverse Ising model and found that the system undergoes a crossover between a collective-mode behavior and a central-peak one when the exchange couplings or external fields satisfy the bimodal distribution [@J.; @Florencio]. Later, the dynamics of the four-body transverse Ising model was investigated for the cases of bond and field randomness [@four], following which a series of spin systems, such as the XY model in zero magnetic field, the transverse Ising model with Gaussian disorder and the two-dimensional transverse Ising model, have also been studied [@xy1; @xy2; @Z.-Q.; @Liu; @Zhen-Bo; @Xu; @chen]. A recent study shows that the next-nearest-neighbor interaction has a strong influence on the dynamics of the Ising system [@yuan]. For the random XXZ model, the spin correlations have been investigated by exact diagonalization [@antiferromagnetic], the real space renormalization group method [@T; @infinite; @random1; @random2] and a finite-chain study [@random3; @Heinrich; @Roder], etc. The infinite-temperature spin-spin correlation function has been found to display exponential localization in space, indicating insulating behavior, for large enough random fields [@T; @infinite]. The transverse correlation function at $T=0$ has been found to change from a power-law decay to an exponential decay depending on the exchange disorder [@Heinrich; @Roder].
In this paper, we investigate the effects of random fields on the time evolution of the quantum XXZ model in the high-temperature limit. We find that the system with random fields satisfying the bimodal distribution undergoes a crossover between a central-peak behavior and a collective-mode one with increasing field when the anisotropy parameter $\Delta $ is small (e.g., $\Delta =0.01$), but the collective-mode behavior vanishes as $\Delta $ approaches 0.4; then, when $\Delta $ increases to $1.0$, the central-peak behavior vanishes as well and the system just shows a disordered behavior.
This paper is organized as follows. In Sec. \[come on\] we give a brief introduction to the 1-D quantum XXZ model and the recurrence relations method. In Sec. \[A ZA\] we discuss the results, and Sec. \[jiayou\] contains a summary.
MODEL AND METHOD\[come on\]
===========================
The Hamiltonian of the 1-D quantum spin-1/2 XXZ model with external fields can be written as$$H=-\frac{J}{2}\sum_{i}\left[ \left( \sigma _{i}^{x}\sigma _{i+1}^{x}+\sigma _{i}^{y}\sigma _{i+1}^{y}\right) +\Delta \sigma _{i}^{z}\sigma _{i+1}^{z}\right] -\frac{1}{2}\sum_{i}B_{i}\sigma _{i}^{z}, \label{1}$$where $\sigma _{i}^{\alpha }$ $\left( \alpha =x,y,z\right) $ are Pauli spin operators, and $J$ and $\Delta $ are the exchange coupling and the anisotropy parameter, respectively. $B_{i}$ denote the external fields, which may be regarded as random variables. This Hamiltonian contains two special cases: the Ising model for $\Delta =\infty $ and the isotropic XY model for $\Delta =0$.
The spin autocorrelation function plays an important part in the study of the dynamics of quantum spin systems. It is defined as$$C\left( t\right) =\overline{\left\langle \sigma _{j}^{x}\left( t\right) \sigma _{j}^{x}\left( 0\right) \right\rangle }, \label{2}$$where $\overline{\langle \cdots \rangle }$ denotes an ensemble average followed by an average over the random variables. The corresponding spectral density, which is the Fourier transform of $C\left( t\right) $, can be expressed as $$\Phi \left( \omega \right) =\int_{-\infty }^{+\infty }e^{i\omega t}C\left( t\right) dt, \label{3a}$$and, for mathematical simplicity, $\Phi \left( \omega \right) $ can be obtained as$$\Phi \left( \omega \right) =\underset{\varepsilon \rightarrow 0}{\lim }\left[ \text{Re}\int_{0}^{\infty }dtC\left( t\right) e^{-zt}\right] , \label{3b}$$where $z=\varepsilon +i\omega ,$ $\varepsilon >0.$
The recurrence relations method has proved to be very powerful in the calculation of dynamic correlation functions [@chen; @Lee; @fangfa; @Zhen-Bo; @Xu; @Z.-Q.; @Liu; @four; @J.; @Florencio; @F; @yuan]. Next, we give a brief introduction to this method.
Consider a Hermitian operator $\sigma _{j}^{x}\left( t\right) $ as a dynamical variable and expand it in an orthogonal set in a Hilbert space,$$\sigma _{j}^{x}\left( t\right) =\sum_{\nu =0}^{\infty }a_{\nu }\left( t\right) f_{\nu }, \label{4}$$where $a_{\nu }\left( t\right) $ are the time-dependent coefficients. The basis vectors $f_{\nu }$ satisfy the following set of recurrence relations: $$f_{\nu +1}=iLf_{\nu }+\Delta _{\nu }f_{\nu -1},\text{ \ }\nu \geq 0, \label{5a}$$$$\Delta _{\nu }=\frac{\left( f_{\nu },f_{\nu }\right) }{\left( f_{\nu -1},f_{\nu -1}\right) },\text{ \ }\nu \geq 1, \label{5b}$$where $L\equiv \left[ H,\text{ }\right] $ is the quantum Liouville operator, $\left( f_{\nu },f_{\nu }\right) =\overline{\left\langle f_{\nu }f_{\nu }^{\dagger }\right\rangle },$ $f_{-1}\equiv 0$ and $\Delta _{0}\equiv 1.$ The coefficients $a_{\nu }\left( t\right) $ in Eq. $\left( \ref{4}\right) $ satisfy the relation$$\Delta _{\nu +1}a_{\nu +1}\left( t\right) =-\dot{a}_{\nu }\left( t\right) +a_{\nu -1}\left( t\right) ,\text{ \ }\nu \geq 0, \label{6}$$where $\dot{a}_{\nu }\left( t\right) =\dfrac{da_{\nu }\left( t\right) }{dt}$ and $a_{-1}\left( t\right) \equiv 0.$
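The recurrence (\[5a\])-(\[5b\]) can be cross-checked by brute force on a small chain. The following Python sketch (an illustrative check, not part of our calculation) builds the Hamiltonian (\[1\]) for a short periodic chain as an explicit matrix, applies $f_{\nu +1}=iLf_{\nu }+\Delta _{\nu }f_{\nu -1}$ with the infinite-temperature inner product $(f,f)=\text{Tr}(ff^{\dagger })/2^{N}$, and, for uniform fields, reproduces the norm $\left( f_{1},f_{1}\right) =B_{j}^{2}+2J^{2}+2J^{2}\Delta ^{2}$ obtained analytically in the next section.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, i, N):
    """Pauli operator acting on site i of an N-site chain."""
    return reduce(np.kron, [op if k == i else I2 for k in range(N)])

def xxz_hamiltonian(N, J, Delta, B):
    """Eq. (1) on a periodic N-site chain with site-dependent fields B[i]."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N):
        j = (i + 1) % N
        H -= J / 2 * (site_op(sx, i, N) @ site_op(sx, j, N)
                      + site_op(sy, i, N) @ site_op(sy, j, N)
                      + Delta * site_op(sz, i, N) @ site_op(sz, j, N))
        H -= B[i] / 2 * site_op(sz, i, N)
    return H

def first_recurrants(H, f0, n):
    """Delta_1 .. Delta_n from f_{nu+1} = i[H, f_nu] + Delta_nu f_{nu-1},
    with the infinite-temperature inner product (f, f) = Tr(f f^dag)/dim."""
    dim = f0.shape[0]
    inner = lambda f: np.trace(f @ f.conj().T).real / dim
    f_prev, f = np.zeros_like(f0), f0
    norm_prev, norm = 1.0, inner(f0)   # Delta_0 = 1, f_{-1} = 0
    delta, deltas = 1.0, []
    for _ in range(n):
        f_prev, f = f, 1j * (H @ f - f @ H) + delta * f_prev
        norm_prev, norm = norm, inner(f)
        delta = norm / norm_prev
        deltas.append(delta)
    return deltas
```

For a 3-site periodic chain both neighbors of site $j$ are distinct, so the analytic expression for $\Delta _{1}$ applies unchanged.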
The spin autocorrelation function $C\left( t\right) $ can be expressed as a moment expansion$$C(t)=\sum_{k=0}^{\infty }\frac{\left( -1\right) ^{k}}{\left( 2k\right) !}\mu ^{2k}t^{2k}, \label{C}$$with$$\mu ^{2k}=\frac{1}{Z}\overline{\text{Tr}\sigma _{j}^{x}\left[ H,\left[ H,\cdots \left[ H,\sigma _{j}^{x}\right] \cdots \right] \right] }\text{,} \label{ju}$$where $\mu ^{2k}$ is the $2k$th moment of $C\left( t\right) $. Supposing that the first $Q$ moments have been calculated from Eq. (\[ju\]), we can obtain $C\left( t\right) $ by constructing the Padé approximant. Because of the mathematical complexity, only a finite number of moments can be obtained and the truncated expansion diverges at large times, so we can only discuss the short-time behavior of $C\left( t\right) $.
By taking the inner product of $\sigma _{j}^{x}\left( t\right) $ with $\sigma _{j}^{x}\left( 0\right) $, from Eq. (\[4\]), we find that $a_{0}\left( t\right) $ is the spin autocorrelation function$$a_{0}\left( t\right) =\overline{\left\langle \sigma _{j}^{x}\left( t\right) \sigma _{j}^{x}\left( 0\right) \right\rangle }=C\left( t\right) .$$Applying the Laplace transform $a_{\nu }(z)=\int_{0}^{\infty }e^{-zt}a_{\nu }\left( t\right) dt$ $\left( z=\varepsilon +i\omega ,\varepsilon >0\right) $ to Eq. $\left( \ref{6}\right) ,$ $a_{0}\left( z\right) $ in the continued-fraction representation can be obtained as $$a_{0}\left( z\right) =\frac{1}{z+\dfrac{\Delta _{1}}{z+\dfrac{\Delta _{2}}{z+\cdots }}}. \label{7}$$Because only a finite number of recurrants can be obtained, it is necessary to terminate the continued fraction with some scheme. Here, we use the Gaussian terminator [@Gaussian; @F], which best serves our problem. With the help of the Padé approximant and the Gaussian terminator, we can obtain the spin autocorrelation function $C\left( t\right) $ and the corresponding spectral density $\Phi \left( \omega \right) $ of the system, respectively.
RESULTS AND DISCUSSIONS\[A ZA\]
===============================
In order to investigate the spin autocorrelation function $C\left( t\right) $ for $\sigma _{j}^{x}$, we choose the zeroth basis vector $f_{0}$ $=\sigma
_{j}^{x}$. With the recurrence relation Eq. $\left( \ref{5a}\right) $, the remaining basis vectors can be obtained: $$f_{1}=B_{j}\sigma _{j}^{y}+J\Delta \sigma _{j}^{y}\sigma _{j-1}^{z}-J\sigma
_{j-1}^{y}\sigma _{j}^{z}-J\sigma _{j+1}^{y}\sigma _{j}^{z}+J\Delta \sigma
_{j}^{y}\sigma _{j+1}^{z},$$$$\begin{aligned}
f_{2} &=&\left( \Delta _{1}-B_{j}^{2}-2J^{2}-2J^{2}\Delta ^{2}\right) \sigma
_{j}^{x}+2J^{2}\Delta \sigma _{j-1}^{x}+2J^{2}\Delta \sigma
_{j+1}^{x}+J^{2}\Delta \sigma _{j-1}^{x}\sigma _{j-2}^{y}\sigma _{j}^{y} \\
&&-J^{2}\Delta \sigma _{j-2}^{x}\sigma _{j-1}^{y}\sigma _{j}^{y}+J^{2}\sigma
_{j+1}^{x}\sigma _{j-1}^{y}\sigma _{j}^{y}-2J^{2}\sigma _{j}^{x}\sigma
_{j-1}^{y}\sigma _{j+1}^{y}+J^{2}\sigma _{j-1}^{x}\sigma _{j}^{y}\sigma
_{j+1}^{y} \\
&&-J^{2}\Delta \sigma _{j+2}^{x}\sigma _{j}^{y}\sigma _{j+1}^{y}+J^{2}\Delta
\sigma _{j+1}^{x}\sigma _{j}^{y}\sigma _{j+2}^{y}-2B_{j}J\Delta \sigma
_{j}^{x}\sigma _{j-1}^{z}+\left( B_{j-1}+B_{j}\right) J\sigma
_{j-1}^{x}\sigma _{j}^{z} \\
&&+\left( B_{j-1}+B_{j}\right) J\sigma _{j+1}^{x}\sigma _{j}^{z}+J^{2}\Delta
\sigma _{j-1}^{x}\sigma _{j}^{z}\sigma _{j-2}^{z}-J^{2}\sigma
_{j-2}^{x}\sigma _{j-1}^{z}\sigma _{j}^{z}+J^{2}\Delta \sigma
_{j+1}^{x}\sigma _{j}^{z}\sigma _{j-1}^{z} \\
&&-2B_{j}J\Delta \sigma _{j}^{x}\sigma _{j+1}^{z}-2J^{2}\Delta ^{2}\sigma
_{j}^{x}\sigma _{j-1}^{z}\sigma _{j+1}^{z}+J^{2}\Delta \sigma
_{j-1}^{x}\sigma _{j}^{z}\sigma _{j+1}^{z}-J^{2}\sigma _{j+2}^{x}\sigma
_{j}^{z}\sigma _{j+1}^{z} \\
&&+J^{2}\Delta \sigma _{j+1}^{x}\sigma _{j}^{z}\sigma _{j+2}^{z},\end{aligned}$$etc. The first three norms of the basis vectors are obtained as follows:$$\left( f_{0},f_{0}\right) =1,$$$$\left( f_{1},f_{1}\right) =\overline{B_{j}^{2}}+2\overline{J^{2}}+2\overline{J^{2}\Delta ^{2}},$$$$\begin{aligned}
\left( f_{2},f_{2}\right) &=&\overline{\Delta _{1}^{2}}-2\overline{\Delta _{1}B_{j}^{2}}+\overline{B_{j}^{4}}-4\overline{\Delta _{1}J^{2}}+\overline{B_{j-1}^{2}J^{2}}+2\overline{B_{j-1}B_{j}J^{2}}+6\overline{B_{j}^{2}J^{2}}+2\overline{B_{j+1}B_{j}J^{2}} \\
&&+\overline{B_{j+1}^{2}J^{2}}+12\overline{J^{4}}-4\overline{\Delta _{1}J^{2}\Delta ^{2}}+12\overline{B_{j}^{2}J^{2}\Delta ^{2}}+24\overline{J^{4}\Delta ^{2}}+8\overline{J^{4}\Delta ^{4}}.\end{aligned}$$Then the continued-fraction coefficients can be obtained from Eq. (\[5b\]).
Next, numerical results of the spin autocorrelation functions $C\left(
t\right) $ and the spectral densities $\Phi \left( \omega \right) $ are given when the external fields satisfy three types of distributions: bimodal distribution, Gaussian distribution and double-Gaussian distribution. With special values of the anisotropy parameter $\Delta $, the effects of the random external fields on the dynamics of the given system are investigated as follows.
Bimodal distribution
--------------------
We first consider the case that the external fields satisfy the bimodal distribution,$$P\left( B_{i}\right) =p\delta \left( B_{i}-B_{1}\right) +\left( 1-p\right) \delta \left( B_{i}-B_{2}\right) . \label{8}$$For simplicity and without loss of generality, we choose the exchange coupling $J=1.0$, which sets the energy scale, and the external fields $B_{1}=1.8$, $B_{2}=0.2$. For $\Delta =0.01,$ $0.1,$ $0.4$ and $1.0$, the results of the spin autocorrelation function $C\left( t\right) $ and the spectral density $\Phi \left( \omega \right) $ are shown in Fig. 1 and Fig. 2, respectively. The continued-fraction coefficients are presented in the insets.
When $\Delta $ is small (e.g., $\Delta =0.01,$ $0.1$), the spin autocorrelation function \[see Figs. 1(a1), (a2)\] changes from a monotonically decreasing behavior to a damped oscillatory one as $p$ increases. When $p=0$, the exchange coupling energy is higher than the external field energy, and the interaction among the spins is stronger than that between the spins and the external field. The dynamics is dominated by the exchange coupling and shows a central-peak behavior. When $p=0.25$, the spin autocorrelation function has a slight fluctuation, and the fluctuation becomes acute with the increase of the external fields. When $p=1$, the value of the external field is larger than that of the exchange coupling. The system behaves as the precession of independent spins about the field, with the exchange coupling causing a damping; hence, the system presents a collective-mode behavior. Figs. 2(b1) and (b2) show that the peak of $\Phi \left( \omega \right) $ moves from $\omega =0$ to $2$ as $p$ increases, which also supports the conclusion that the system undergoes a crossover from the central-peak behavior to the collective-mode one with increasing field when $\Delta $ is small.
Figure 1(a3) shows that when $\Delta =0.4$, the dynamics of the system changes as $p$ increases from a central-peak regime to a disordered behavior which is intermediate between a central-peak one and a collective-mode one. The spin autocorrelation function for $p=0$ decays monotonically to 0 and the spectral density is peaked at $\omega =0$, so the system is in the central-peak regime where the dynamics is mostly dominated by the exchange coupling. By comparing the curve for $p=0$ in Figs. 1(a1) or (a2) to the one in Fig. 1(a3), we find that $C\left( t\right) $ for $\Delta =0.4$ decays faster than that for $\Delta =0.01$ or $\Delta =0.1$. As $p$ increases to 1.0, the system shows not a collective-mode behavior but a disordered one; the spectral density displayed in Fig. 2(b3) broadens toward high frequency.
When $\Delta =1.0$ \[see Figs. 1(a4) and 2(b4)\], the system presents a disordered behavior as the concentration of $B_{1}$ increases; i.e., in this case, the dynamics of the system cannot be characterized by either behavior singly. In particular, the case of $p=0$ is very similar to the most-disordered case mentioned in Ref. [@J.; @Florencio]. By comparing the results for $\Delta =0.01$ to those of the 1-D XY model, we find that they are very similar, so the effect of the anisotropy parameter can basically be ignored; in other words, the dynamics of the system is governed by the competition between the spin-spin interactions and the external fields. Comparing Figs. 1(a1), (a2), (a3) and (a4), it can be found that as the anisotropy parameter $\Delta $ increases, the crossover from the central-peak behavior to the collective-mode one vanishes. When $\Delta =1.0,$ the system becomes the 1-D quantum Heisenberg system. The anisotropy parameter, together with the external field and the exchange coupling, decides the dynamic behavior of the system. The competition between the spin-spin interactions and the external fields becomes very fierce, which drives the system to be disordered.
Gaussian distribution
---------------------
In this case, the external fields satisfy the Gaussian distribution,$$P\left( B_{i}\right) =\frac{1}{\sqrt{2\pi }\sigma _{B}}\exp \left[ -\left( B_{i}-B\right) ^{2}/2\sigma _{B}^{2}\right] , \label{9}$$where $B$ is the mean value of the external fields and $\sigma _{B}$ is the standard deviation. Here, we take the exchange coupling $J=1.0$ and $B=0.0,$ $0.5,$ $1.0,$ $1.5,$ $2.0$. For the anisotropy parameter $\Delta =0.1,$ $0.4$ and $1.0$, the results of the spin autocorrelation function and the spectral density are displayed in Fig. 3 for $\sigma _{B}=0.3$ and Fig. 4 for $\sigma _{B}=3.0.$
Figure 3(a1) shows that when the standard deviation is small $\left( \sigma _{B}=0.3\right) $ and $\Delta =0.1$, the system shows two types of dynamics as $B$ increases: the central-peak behavior and the collective-mode behavior. In this case, the effect of $\Delta $ can basically be ignored, and the dynamics of the system changes according to the value of $B$. As $\Delta $ increases from $0.1$ to $1.0$, a disordered behavior replaces the central-peak behavior or the collective-mode behavior. Figs. 3(a2), (b2) and (c2) also show that the external fields drive the system to show a crossover when $\Delta =0.1$, and that as $\Delta $ increases to 1.0 the system displays a disordered behavior.
When the standard deviation is large enough $\left( \sigma _{B}=3.0\right) $ \[see Fig. 4\], the system shows only a disordered behavior, intermediate between the central-peak behavior and the collective-mode one, regardless of the anisotropy parameter. From Fig. 3 and Fig. 4, it is not difficult to see that both the crossover and the disordered behavior are replaced by a single disordered behavior with the increase of $\sigma _{B}$. This is because the external fields are large and span a wide range of values when $\sigma _{B}=3.0$; the large external fields drive the spin orientations of the system to be disordered.
Double-Gaussian distribution
----------------------------
The double-Gaussian distribution is a common generalization of the bimodal and Gaussian distributions, which can describe both the discrete and the continuous cases,$$P\left( B_{i}\right) =p\frac{1}{\sqrt{2\pi }\sigma _{B}}\exp \left[ -\left( B_{i}-B_{1}\right) ^{2}/2\sigma _{B}^{2}\right] +\left( 1-p\right) \frac{1}{\sqrt{2\pi }\sigma _{B}}\exp \left[ -\left( B_{i}-B_{2}\right) ^{2}/2\sigma _{B}^{2}\right] , \label{10}$$where $0\leq p\leq 1$ represents the concentration of the Gaussian component centered at $B_{1}$. The external fields satisfy Eq. $\left( \ref{10}\right) $, in which the mean values are $B_{1}=1.8$ and $B_{2}=0.2,$ and the exchange coupling is constant ($J=1.0$). $C\left( t\right) $ and $\Phi \left( \omega \right) $ for $\Delta =0.1,$ $0.4$ and $1.0$ are calculated and the results are shown in Fig. 5 and Fig. 6, respectively. The figures indicate that when $\sigma _{B}=0.3$, the system for $\Delta =0.1$ shows a crossover between a central-peak behavior and a collective-mode one, and a disordered behavior as $\Delta $ increases to $1.0$. However, the system only shows a disordered behavior when $\sigma _{B}=3.0$, no matter what value $\Delta $ takes.
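For reference, the disorder averages entering the recurrants for the three distributions are easy to realize by direct sampling. The sketch below (illustrative only; the averages in this paper are evaluated analytically, and the value $p=0.75$ is a hypothetical choice) draws fields from Eqs. (\[8\])-(\[10\]) and checks a typical moment such as $\overline{B_{i}^{2}}$, which enters $\left( f_{1},f_{1}\right) $.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fields(n, dist, p=0.75, B1=1.8, B2=0.2, sigma_B=0.3):
    """Draw n random fields B_i from the bimodal (Eq. (8)), Gaussian
    (Eq. (9), mean B1) or double-Gaussian (Eq. (10)) distribution."""
    if dist == 'bimodal':
        return np.where(rng.random(n) < p, B1, B2)
    if dist == 'gaussian':
        return rng.normal(B1, sigma_B, n)
    if dist == 'double-gaussian':
        return np.where(rng.random(n) < p,
                        rng.normal(B1, sigma_B, n),
                        rng.normal(B2, sigma_B, n))
    raise ValueError(dist)

# Monte Carlo estimate of the disorder average of B^2
B = sample_fields(200000, 'double-gaussian')
mean_B2 = np.mean(B**2)
# analytic value: p*(B1^2 + sigma^2) + (1-p)*(B2^2 + sigma^2)
exact = 0.75 * (1.8**2 + 0.3**2) + 0.25 * (0.2**2 + 0.3**2)
```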
From the above discussion, we can see that the dynamical behavior of the system is determined by the competition between the spin-spin interactions and the external fields, not by the particular form of the disorder distribution. Also, it is easy to find that the dynamics of the system is similar to that of the 1-D quantum XY model [@Zhen-Bo; @Xu] when $\Delta $ is small (e.g., $\Delta =0.01$). When $\Delta =0,$ the XXZ model becomes the isotropic XY model, and we find that the above results are the same as those in Ref. [@Zhen-Bo; @Xu] when we take $\Delta =0.$
SUMMARY\[jiayou\]
=================
We have studied the dynamics of the 1-D spin-$1/2$ quantum XXZ model in random external fields in the high-temperature limit by means of the recurrence relations method. We find that the dynamics of the system with the three types of random distributions is governed by the competition among the external field, the anisotropy parameter and the exchange coupling, but the anisotropy parameter can basically be ignored when it is small (e.g., $\Delta =0.01$). For the case of bimodal disorder, when the anisotropy parameter $\Delta $ is small (e.g., $\Delta =0.01$), the dynamics of the system undergoes a crossover between a collective-mode behavior and a central-peak one with increasing field; as $\Delta $ increases to 0.4, the dynamics of the system changes with increasing field from a central-peak regime to a disordered behavior which is intermediate between a central-peak one and a collective-mode one; then, as $\Delta $ approaches 1, the system shows a disordered behavior. In the cases of Gaussian disorder and double-Gaussian disorder, when the standard deviation of the random field $\sigma _{B}$ is small, a disordered behavior replaces the crossover as $\Delta $ increases. When $\sigma _{B}$ becomes large enough, the system shows only a disordered behavior regardless of the anisotropy parameter.
This work was supported by the National Natural Science Foundation of China under Grant No. 10775088, the Shandong Natural Science Foundation under Grant No. Y2006A05, and the Science Foundation of Qufu Normal University. One of the authors (Yin-Yang Shen) thanks Shu-Xia Chen, Fu-Wu Ma, Hong Li and Sha-Sha Li for useful discussions.
**Figure Captions**
Fig. 1 The spin autocorrelation functions where the external fields take the values $B_{1}=1.8$ with probability $p$ and $B_{2}=0.2$ with probability $(1-p)$. Panels (a1), (a2), (a3), (a4) correspond to $\Delta =0.01$, $0.1$, $0.4$ and $1.0$, respectively. The continued-fraction coefficients are presented in the insets.
Fig. 2 The corresponding spectral densities for the same parameters as in Fig. 1. Panels (b1), (b2), (b3), (b4) correspond to $\Delta =0.01$, $0.1$, $0.4$ and $1.0$, respectively.
Fig. 3 The spin autocorrelation functions and the spectral densities for external fields drawn from the Gaussian distribution with standard deviation $\sigma _{B}=0.3$. Panels (a1), (a2) correspond to $\Delta =0.1$; (b1), (b2) to $\Delta =0.4$; and (c1), (c2) to $\Delta =1.0$.
Fig. 4 The spin autocorrelation functions and the spectral densities for external fields drawn from the Gaussian distribution with standard deviation $\sigma _{B}=3.0$. Panels (a1), (a2) correspond to $\Delta =0.1$; (b1), (b2) to $\Delta =0.4$; and (c1), (c2) to $\Delta =1.0$.
Fig. 5 The spin autocorrelation functions for external fields drawn from the double-Gaussian distribution with mean values $B_{1}=1.8$ and $B_{2}=0.2$, occurring with probabilities $p$ and $(1-p)$. Panels (a1), (a2), (a3) correspond to $\sigma _{B}=0.3$ with $\Delta =0.1$, $0.4$, $1.0$, respectively; panels (b1), (b2), (b3) correspond to $\sigma _{B}=3.0$ with $\Delta =0.1$, $0.4$, $1.0$, respectively.
Fig. 6 The spectral densities for the same parameters as in Fig. 5. Panels (a1), (a2), (a3) correspond to $\sigma _{B}=0.3$ with $\Delta =0.1$, $0.4$, $1.0$, respectively; panels (b1), (b2), (b3) correspond to $\sigma _{B}=3.0$ with $\Delta =0.1$, $0.4$, $1.0$, respectively.
---
abstract: |
We consider cumulant moments (cumulants) of the thrust distribution using predictions of the full spectrum for thrust including ${\cal O}(\alpha_s^3)$ fixed order results, resummation of singular N$^3$LL logarithmic contributions, and a class of leading power corrections in a renormalon-free scheme. From a global fit to the first thrust moment we extract the strong coupling and the leading power correction matrix element $\Omega_1$. We obtain $\alpha_s(m_Z) = 0.1140 \,\pm\, (0.0004)_{\rm exp} \,\pm\, (0.0013)_{\rm hadr}
\,\pm \, (0.0007)_{\rm pert}$, where the $1$-$\sigma$ uncertainties are experimental, from hadronization (related to $\Omega_1$) and perturbative, respectively, and $\Omega_1=0.377 \,\pm\, (0.044)_{\rm exp} \,\pm\,
(0.039)_{\rm pert}\,{\rm GeV}$. The $n$-th thrust cumulants for $n\ge 2$ are completely insensitive to $\Omega_1$, and therefore a good instrument for extracting information on higher order power corrections, $\Omega_n^\prime/Q^n$, from moment data. We find $(\tilde\Omega_2^\prime)^{1/2} = 0.74 \,\pm\, (0.11)_{\rm exp} \,\pm\,
(0.09)_{\rm pert}\,{\rm GeV}$.
author:
- Riccardo Abbate
- Michael Fickinger
- 'André H. Hoang'
- Vicent Mateu
- 'Iain W. Stewart'
bibliography:
- 'thrust3.bib'
title: ' Precision Thrust Cumulant Moments at N${}^3$LL '
---
Introduction {#sec:intro}
============
The process $e^+e^-\to {\rm jets}$ plays an important role in precise determinations of $\alpha_s(m_Z)$, as well as in probing the nonperturbative dynamics of hadronization in jet production. A wealth of high precision data with percent level uncertainties is available for jet production in $e^+e^-$ collisions at the Z-pole, $Q=m_Z$, and with somewhat larger uncertainties at both lower and higher energies $Q$. For a review of classic work on $\alpha_s(m_Z)$ determinations using event shapes and other jet observables, the reader is referred to [@Kluth:2006bw]. Accurate predictions for event shapes are now available which include ${\cal O}(\alpha_s^3)$ corrections [@GehrmannDeRidder:2007bj; @GehrmannDeRidder:2007hr; @Weinzierl:2008iv; @Weinzierl:2009ms], a next-to-next-to-next-to-leading-log (N$^3$LL) resummation of large logarithms [@Becher:2008cf; @Chien:2010kc], and a high precision method developed for simultaneously incorporating field theory matrix elements for the power corrections [@Abbate:2010xh].
The majority of fits for $\alpha_s(m_Z)$ from event shapes $e$ make use of cross section distributions ${\mathrm{d}}\sigma/{\mathrm{d}}e$, in a region where nonperturbative effects enter as power corrections in $1/Q$ and the theoretical description is the most accurate. In our recent analysis [@Abbate:2010xh] for the event-shape variable thrust $\tau=1-T$ [@Farhi:1977sg], $$\begin{aligned}
\label{eq:Tdef}
T & \, = \,\mbox{max}_{\hat {\bf t}}\frac{\sum_i|\hat {\bf t}\cdot\vec{p}_i|}
{\sum_i|\vec{p}_i|}
\,,\end{aligned}$$ we obtained a precise determination of $\alpha_s(m_Z)$. Our theoretical description is based on Soft-Collinear Effective Theory (SCET) [@Bauer:2000ew; @Bauer:2000yr; @Bauer:2001ct; @Bauer:2001yt; @Bauer:2002nz], and has several advanced features, such as:
1. Matrix elements and nonsingular terms at order $\alpha_s^3$ using results from [@GehrmannDeRidder:2007bj]. Non-logarithmic terms in the hard function are included at order $\alpha_s^3$ as well.
2. Resummation of the singular logarithmic terms to all orders in $\alpha_s$ up to N${}^3$LL order.
3. Profile functions ($\tau$-dependent scales $\mu_J$, $\mu_S$, $R$, $\mu_{\rm ns}$) that correctly treat the peak region and account for the multijet boundary condition, ensuring that the resummed predictions merge smoothly into the known fixed order result in the multijet endpoint region. They allow an accurate theoretical description over the entire range $\tau \in [0,0.5]$.
4. Description of nonperturbative effects with field theory and a fit to a single nonperturbative matrix element of Wilson lines $\Omega_1$ in the tail region (where power corrections are described by an OPE).
5. Definition of $\Omega_1$ in a more stable Rgap scheme [@Hoang:2007vb; @Hoang:2008fs] rather than in $\overline{\rm MS}$. This ensures $\Omega_1$ and the perturbative cross section are free of ${\cal O}(\Lambda_{\rm QCD})$ renormalon ambiguities. An RGE is used to sum large logarithms in the perturbative renormalon subtractions [@Hoang:2008yj; @Hoang:2009yr]. The fit gives $\Omega_1$ with an accuracy of $16\%$.
6. QED final state corrections at ${\cal O}(\alpha)$ and NNLL (counting $\alpha\sim \alpha_s^2$); bottom mass corrections are included using a factorization theorem with log resummation; ${\cal O}(\alpha_s^2)$ axial-singlet terms arising from the large top-bottom mass splitting are included as well.
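For readers who want to experiment with the thrust definition in Eq. (\[eq:Tdef\]) directly, the maximization over $\hat{\bf t}$ can be approximated by a brute-force scan of candidate axes over the unit sphere. This is only an illustrative sketch for small toy events; the grid resolution and the example momenta are arbitrary choices, not part of the analysis or the algorithm used by the experiments:

```python
import numpy as np

def thrust(momenta, n_theta=200, n_phi=200):
    """Estimate T = max_t sum_i |t.p_i| / sum_i |p_i| by scanning
    candidate unit axes t-hat on a (theta, phi) grid."""
    p = np.asarray(momenta, dtype=float)            # (N, 3) three-momenta
    norm = np.linalg.norm(p, axis=1).sum()          # sum_i |p_i|
    th = np.linspace(0.0, np.pi, n_theta)
    ph = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    axes = np.stack([np.sin(TH) * np.cos(PH),       # candidate axes
                     np.sin(TH) * np.sin(PH),
                     np.cos(TH)], axis=-1)
    proj = np.abs(axes @ p.T).sum(axis=-1)          # sum_i |t.p_i| per axis
    return proj.max() / norm

# A perfect back-to-back dijet event saturates T = 1, i.e. tau = 1 - T = 0.
dijet = [(0.0, 0.0, 45.0), (0.0, 0.0, -45.0)]
print(round(thrust(dijet), 3))  # -> 1.0
```

An isotropic many-particle event instead pushes $T$ toward its minimum value $1/2$, i.e. $\tau$ toward the multijet endpoint $\tau_{\rm max}=1/2$.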
A two-parameter global fit in the tail of the thrust distribution gives [@Abbate:2010xh] $\alpha_s(m_Z) \, = \, 0.1135 \,\pm\, (0.0002)_{\rm
exp} \,\pm\, (0.0005)_{\rm hadr} \,\pm \, (0.0009)_{\rm pert}$ as well as $\Omega_1=0.323 \,\pm\, (0.009)_{\rm exp}\,\pm\,
(0.013)_{\rm \Omega_2}\pm\, (0.020)_{\rm \alpha_s(m_Z)} \,\pm \, (0.045)_{\rm
pert}$ GeV where $\Omega_1\equiv\Omega_1(R_\Delta,\mu_\Delta)$ is defined in the Rgap scheme at the scales $R_\Delta=\mu_\Delta=2\,{\rm GeV}$. For $\alpha_s$ the three uncertainties are the experimental uncertainty, hadronization uncertainty coming mainly from the determination of $\Omega_1$, and the perturbative theoretical uncertainty. This result for $\alpha_s$ is one of the most precise in the literature. It is also one of the lowest, being $3.9\,\sigma$ away from the 2009 world average [@Bethke:2009jm] and $4.0\,\sigma$ from the 2011 world average [@PDG:2012]. For a detailed discussion of $\alpha_s(m_Z)$ determinations see Ref. [@Bethke:2011tr]. The small value of $\alpha_s(m_Z)$ is directly connected to the non-negligible correction from $\Omega_1$ [@Abbate:2010xh], whose fit value is of natural size $\Omega_1\sim \Lambda_{\rm QCD}$. Given the discrepancy, further tests of the theoretical predictions for event shapes are warranted. In this paper we will do so using experimental moments involving the thrust variable.
The property of the N$^3$LL$\,+\,{\cal O}(\alpha_s^3)$ predictions for ${\rm d}\sigma/{\rm d}\tau$ in Ref. [@Abbate:2010xh] that we will exploit is that they are valid in both the dijet and tail regions, where singular and large logarithmic terms in need of resummation arise, and in the multijet region, where fixed order results without log resummation should be used. That is, they are valid for all values of $\tau$ (an improvement over earlier results at this order). Important ingredients are: the inclusion of the nonsingular terms, important away from the peak region; the use of profile functions that turn off resummation in the far-tail region; and the inclusion of a soft function, which is necessary to describe the peak in the dijet region, where nonperturbative effects are ${\cal O}(1)$.
We will use the full $\tau$ range results to analyze moments $M_n$ of the thrust distribution in $e^+e^-\to {\rm jets}$, $$\begin{aligned}
\label{eq:Mndef}
M_n = \dfrac{1}{\sigma} \int_0^{\tau_{\rm max}=1/2}\! {\rm d}\tau \ \tau^n\
\frac{{\rm d}\sigma}{{\rm d}\tau}\,.\end{aligned}$$ Unlike for tail fits, the entire physical $\tau$ range contributes, providing sensitivity to a different region of the spectrum. Experimental results are available for many values of $Q$, and the analysis of systematic uncertainties is to a large extent independent from that for the binned distributions. Thus the outcome for a fit of data for the first moment $M_1$ to $\alpha_s(m_Z)$ and $\Omega_1$ serves as an important cross check of the results obtained in Ref. [@Abbate:2010xh]. The $M_n$ moments are also not sensitive to large logarithms, and hence provide a non-trivial check on whether the N$^3$LL$\,+\,{\cal
O}(\alpha_s^3)$ full spectrum results, which contain a summation of logarithms of $\tau$ with a substantial numerical effect for small $\tau$ values, can reproduce this property. We explore this issue both for central values and for theory uncertainty estimates.
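As a minimal numerical illustration of Eq. (\[eq:Mndef\]), the moments of any normalized spectrum on $\tau\in[0,1/2]$ follow from direct quadrature. The shape below is a hypothetical toy stand-in peaked at small $\tau$, not the N$^3$LL$\,+\,{\cal O}(\alpha_s^3)$ prediction:

```python
import numpy as np

# Toy stand-in for (1/sigma) dsigma/dtau: a peaked shape on [0, 1/2]
# normalized to unit area. The functional form is an arbitrary choice.
tau = np.linspace(0.0, 0.5, 20001)
dtau = tau[1] - tau[0]
shape = tau * np.exp(-tau / 0.05)     # peaks near tau ~ 0.05
shape /= shape.sum() * dtau           # enforce M_0 = 1

def moment(n):
    """M_n = int_0^{1/2} dtau tau^n (1/sigma) dsigma/dtau."""
    return (tau**n * shape).sum() * dtau

M1, M2 = moment(1), moment(2)
print(M1, M2 - M1**2)  # first moment and the variance M_2' = M_2 - M_1^2
```

For this toy shape (a Gamma distribution truncated at $\tau=1/2$) the first moment comes out near $0.1$ and the variance near $0.005$.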
The second purpose of this work is to discuss the structure of higher order power corrections in thrust moments. We find that cumulant moments $M_n^\prime$ (cumulants) are very useful, since they allow for a cleaner separation of the subleading nonperturbative matrix elements compared to the $M_n$ moments of Eq. (\[eq:Mndef\]). Cumulants include the variance $M_2^\prime$ and skewness $M_3^\prime$, and we will consider the first five: $$\begin{aligned}
\label{eq:variance-skewness}
M_1^\prime\,=&\,\,M_1\,, \\
M_2^\prime\,=&\,\,M_2\,-\,M_1^2\,, \nonumber \\
M_3^\prime\,=&\,\,M_3\,-\,3\,M_2\,M_1\,+\,2\,M_1^3\,,\nonumber\\
M_4^\prime\,=&\,\,M_4-4\,M_3\,M_1-3\,M_2^2+12\,M_2\,M_1^2-6\,M_1^4\,,
\nonumber\\
M_5^\prime\,=&\,\,M_5-5\,M_4\,M_1-10\,M_3\,M_2+20\,M_3\,M_1^2\nonumber\\
&+30\,M_2^2M_1-60\,M_1^3\,M_2+24\,M_1^5\,.\nonumber\end{aligned}$$ In the leading order thrust factorization theorem the power correction matrix elements for the moments $M_n$ are called $\Omega_m$ while for the cumulants $M^\prime_n$ they are called $\Omega^\prime_m$. (The $\Omega^\prime_m$ are also related to the $\Omega_m$ by [Eq. ]{} with $M_n\to
\Omega_n$.) In particular, the invariance of the cumulants to shifts in $\tau$ implies that the $M_{n\ge 2}^\prime$ moments are completely insensitive to the leading thrust power correction parameter $\Omega_1$, and hence can provide non-trivial information on the higher order power corrections which enter as $\Omega_n^\prime/Q^n$ and as $1/Q^2$ power corrections from terms beyond the leading factorization theorem. In contrast, for each $M_{n\ge 2}$ there is a term $\sim \alpha_s\Omega_1/Q$ that for larger $Q$s dominates over the $\Omega_m/Q^m$ terms.[^1]
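The explicit formulas in Eq. (\[eq:variance-skewness\]) are generated by the standard moment-to-cumulant recursion $M_n^\prime = M_n - \sum_{k=1}^{n-1}\binom{n-1}{k-1}\,M_k^\prime\,M_{n-k}$, which is easy to check numerically. A short sketch; the unit-mean exponential distribution used for validation, with $M_n=n!$ and $M_n^\prime=(n-1)!$, is a textbook test case:

```python
from math import comb

def cumulants(M):
    """Convert moments M = [M_1, ..., M_n] (with M_0 = 1 implicit)
    into cumulants [M'_1, ..., M'_n] via the standard recursion
    M'_n = M_n - sum_{k=1}^{n-1} C(n-1, k-1) M'_k M_{n-k}."""
    K = []
    for n in range(1, len(M) + 1):
        Kn = M[n - 1] - sum(comb(n - 1, k - 1) * K[k - 1] * M[n - k - 1]
                            for k in range(1, n))
        K.append(Kn)
    return K

# Moments of a unit-mean exponential are M_n = n!; its cumulants are (n-1)!.
print(cumulants([1, 2, 6, 24, 120]))  # -> [1, 1, 2, 6, 24]
```

Expanding the recursion for $n\le 5$ reproduces the five formulas quoted above.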
Review of Experiments and Earlier Literature
--------------------------------------------
Dedicated experimental analyses of thrust moments have been reported by various experiments: JADE [@MovillaFernandez:1997fr] measured the first moment at $Q=35,\,44$ GeV, and in [@Pahl:2008uc] reported measurements of the first five moments at $Q=14$, $22$, $34.6$, $35$, $38.3$, $43.8$ GeV; OPAL [@Abbiendi:2004qz] measured the first five moments at $Q=91$, $133$, $177$, $197$ GeV, and there is an additional measurement of the first moment at $Q=161$ GeV [@Ackerstaff:1997kk]; ALEPH [@Heister:2003aj] measured the first moment at $Q=91.2$, $133$, $161$, $172$, $183$, $189$, $196$, $200$, $206$ GeV; DELPHI [@Abdallah:2003xz] has measurements of the first moment at $Q=45.2$, $66$, $76.3$ GeV, measurements of the first three moments at $Q=183$, $189$, $192$, $196$, $200$, $202$, $205$, $207$ GeV [@Abdallah:2004xe], and at $Q=91.2$, $133$, $161$, $172$, $183$ GeV [@Abreu:1999rc]; L3 [@Acciarri:2000hm] measured the first two moments at $Q=91.2$ GeV and at other center-of-mass energies, which are superseded by the ones in [@Achard:2004sv] at $Q=41.4$, $55.3$, $65.4$, $75.7$, $82.3$, $85.1$, $130.1$, $136.1$, $161.3$, $172.3$, $182.8$, $188.6$, $194.4$, $200.2$, $206.2$ GeV; TASSO measured the first moment at $Q=14$, $22$, $35$, $44$ GeV [@Braunschweig:1990yd]; and AMY measured the first moment at $Q=55.2$ GeV [@Li:1989sn]. Finally, the variance and skewness have been explicitly measured by DELPHI [@Abreu:1999rc] at $Q=133$, $161$, $172$, $183$ GeV; and OPAL [@Ackerstaff:1997kk] at $Q=161$ GeV. All of the experimental moments will be used in our fits, with the exception of the results in Ref. [@Pahl:2008uc] and data with $Q\le 22\,{\rm GeV}$ where our treatment of $b$-quark mass effects may not suffice.
In principle the JADE results in Ref. [@Pahl:2008uc] supersede the earlier analysis of this data reported in Ref. [@MovillaFernandez:1997fr]. In the more recent analysis the contribution of primary $b\bar{b}$ events has been subtracted using Monte Carlo generators.[^2] Since the theoretical precision of these generators is significantly worse than our N$^3$LL$\,+\,{\cal
O}(\alpha_s^3)$ treatment of massless quark effects and our NNLL+${\cal
O}(\alpha_s)$ treatment of $m_b$-dependent corrections, it is not clear how our code should be modified consistently to account for these subtractions. Comparing the old versus new JADE data at $Q=44\,{\rm GeV}$ one finds $M_1=0.0860 \pm 0.0014$ versus $M_1=0.0807\pm 0.0016$. This corresponds to a $3.4\,\sigma$ change assuming 100% correlated uncertainties (or a $2.6\,\sigma$ change with uncorrelated uncertainties). In our analysis we find that the older JADE data provides more consistent results when employed in a combined fit with data from the other experiments (related to smaller $\chi^2$ values). For this reason our default dataset incorporates only the older JADE moment data. We will report on the change that would be induced by using the new JADE data if we simply ignore the fact that the $b\bar b$ events were removed.
Event shape moments have also been extensively studied in the theoretical literature. The ${\cal O}(\alpha_s^3)$ QCD corrections for event shape moments have been calculated in Refs. [@GehrmannDeRidder:2009dp; @Weinzierl:2009yz]. The leading $\Lambda/Q$ power correction to the first moment of event shape distributions was first studied in [@Dokshitzer:1995zt; @Akhoury:1995sp; @Akhoury:1995fb; @Nason:1995hd], often in the context of renormalons (see [@Korchemsky:1994is], and [@Beneke:1998ui] for a review). Ref. [@Gardi:2000yh] made a renormalon analysis of the second moment of the thrust distribution, finding that the leading renormalon contribution is not $1/Q^2$ but rather $1/Q^3$. Hadronization effects have also been frequently considered in the framework of the dispersive model for the strong coupling [@Dokshitzer:1995zt; @Dokshitzer:1995qm; @Dokshitzer:1998pt][^3]. In this approach an IR cutoff $\mu_I$ is introduced and the strong coupling constant below the scale $\mu_I$ is replaced by an effective coupling $\alpha_{\rm eff}$ such that perturbative infrared effects coming from scales below $\mu_I$ are subtracted. In the dispersive model the term $\mu_I\alpha_0$ is the analog of the QCD matrix element $\Omega_1$ that is derived from the operator product expansion (OPE). Since the dispersive model contains only one nonperturbative parameter, it has no analogs of the independent nonperturbative QCD matrix elements $\Omega_{n\ge 2}$ of the operator product expansion. Thus measurements of $\Omega^\prime_{n\ge 2}$ can be used as a test for additional nonperturbative physics that goes beyond this framework.
The dispersive model has been used in Refs. [@Biebel:2001dm; @Abbiendi:2004qz; @Pahl:2009aa] together with ${\cal O}(\alpha_s^2)$ fixed order results to analyze event shape moments, fitting simultaneously to $\alpha_s(m_Z)$ and $\alpha_0$. Recently these analyses have been extended to ${\cal O}(\alpha_s^3)$ in Ref. [@Gehrmann:2009eh], based on code for $n_f=5$ massless quark flavors, using data from [@Abbiendi:2004qz; @Pahl:2008uc] and fitting to the first five moments for several event-shape variables. Our numerical analysis only considers thrust moments, but with a global dataset from all available experiments. A detailed comparison with Ref. [@Gehrmann:2009eh] will be made at appropriate points in the paper. Theoretically our analysis goes beyond their work by using a formalism that has no large logarithms in the renormalon subtraction, includes the analog of the “Milan factor” [@Dokshitzer:1997iz; @Dokshitzer:1998pt] in our framework at ${\cal
O}(\alpha_s^3)$ (one higher order than [@Gehrmann:2009eh]), and incorporates higher order power corrections beyond the leading shift from $\Omega_1$. We also test the effect of including resummation.
Outline
-------
This article is organized as follows: We start out by defining moments and cumulants of distributions, and their respective generating functions in Sec. \[sec:bsg\], where we also discuss the leading and subleading power corrections of thrust moments in an OPE framework. In Sec. \[sec:results\] we present and discuss our main results for $\alpha_s(m_Z)$ from fits to the first thrust moment $M_1$. In Sec. \[sec:comparison\] we analyze higher moments $M_{n\ge 2}$. Sec. \[sec:power-data\] contains an analysis of subleading power corrections from fits to cumulants $M_{n\ge 2}^\prime$ obtained from the moment data. Our conclusions are presented in Sec. \[eq:conclusions\].
Formalism {#sec:bsg}
=========
Various Moments of a Distribution
---------------------------------
The moments of a probability distribution function $p(k)$ are given by $$\begin{aligned}
M_n=\langle k^n \rangle=\int\! {\rm d}k\: p(k)\,k^n.\end{aligned}$$ The characteristic function is the generator of these moments and is defined as the Fourier transform $$\begin{aligned}
\tilde p(y)=\langle e^{-iky} \rangle=\!\int\! {\rm
d}k\,p(k)\,e^{-iky}=\sum_{n=0}^\infty\,\frac{(-iy)^n}{n!}\,M_n,\end{aligned}$$ with $M_0=1$. The logarithm of $\tilde p(y)$ generates the cumulants (or connected moments) $M^\prime_n$ of the distribution $$\begin{aligned}
\ln\,\tilde p(y)=\sum_{n=1}^{\infty}\frac{(-iy)^n}{n!}M^\prime_n\,,
\label{eq:cumulantdef}\end{aligned}$$ and is called the cumulant generating function. For $n\ge 2$ the cumulants have the property of being invariant under shifts of the distribution. Replacing $p(k)\to
p(k-k_0)$ takes $\tilde p(y) \to e^{-iy k_0}\, \tilde p(y)$, which shifts $M_1^\prime\to M_1^\prime + k_0$ while leaving all $M_{n\ge 2}^\prime$ unchanged. Writing $$\begin{aligned}
\label{eq:relateM}
\sum_{N=0}^\infty\,\frac{(-iy)^N}{N!}\,M_N
&=\exp\bigg[\sum_{j=1}^{\infty}\frac{(-iy)^j}{j!}M^\prime_j\bigg]
\nonumber\\
&=\prod_{j=1}^{\infty}\sum_{R=0}^{\infty}\frac{(-iy)^{jR}}{R!}
\bigg(\frac{M^\prime_j}{j!}\bigg)^R\,,\end{aligned}$$ one can derive an all-$n$ relation between moments and cumulants of a distribution: $$\begin{aligned}
\label{eq:MNpartition}
M_N=N!\sum_{i=1}^{p(N)} \prod_{j=1}^{N}
\frac{(M_j^\prime)^{\kappa_{ij}}}{\kappa_{ij}!\,(j!)^{\kappa_{ij}} }\, .\end{aligned}$$ Here the $\kappa_{ij}$ are non-negative integers which determine a partition of the integer $N$ through $\sum_{j=1}^N j\, \kappa_{ij} = N$, and $p(N)$ is the number of unique partitions of $N$. (A partition of $N$ is a set of integers which sum to $N$. Here $\kappa_{ij}$ is the number of times the value $j$ appears as a part in the $i$’th partition, and corresponds to $R$ in [Eq. ]{}.) As an example we quote the relation for $N=4$ which has five partitions, $p(4)=5$, giving $$\begin{aligned}
M_4=&M_4^\prime+4\,M_3^\prime\, M_1^\prime+3\,M_2'^2+6\,M_2^\prime
\,M_1'^2+M_1'^4\,.\end{aligned}$$ In the fourth partition, $i=4$, we have $\kappa_{41}=2$, $\kappa_{42}=1$, and $\kappa_{43}=\kappa_{44}=0$, and the factorials give the prefactor of $6$. [Eq. ]{} gives the moments $M_i$ in terms of the cumulants $M_i^\prime$, and these relations can be inverted to yield the formulas quoted for the cumulants in [Eq. ]{}. $M_2^\prime\ge 0$ is the well known variance of the distribution. Higher order cumulants can be positive or negative. The skewness of the distribution $M_3^\prime$ provides a measure of its asymmetry, and we expect $M_3^\prime>0$ for thrust with its long tail to the right of the peak. The kurtosis $M_4^\prime$ provides a measure of the “peakedness” of the distribution, where $M_4^\prime>0$ for a sharper peak than a Gaussian.[^4]
The shift independence of the cumulants $M_n^\prime$ makes them an ideal basis for studying event shape moments. In particular, since the leading ${\cal O}(\Lambda_{\rm QCD}/Q)$ power correction acts approximately as a shift of the event shape distribution [@Dokshitzer:1995zt; @Dokshitzer:1995qm; @Dokshitzer:1997ew; @Lee:2006fn; @Lee:2006nr], we can anticipate that $M_{n\ge 2}^\prime$ will be more sensitive to higher order power corrections. We will quantify this statement in the next section by using factorization for the thrust distribution to derive factorization formulae for the thrust cumulants in the form of an operator product expansion.
Thrust moments {#sec:thrust-moments}
--------------
We will first make use of the leading order factorization theorem, ${\mathrm{d}}\sigma/{\mathrm{d}}\tau=\int {\rm d}p\, ({\mathrm{d}}\hat\sigma/{\mathrm{d}}\tau)(\tau-p/Q)F_\tau(p)$, which is valid for all $\tau$. It separates perturbative ${\mathrm{d}}\hat\sigma/{\rm
d}\tau$ and nonperturbative $F_\tau(p)$ contributions to all orders in $\alpha_s$ and $\Lambda_{\rm QCD}/(Q\,\tau)$, but is only valid at leading order in $\Lambda_{\rm QCD}/Q$. For this factorization theorem we follow Ref. [@Abbate:2010xh] (except that here we denote the nonperturbative soft function by $F_\tau$).[^5] We will then extend our analysis to parameterize corrections to all orders in $\Lambda_{\rm QCD}/Q$.
Taking moments of the leading order ${\mathrm{d}}\sigma/{\mathrm{d}}\tau$ gives[^6] $$\begin{aligned}
\label{eq:Mnfull}
M_n &=\int_0^{\tau_{\rm m}}{\rm d}\tau\,\tau^n\,
\int_0^{Q\tau}\,{\rm d}p\,\frac{1}{\hat\sigma}\,
\frac{{\rm d}\hat{\sigma}}{{\rm d}\tau}
\Big(\tau-\frac{p}{Q}\Big)\,F_{\tau}(p)
\\
&=\int_0^{\infty} \!\!\!\! {\rm d}\tau\, {\rm d}p\:
\theta\Big(\tau_m \!-\!\tau\!-\!\frac{p}{Q}\Big)\,
\Big(\tau+\frac{p}{Q}\Big)^n
\frac{1}{\hat\sigma}\,\frac{{\rm d}\hat{\sigma}}{{\rm
d}\tau}(\tau)\,F_{\tau}(p)
{\nonumber}\\
&= \bigg[\sum_{\ell=0}^n\,\binom{n}{\ell}\,\Big(\frac{2}{Q}\Big)^{n-\ell}\,
\hat{M}_{\ell}\: \Omega_{n-\ell} \bigg] - E_n^{(A)} - E_n^{(B)} \,,
{\nonumber}\end{aligned}$$ where $\hat \sigma$ is the perturbative total hadronic cross section and all hatted quantities are perturbative. In the last line of [Eq. ]{} we used $\theta(\tau_m-\tau-p/Q)=\theta(\tau_m-\tau)
[1- \theta(p/Q-\tau_m) -\theta(\tau_m-p/Q)\,\theta(p/Q+\tau-\tau_m)]$ to obtain the three terms. In [Eq. ]{} the term in square brackets is our desired result containing the perturbative $\hat M_n$ and nonperturbative $\Omega_n$ moments $$\begin{aligned}
\label{eq:Mnhat}
\hat{M}_n=&\int_0^{\tau_{\rm m}}{\rm d}\tau\:\tau^n\,\frac{1}{\sigma}\,
\frac{{\rm d}\hat{\sigma}}{{\rm d}\tau}\,(\tau)
\,,
& \hat M_0 & = 1
\,,\\
\Omega_n=&\int_0^\infty\!\! {\rm d}p\ \Big(\frac{p}{2}\Big)^n\,F_{\tau}(p)
\,,
& \Omega_0 & = 1
\,. {\nonumber}\end{aligned}$$ The small “error” terms in [Eq. ]{} are given by $$\begin{aligned}
E_n^{(A)} &= \sum_{\ell=0}^n \binom{n}{\ell}
\Big(\frac{2}{Q}\Big)^{n-\ell} \hat{M}_{\ell}
\int_{Q\,\tau_m}^{\infty}\!\!\!\! {\mathrm{d}}p\:
\Big(\frac{p}{2}\Big)^{n-\ell} F_\tau(p)
, \\
E_n^{(B)} &=\!\int_0^{\tau_m}\!\! {\mathrm{d}}\tau\!
\int^{Q\tau_m}_{Q(\tau_m-\tau)}\!\!{\mathrm{d}}p\:
\Big(\tau+\frac{p}{Q}\Big)^n
\frac{1}{\hat\sigma} \frac{{\rm d}\hat{\sigma}}{{\rm d}\tau}(\tau) F_{\tau}(p)
\,.{\nonumber}\end{aligned}$$ For the contribution $E_n^{(A)}$ the $p$-integral is smaller than $10^{-30}$ for any $Q$ for the first five moments, and hence $E_n^{(A)}\simeq 0$. This occurs because $F_{\tau}(p)$ falls off exponentially for $p\gtrsim 2\,\Omega_1 \sim
2\,\Lambda_{\rm QCD}$ [@Hoang:2007vb; @Ligeti:2008ac], and hence values $p\ge
Q\,\tau_m=Q/2$ are already far out on the exponential tail. The $E_n^{(B)}$ term gives a small contribution because the integral is suppressed by either $F_{\tau}$ or ${\mathrm{d}}\hat\sigma/{\mathrm{d}}\tau$: near the endpoint $\tau\sim\tau_m - 2\,\Lambda_{\rm QCD}/Q$ the $p$-integration is not restricted and $F_\tau(p)\sim 1$, but ${\mathrm{d}}\hat\sigma/{\mathrm{d}}\tau$ is highly suppressed. For smaller $\tau$ the $p$-integration is restricted and the exponential tail of $F_{\tau}(p)$ suppresses the contribution. We have checked numerically that at $Q=91.2\,$GeV \[$Q=35\,$GeV\], for the first moment the relative contribution of $E_1^{(B)}$ compared to the term in square brackets in [Eq. ]{} is $\mathcal{O}(10^{-7})\, [\,\mathcal{O}(10^{-6})\,]$, while for the fifth moment $E_5^{(B)}$ it is $\mathcal{O}(10^{-6})\,[\,\mathcal{O}(10^{-4})\,]$. This suppression does not rely on the model used for $F_\tau(p)$. Thus $E_n^{(B)}$ can also be safely neglected.
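The size of the $p$-integral in $E_n^{(A)}$ can be gauged with a toy exponential tail $F_\tau(p)\sim e^{-p/(2\Omega_1)}$ and an assumed $\Omega_1\simeq 0.35\,$GeV; both choices are purely illustrative, not the model used in the fits:

```python
import math

Omega1 = 0.35                    # assumed leading power correction, in GeV
for Q in (35.0, 91.2):           # typical JADE and Z-pole energies
    # survival fraction of the toy exponential tail beyond p = Q*tau_m = Q/2
    tail = math.exp(-(Q / 2.0) / (2.0 * Omega1))
    print(Q, tail)
```

Already for $Q=35\,$GeV the tail is ${\cal O}(10^{-11})$, and at the Z-pole it is ${\cal O}(10^{-29})$, consistent with $E_n^{(A)}\simeq 0$.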
Within the theoretical precision we conclude that the leading factorization theorem for the distribution yields an operator product expansion that separates perturbative and nonperturbative corrections in the moments $$\begin{aligned}
\label{eq:Mnfact}
M_n = &\sum_{\ell=0}^n\,\binom{n}{\ell}\,\Big(\frac{2}{Q}\Big)^{n-\ell}\,
\hat{M}_{\ell}\: \Omega_{n-\ell}\,.\end{aligned}$$ For $M_n$ the terms that numerically dominate are $\hat M_n$ and $\hat
M_{n-1}\Omega_1/Q$. However for the cumulants $M_n^\prime$ there are cancellations, and [Eq. ]{} does not suffice due to our neglect so far of $(\Lambda_{\rm QCD}/Q)^j$ suppressed terms in the factorization expression for the thrust distribution.
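The cancellation can be made explicit for the variance by inserting Eq. (\[eq:Mnfact\]) for $n=1,2$ into $M_2^\prime = M_2 - M_1^2$: $$\begin{aligned}
M_2^\prime &= \hat M_2 + \frac{4}{Q}\,\hat M_1\,\Omega_1
+ \frac{4}{Q^2}\,\Omega_2
- \Big(\hat M_1 + \frac{2}{Q}\,\Omega_1\Big)^{2}
\nonumber\\
&= \big(\hat M_2 - \hat M_1^2\big)
+ \frac{4}{Q^2}\,\big(\Omega_2 - \Omega_1^2\big)\,,\end{aligned}$$ so the $\hat M_1\,\Omega_1/Q$ cross terms cancel and the leading power correction to $M_2^\prime$ is the ${\cal O}(1/Q^2)$ combination $\Omega_2-\Omega_1^2$. This is why the $(\Lambda_{\rm QCD}/Q)^j$ suppressed terms neglected so far must be restored.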
To rectify this we parameterize the $(\Lambda_{\rm QCD}/Q)^j$ power corrections by a series of power suppressed nonperturbative soft functions, $\Lambda^{j-1}
F_{\tau,j}(p/\Lambda)\sim \Lambda_{\rm QCD}^{j-1}$. Here $\Lambda^{-1}
F_{\tau,0}(p/\Lambda)=F_\tau(p)$ is the leading soft function from [Eq. ]{}. We introduced the parameter $\Lambda=400\,{\rm MeV}\sim \Lambda_{\rm QCD}$ to track the dimension of these subleading soft functions. This parameterization is motivated by the fact that subleading factorization results can in principle be derived with SCET [@Lee:2004ja], and at each order in the power expansion will yield new soft function matrix elements.
Both the factorization analysis and calculation of cumulants is simpler in Fourier space, so we let $$\begin{aligned}
\sigma(y)\equiv&\int{\mathrm{d}}\tau\,e^{-iy\tau}\,\frac{{\mathrm{d}}\sigma}{{\mathrm{d}}\tau}(\tau)
\,,\\
F_{\tau,j}(z\,\Lambda)\equiv&\int \frac{{\mathrm{d}}p}{\Lambda}\,
e^{-iz p}\,F_{\tau,j}\left( \frac{p}{\Lambda}\right)
\,,\nonumber\end{aligned}$$ and likewise for the leading power partonic cross section ${\mathrm{d}}\hat\sigma/{\mathrm{d}}\tau(\tau) \to \hat\sigma_0(y)$. The factorization-based formula for thrust is then $$\begin{aligned}
\label{eq:sublead_fact_thm}
\frac{1}{\sigma}\,\sigma (y)\,&=
\frac{1}{\hat \sigma}\sum_{j=0}^\infty
\Big(\frac{\Lambda}{Q}\Big)^j\,
\hat{\sigma}_j(y)\,
F_{\tau,j}\Big(\frac{y\,\Lambda}{Q}\Big)\,,\end{aligned}$$ where $\hat \sigma_{j>0}(y)$ accounts for perturbative corrections in the $(\Lambda_{\rm QCD}/Q)^j$ power correction. The $j=0$ term is equivalent to the result used in [Eq. ]{}, $F_{\tau}(p)=\Lambda F_{\tau,0}(p/\Lambda)$, and the normalization condition for the leading nonperturbative soft function is $F_{\tau,0}(z=0)=1$. The terms in [Eq. ]{} beyond $j=0$ are schematic since in reality they may involve convolutions in more variables in the nonperturbative soft functions (as observed in the subleading $b\to
s\,\gamma$ factorization theorem results [@Bauer:2001mh; @Bauer:2002yu; @Leibovich:2002ys; @Lee:2004ja; @Bosch:2004cb; @Beneke:2004in]). Nevertheless the scaling is correct, and [Eq. ]{} will suffice for our analysis where we only seek to classify how various power corrections could enter higher moments or cumulants.
The identities $\sigma(y=0)/\sigma=1$ and $\hat\sigma_0(y=0)/\hat\sigma =1$ together with [Eq. ]{} imply $$\begin{aligned}
\label{eq:propsubleading}
F_{\tau, j}(y=0)=0\,,\qquad {\rm for }\,\, j\geq 1\,.\end{aligned}$$ Using the Fourier-space cross section the moments are $$\begin{aligned}
\label{eq:Mny}
M_n\,=&\,\,i^n\,\frac{{\rm d}^n}{{\rm d}y^n}
\bigg[ \frac{1}{\sigma}\,{\sigma}(y)\bigg]_{y=0}
\\
=&\,i^n\,\frac{{\rm d}^n}{{\rm d}y^n}\bigg[
\frac{1}{\hat \sigma}\sum_{j=0}^\infty \hat{ \sigma}_j(y)\,
\Big(\frac{\Lambda}{Q}\Big)^j\,{F}_{\tau,j}\Big(\frac{y\,\Lambda}{Q}\Big)
\bigg]_{y=0}
\nonumber\\
=&\sum_{j=0}^{\infty}\Big(\frac{1}{Q}\Big)^{j}\,
\sum_{\ell=0}^n\binom{n}{\ell}\,\hat{M}_{n-\ell,j}\,
\Big(\frac{2}{Q}\Big)^{\ell}\,\Omega_{\ell,j}
\,, {\nonumber}\end{aligned}$$ which extends the OPE in [Eq. ]{} to parameterize the $(\Lambda_{\rm
QCD}/Q)^j$ power corrections. Here the perturbative and nonperturbative moments are defined as $$\begin{aligned}
\label{eq:MnjOnj}
\hat M_{n,j}\,=&\,\,i^n\,\frac{{\rm d}^n}{{\rm d}y^n}
\bigg[ \frac{1}{\hat\sigma}\,\hat {\sigma}_j(y)\bigg]_{y=0}
\,,\nonumber\\
\Omega_{n,j}\,=&\,\,\frac{i^n}{2^n}\,
\frac{{\rm d}^n}{{\rm d}z^n}
\bigg[\Lambda^j\,{F}_{\tau,j}\big(z\,\Lambda\big)\bigg]_{z=0}
\,,\end{aligned}$$ where $\hat M_{n,j}$ is a dimensionless series in $\alpha_s(\mu)$ and $\Omega_{n,j}\sim \Lambda_{\rm QCD}^{n+j}$. In order for $\hat M_{n,j}$ to exist it is crucial that our $\hat \sigma_j(y)$ and its derivatives do not contain $\ln(y)$ dependence in the $y\to 0$ limit at any order in $\alpha_s$. In $\tau$-space the perturbative coefficients have support over a finite range, $\tau\in [0,1/2]$, and $$\begin{aligned}
\label{eq:sigyexist}
\hat\sigma_j(y) &= \int_0^{1/2}\!\! {\mathrm{d}}\tau\: e^{-i\tau y}\: \hat\sigma_j(\tau)
\,. \end{aligned}$$ Therefore the existence of $\int_0^{1/2} {\mathrm{d}}\tau\: \hat \sigma_j(\tau)$ implies a well defined Taylor series in $y$ under the integrand in [Eq. ]{}, and hence the existence of $\hat M_{n,j}$. This integral is the total perturbative cross section for $j=0$. From [Eq. ]{} we have $\Omega_{0,j>0}=0$, and furthermore $\Omega_{n,0}=\Omega_n$ and ${\hat M}_{n,0}={\hat M}_n$.
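The correspondence between Fourier-space derivatives and $\tau$-moments can be checked symbolically. The following sketch is our own illustration (the flat toy distribution on $[0,1/2]$ is not the physical spectrum); it verifies $M_n = i^n\,{\rm d}^n/{\rm d}y^n[\sigma(y)/\sigma]_{y=0}$ against the directly computed $\tau$-moments:

```python
import sympy as sp

tau, y = sp.symbols('tau y', real=True)

# Toy partonic distribution: flat on [0, 1/2] (purely illustrative)
norm = sp.integrate(1, (tau, 0, sp.Rational(1, 2)))          # "total cross section" = 1/2

# Fourier transform sigma(y)/sigma, expanded around y = 0 (term-by-term integration)
integrand = sp.series(sp.exp(-sp.I * tau * y), y, 0, 5).removeO()
sigma_y = sp.expand(sp.integrate(integrand, (tau, 0, sp.Rational(1, 2))) / norm)

for n in range(1, 4):
    # M_n = i^n d^n/dy^n [sigma(y)/sigma] at y = 0  <=>  i^n * n! * (y^n coefficient)
    Mn_fourier = sp.simplify(sp.I**n * sp.factorial(n) * sigma_y.coeff(y, n))
    Mn_direct = sp.integrate(tau**n, (tau, 0, sp.Rational(1, 2))) / norm
    assert sp.simplify(Mn_fourier - Mn_direct) == 0
```

Both definitions agree order by order, as they must for any distribution with finite support.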
For the first moment, [Eq. ]{} yields $$\begin{aligned}
\label{eq:M1final}
M_1\, &= \hat M_1\: +\: \frac{2\,\Omega_1}{Q} \:
+ \: \sum_{j=0}^\infty \hat M_{0,1+j} \frac{2\, \Omega_{1,1+j}}{Q^{2+j} } \,,\end{aligned}$$ where the first two terms are determined by the leading order factorization theorem, while the last term identifies the scaling of contributions from $(\Lambda_{\rm QCD}/Q)^{2+j}$ power corrections. Two properties of [Eq. ]{} will be relevant for our analysis: first, there is no perturbative Wilson coefficient for the leading $2\,\Omega_1/Q$ power correction; and second, terms from beyond the leading factorization theorem only enter at ${\cal
O}(\Lambda_{\rm QCD}^2/Q^2)$ and beyond. For higher order moments, $n\ge 2$, we have $$\begin{aligned}
\label{eq:MnOPE}
M_n &= \hat M_n+ \frac{2\,n\,\Omega_1}{Q}\,\hat M_{n-1}
+\frac{n(n-1)\Omega_2}{Q^2}\,\hat M_{n-2}\,\nonumber\\
&+\frac{2\,n\,\Omega_{1,1}}{Q^2}\hat
M_{n-1,1}+\mathcal{O}\Big(\frac{1}{Q^3}\Big)\,.\end{aligned}$$
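At leading power the binomial tower in the moment expansion is simply the statement that for a shift convolution, $\tau = \hat\tau + 2k/Q$ with independent partonic and soft variables, the moments of the sum expand into products of partonic and soft moments. A quick exact check with toy discrete distributions (all numbers hypothetical, chosen only so that the arithmetic is exact):

```python
from fractions import Fraction
from math import comb

# Toy discrete distributions: partonic thrust values tau_hat with weights,
# and soft momenta k (in GeV) with weights. Hypothetical numbers.
partonic = {Fraction(1, 10): Fraction(1, 2), Fraction(3, 10): Fraction(1, 2)}
soft = {Fraction(1, 2): Fraction(3, 4), Fraction(2): Fraction(1, 4)}
Q = Fraction(90)  # GeV

def moment(dist, n):
    return sum(w * x**n for x, w in dist.items())

def full_moment(n):
    # Moments of tau = tau_hat + 2k/Q for independent tau_hat and k
    return sum(wp * ws * (tp + 2 * k / Q)**n
               for tp, wp in partonic.items() for k, ws in soft.items())

for n in range(1, 5):
    # Binomial expansion: sum_l C(n,l) * Mhat_{n-l} * (2/Q)^l * Omega_l
    ope = sum(comb(n, l) * moment(partonic, n - l) * (2 / Q)**l * moment(soft, l)
              for l in range(n + 1))
    assert full_moment(n) == ope
```

The identity holds exactly; in the physical case the higher soft moments $\Omega_\ell\sim\Lambda_{\rm QCD}^\ell$ provide the power suppression.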
Next we derive an analogous expression for the $n$-th order cumulants for $n\ge 2$, which are generated from Fourier space by $$\begin{aligned}
M^\prime_n\,=&\,\,i^n\,\frac{{\rm d}^n}{{\rm d}y^n}
\bigg[\ln \frac{\sigma(y)}{\sigma}\bigg]_{y=0}
\,.\end{aligned}$$ Eq. (\[eq:sublead\_fact\_thm\]) can be conveniently written as the product of three terms $$\begin{aligned}
\label{eq:prod}
\frac{1}{\sigma}\,{\sigma}(y)\,
=\,&\frac{1}{\hat\sigma}\,\hat{\sigma}_0(y)\, \times
{F}_{\tau,0}\Big(\frac{y\,\Lambda}{Q}\Big)
\\
\times &\bigg[1+\sum_{j=1}^\infty\overline{ \sigma}_j(y)\,
\bigg(\frac{\Lambda}{Q}\bigg)^j\,
\overline{{F}}_{\tau,j}\bigg(\frac{y\,\Lambda}{Q}\bigg)\bigg]
\,,\nonumber\end{aligned}$$ where bars indicate the ratios $$\begin{aligned}
\overline{{\sigma}}_j(y)=\frac{\hat{ \sigma}_j(y)}{\hat{ \sigma}_0(y)},\qquad
\overline{{F}}_{\tau,j}(x)=\frac{{F}_{\tau,j}(x)}{{F}_{\tau,0}(x)}.\end{aligned}$$ From [Eq. ]{} we have ${\overline F}_{\tau,j}(x=0)=0$ for all $j\ge
1$. Taking the logarithm of [Eq. ]{} expresses the thrust cumulants by the sum of three terms $$\begin{aligned}
\label{eq:Mnint}
M^\prime_n &=\,\hat M^\prime_n+\bigg(\frac{2}{Q}\bigg)^n\Omega^\prime_n
+i^n\,\frac{{\rm d}^n}{{\rm d}y^n}\sum_{k=1}^\infty\dfrac{(-1)^{k+1}}{k}
\nonumber\\
& \times \bigg[\sum_{j=1}^\infty\overline{\sigma}_j(y)\,
\bigg(\frac{\Lambda}{Q}\bigg)^j\,\overline{{F}}_{\tau,j}\bigg(\frac{y\,\Lambda}{
Q}\bigg)\bigg]^k\bigg|_{y=0}
\,. \end{aligned}$$ The first two terms involve the perturbative cumulants $\hat M^\prime_n$ and the cumulants of the leading nonperturbative soft functions $\Omega_n^\prime$, $$\begin{aligned}
\hat M^\prime_n\,&=\,\,i^n\,\frac{{\rm d}^n}{{\rm d}y^n}
\bigg[\ln \frac{1}{\sigma}\,\hat{ \sigma}_0(y)\bigg]_{y=0}
\,,\\
\Omega^\prime_n\,&=\frac{i^n}{2^n}\,\frac{{\rm d}^n}{{\rm d}z^n}
\bigg[\ln F_{\tau,0}(z\Lambda)\bigg]_{z=0}
\,.\nonumber\end{aligned}$$ The third term in [Eq. ]{} represents contributions from power-suppressed terms that are not contained in the leading thrust factorization theorem. These terms start at ${\cal O}(\Lambda_{\rm QCD}^2/Q^2)$. At this order only ${\overline F}_{\tau,1}$ has to be considered. The terms ${\overline F}_{\tau,i>2}$ do not contribute due to explicit powers of $\Lambda_{\rm QCD}/Q$. Concerning ${\overline
F}_{\tau,2}$, it must be hit by at least one derivative because ${\overline F}_{\tau,2}(0)=0$, and hence does not contribute either. Performing the $n$-th derivative at $y=0$ and keeping only the dominant term from the power corrections gives the OPE $$\begin{aligned}
\label{eq:OPE-subleading}
M^\prime_n&=\hat M^\prime_n+ \frac{2^n\Omega^\prime_n }{Q^n} + n \,
{\overline M}_{n-1,1} \dfrac{2\,\Omega_{1,1}}{Q^2}\!
+ {\cal O}\Big(\dfrac{\Lambda^3_{\rm QCD}}{Q^3}\Big) .\end{aligned}$$ Here $\Omega_{1,1}$ is defined in [Eq. ]{}. The perturbative coefficient is $$\label{eq:subleading-coeff}
{\overline M}_{j,1}=\bigg[i^j\dfrac{{\mathrm{d}}^j}{{\mathrm{d}}y^j}\,
{\overline \sigma}_1(y)\bigg]_{y=0}$$ and is so far unknown. For $n=2$ the absence of a $1/Q$ power correction in Eq. (\[eq:OPE-subleading\]) was discussed in Ref. [@Korchemsky:2000kp].
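The structure of the first two terms, $M^\prime_n = \hat M^\prime_n + (2/Q)^n\,\Omega^\prime_n$ at leading power, is the familiar statement that cumulants of independent contributions add under convolution. A symbolic sketch with toy characteristic functions (hypothetical numbers, using the same $e^{-iy\tau}$ convention as the text) checks this exactly:

```python
import sympy as sp

y = sp.symbols('y')
Q = sp.Integer(90)

# Characteristic functions of toy discrete distributions (hypothetical numbers)
phi_partonic = (sp.Rational(1, 2) * sp.exp(-sp.I * y * sp.Rational(1, 10))
                + sp.Rational(1, 2) * sp.exp(-sp.I * y * sp.Rational(3, 10)))
phi_soft = (sp.Rational(3, 4) * sp.exp(-sp.I * y * sp.Rational(1, 2))
            + sp.Rational(1, 4) * sp.exp(-sp.I * y * 2))
# The soft momentum k enters thrust as 2k/Q, i.e. with Fourier argument 2y/Q
phi_full = phi_partonic * phi_soft.subs(y, 2 * y / Q)

def cumulant(phi, n):
    """n-th cumulant: i^n d^n/dy^n log(phi) at y = 0."""
    ser = sp.series(sp.log(phi), y, 0, n + 1).removeO()
    return sp.simplify(sp.I**n * sp.factorial(n) * ser.coeff(y, n))

for n in range(1, 4):
    lhs = cumulant(phi_full, n)
    rhs = cumulant(phi_partonic, n) + (sp.Rational(2) / Q)**n * cumulant(phi_soft, n)
    assert sp.simplify(lhs - rhs) == 0
```

In the full analysis this additivity is violated only by the subleading ${\overline F}_{\tau,j}$ terms, which is why they first appear at ${\cal O}(\Lambda_{\rm QCD}^2/Q^2)$ in the cumulant OPE.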
The majority of our analysis will focus on $M_1$ where terms beyond the leading order factorization theorem are power suppressed. For our analysis of $M_{n\ge
2}$ we consider the impact of both $\alpha_s\Omega_1/Q$ corrections, and power corrections suppressed by more powers of $1/Q$. When we analyze $M_{n\ge
2}^\prime$ we will consider both $1/Q^n$ and $1/Q^2$ power corrections in the fits.
Results for $\mathbf{M_1}$ {#sec:results}
==========================
In this section we present the main results of our analysis, the fits to the first moment of the thrust distribution and the determination of $\alpha_s(m_Z)$ and $\Omega_1$. Prior to presenting our final numbers in Sec. \[subsec:finalresult\] we discuss various aspects important for their interpretation. In Sec. \[sub:ingredients\] we discuss the role of the log-resummation contained in our fit code, the perturbative convergence for different kinds of expansion methods, and we illustrate the numerical impact of power corrections and the renormalon subtraction. We also briefly discuss the degeneracy between $\alpha_s(m_Z)$ and $\Omega_1$ that motivates carrying out global fits to data covering a large range of $Q$ values. In Sec. \[sec:erroranalysis\] we present the outcome of the theory parameter scans, on which the estimate of theory uncertainties in our fits are based, and show the final results. We also display results for the fits at various levels of accuracy. Sec. \[subsec:QED\] briefly discusses the effects of QED and bottom mass corrections. Sec. \[sec:FO\] shows the results of a fit in which renormalon subtractions and power corrections are included, but resummation of logs in the thrust distribution is turned off.
![Theoretical computations at various orders in perturbation theory for the total hadronic cross section at the Z-pole normalized to the Born-level cross section $\sigma_0$. Here the small blue points correspond to fixed order perturbation theory, green squares to resummation without renormalon subtractions, and red triangles to resummation with renormalon subtractions. \[fig:M0norm\] ](figs/M0-resum-norm_ac){width="48.50000%"}
![ Theoretical prediction for the first three moments at the Z-pole at various orders in perturbation theory. The blue circles correspond to fixed order perturbation theory (normalized with the total hadronic cross section) at ${\cal O}(\alpha_s)$, ${\cal O}(\alpha_s^2)$ and ${\cal
O}(\alpha_s^3)$, green squares correspond to resummed predictions at NLL, NNLL, and N${}^3$LL normalized with the total hadronic cross section, and red triangles correspond to resummation normalized with the norm of the resummed distribution. For these plots we use $\alpha_s(m_Z)=0.114$. \[fig:Mnorm\] ](figs/M1-resum-norm_ac "fig:"){width="48.50000%"} ![ Theoretical prediction for the first three moments at the Z-pole at various orders in perturbation theory. The blue circles correspond to fixed order perturbation theory (normalized with the total hadronic cross section) at ${\cal O}(\alpha_s)$, ${\cal O}(\alpha_s^2)$ and ${\cal
O}(\alpha_s^3)$, green squares correspond to resummed predictions at NLL, NNLL, and N${}^3$LL normalized with the total hadronic cross section, and red triangles correspond to resummation normalized with the norm of the resummed distribution. For these plots we use $\alpha_s(m_Z)=0.114$. \[fig:Mnorm\] ](figs/M2-resum-norm_ac "fig:"){width="48.50000%"} ![ Theoretical prediction for the first three moments at the Z-pole at various orders in perturbation theory. The blue circles correspond to fixed order perturbation theory (normalized with the total hadronic cross section) at ${\cal O}(\alpha_s)$, ${\cal O}(\alpha_s^2)$ and ${\cal
O}(\alpha_s^3)$, green squares correspond to resummed predictions at NLL, NNLL, and N${}^3$LL normalized with the total hadronic cross section, and red triangles correspond to resummation normalized with the norm of the resummed distribution. For these plots we use $\alpha_s(m_Z)=0.114$. \[fig:Mnorm\] ](figs/M3-resum-norm_ac "fig:"){width="48.50000%"}
For our moment analysis we use the thrust distribution code developed in Ref. [@Abbate:2010xh], where a detailed description of the various ingredients may be found. We are able to perform fits with different levels of accuracy: fixed order at $\mathcal{O}(\alpha_s^3)$, resummation of large logarithms to N${}^3$LL accuracy[^7], power corrections, and subtraction of the leading renormalon ambiguity. Recently the complete calculation of the ${\cal O}(\alpha_s^2)$ hemisphere soft function has become available [@Kelley:2011ng; @Hornig:2011iu; @Monni:2011gb], so the code is updated to use the fixed parameter $s_2=-40.6804$ from Refs. [@Kelley:2011ng; @Monni:2011gb]. A feature of our code is its ability to describe the thrust distribution over the whole range of thrust values. This is achieved with the introduction of what we call profile functions, which are $\tau$-dependent factorization scales. In the $e^+\,e^-$ annihilation process there are three relevant scales: hard, jet and soft, associated with the center of mass energy, the jet mass and the energy of soft radiation, respectively. The purpose of $\tau$-dependent profile functions for these scales is to smoothly interpolate between the peak region where we must ensure that $\mu_i >
\Lambda_{\rm QCD}$, the dijet region where the summation of large logs is crucial, and the multijet region where regular perturbation theory is appropriate to describe the partonic contribution [@Abbate:2010xh]. The major part of the higher order perturbative uncertainties are directly related to the arbitrariness of the profile functions, and are estimated by scanning the space of parameters that specify them. For details on the profile functions and the parameter scans we refer the reader to App. \[app:scan\]. We note that our distribution code was designed for $Q$ values above $22$ GeV.
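The interpolation behavior can be illustrated with a schematic soft-scale profile. The functional form and all numbers below are purely illustrative and are not the actual profile functions of Ref. [@Abbate:2010xh]; the sketch only shows the qualitative requirements of flatness above $\Lambda_{\rm QCD}$ in the peak region and a smooth, monotonic rise toward the multijet region:

```python
import numpy as np

# Schematic soft-scale profile (illustrative shape only): flat at mu0 in the
# peak region, a linear ramp in the dijet region, saturating at ~Q/2 in the
# multijet region. t1, t2 are hypothetical transition points.
def mu_soft(tau, Q, mu0=1.1, t1=0.02, t2=0.4):
    tau = np.asarray(tau, dtype=float)
    ramp = mu0 + (Q / 2.0 - mu0) * (tau - t1) / (t2 - t1)
    return np.where(tau < t1, mu0, np.where(tau < t2, ramp, Q / 2.0))

taus = np.linspace(0.0, 0.5, 6)
scales = mu_soft(taus, Q=91.2)
assert scales[0] == 1.1 and scales[-1] == 91.2 / 2.0   # mu_S stays above Lambda_QCD
assert np.all(np.diff(scales) >= 0)                    # monotonic interpolation
```

Varying the parameters of such profiles (here the stand-ins `mu0`, `t1`, `t2`) within reasonable ranges is what generates the theory scan discussed below.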
Ingredients {#sub:ingredients}
-----------
The theoretical fixed order expressions for the thrust moments contain no large logarithms, so we might not expect the resummation of logarithms in the thrust spectrum to play a role in the numerical analysis. We will show that there is nevertheless some benefit in accounting for the resummation of thrust logarithms. This is studied in Figs. \[fig:M0norm\] and \[fig:Mnorm\], where for $Q=m_Z$ we compare the theoretical value of moments of the thrust distribution obtained in fixed order with those obtained including resummation. (The error bars for the fixed order expansion arise from varying the renormalization scale $\mu$ between $Q/2$ and $2\,Q$, and those for the resummed results arise from our theory parameter scan method.)
In Fig. \[fig:M0norm\] we show the total hadronic cross section $\sigma$ from the fixed order $\alpha_s$ expansion (blue points with small uncertainties sitting on the horizontal line) and determined from the integral over the log-resummed distribution with/without renormalon subtractions (red triangles and green squares). Both expansions are displayed including fixed order corrections up to order $\alpha_s(m_Z)$, $\alpha_s^2(m_Z)$ and $\alpha_s^3(m_Z)$, as indicated by the orders 1, 2, 3, respectively. We immediately notice that the resummed result is not as effective in reproducing the total cross section as the fixed order expansion. Predictions that sum large logarithms have a substantial (perturbative) normalization uncertainty. On the other hand, as shown in Ref. [@Abbate:2010xh], the resummation of logarithms combined with the profile function approach leads to a description of the thrust spectrum that converges nicely over the whole physical $\tau$ range when the norm of the spectrum is divided out, a property not present in the spectrum of the fixed order expansion.
In Fig. \[fig:Mnorm\] the expansions of the partonic moments $\hat M_1$, $\hat
M_2$, and $\hat M_3$ are displayed in the fixed order expansion (blue circles) and the log-resummed result with either the fixed order normalization (green squares) or a properly normalized spectrum (red triangles). We observe that the fixed order expansion has rather small renormalization scale variations but poor convergence, indicating that its scale variation underestimates the perturbative uncertainty. For $\hat M_1$ the fixed order and log-resummed expressions with a common fixed-order normalization (blue circles and green squares) agree well at each order, indicating that, as expected, large logarithms do not play a significant role for this moment. On the other hand, the expansion based on the properly normalized log-resummed spectrum exhibits excellent convergence, and also has larger perturbative uncertainties at the lowest order. In particular, for the red triangles the higher order results are always within the 1-$\sigma$ uncertainties of the previous order. This shows that using the normalized log-resummed spectrum for thrust, which converges nicely for all $\tau$, also leads to better convergence properties of the moments. At third order all the fixed order and resummed partonic moments are consistent with each other. Since the log-resummed moments exhibit more realistic estimates of perturbative uncertainties at each order, we will use the normalized resummed moments for our fit analysis.[^8]
![ Difference between the theoretical prediction for the first moment with varied parameters and the one with default parameters, as a function of $Q$, varying one parameter at a time. The red solid lines correspond to varying $\alpha_s(m_Z)$ by $\pm 0.001$ and the blue dashed lines to varying $\Omega_1$ by $\pm 0.1$ GeV, with respect to the pure QCD best-fit values. There is a strong degeneracy of the two parameters in the region $Q>100$ GeV, which is broken when considering values of $Q$ below $70$ GeV.[]{data-label="fig:degeneracy"}](figs/degeneracy_ac){width="1\linewidth"}
In Fig. \[fig:4-plots\] we show how the inclusion of various ingredients (fixed order contributions, log resummation, power corrections, renormalon subtraction) affects the convergence and uncertainty of our theoretical prediction for the first moment of the thrust distribution as a function of $Q$. From these plots we can observe four points: i) Fixed order perturbation theory does not converge very well. ii) Resummation of large logarithms in the distribution, when normalized with the integral of the resummed distribution, improves convergence for every center of mass energy. iii) The inclusion of power corrections has the effect of a $1/Q$-modulated vertical shift on the value of the first moment. iv) The subtraction of the renormalon ambiguity reduces the theoretical uncertainty. This picture for the first moment is consistent with the results of Ref. [@Abbate:2010xh] for the thrust distribution.
Another important element of our analysis is that we perform global fits, simultaneously using data at a wide range of center of mass energies $Q$. This is motivated by the fact that for each $Q$ there is a complete degeneracy between changing $\alpha_s(m_Z)$ and changing $\Omega_1$, which can be lifted only through a global analysis. Fig. \[fig:degeneracy\] shows the difference between the theoretical prediction of $M_1$ as a function of $Q$, when $\alpha_s(m_Z)$ or $\Omega_1$ are varied by $\pm\,0.001$ and $\pm\,0.1$ GeV, respectively. We see that the effect of a variation in $\alpha_s(m_Z)$ can be compensated with an appropriate variation in $\Omega_1$ at a given center of mass energy (or in a small $Q$ range). This degeneracy is broken if we perform a global fit including the wide range of $Q$ values shown in the figure.
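The degeneracy breaking can be illustrated with a linearized toy fit. The $\ln Q$ sensitivity coefficient below is a stand-in for the true perturbative $Q$ dependence, and all numbers are hypothetical; the point is only that a single $Q$ constrains one parameter combination, while several well separated $Q$ values determine both parameters:

```python
import numpy as np

# Linearized toy model for the first moment (hypothetical coefficients):
#   M1(Q) ~ c(Q) * alpha_s + 2 * Omega1 / Q,  with c(Q) ~ log(Q) as a stand-in
def design(Qs):
    Qs = np.asarray(Qs, dtype=float)
    return np.column_stack([np.log(Qs), 2.0 / Qs])  # d M1/d alpha_s, d M1/d Omega1

# One data point at a single Q: the design matrix has rank 1, so any
# (d_alpha, d_Omega) along its null direction leaves M1 unchanged (degeneracy).
single = design([91.2])
assert np.linalg.matrix_rank(single) == 1

# A global fit over a wide Q range breaks the degeneracy:
Qs = [35.0, 44.0, 91.2, 133.0, 189.0]
A = design(Qs)
truth = np.array([0.114, 0.35])   # alpha_s, Omega1 (GeV); illustrative values
m1 = A @ truth
fit, *_ = np.linalg.lstsq(A, m1, rcond=None)
assert np.allclose(fit, truth)    # both parameters recovered from the global fit
```

With noiseless toy data the global fit recovers both parameters exactly; in the real analysis the finite experimental errors make the width of the $Q$ range the controlling factor.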
Finally, in Fig. \[fig:alpha-evolution\] we show $\alpha_s(m_Z)$ extracted from fits to the first moment of the thrust distribution at three-loop accuracy, sequentially including the different effects our code has implemented: ${\cal O}(\alpha_s^3)$ fixed order, resummation, power corrections, renormalon subtraction, $b$-quark mass and QED. The error bars of the first two points at the left hand side do not contain an estimate of uncertainties associated with the power correction. Though smaller, the resummed result is compatible at the 1-$\sigma$ level with the fixed order result. The inclusion of the power correction is the element which has the greatest impact on $\alpha_s(m_Z)$; for the ${\overline{\textrm{MS}}}$ definition of $\Omega_1$ it reduces the central value by 7%. The subtraction of the renormalon ambiguity in the Rgap scheme reduces the theoretical uncertainty by a factor of 3, while $b$-quark mass and QED effects give negligible contributions with current uncertainties.
order $\alpha_s(m_Z)$ (with $\overline\Omega_1^{{\overline{\textrm{MS}}}}$) $\alpha_s(m_Z)$ (with $\Omega_1^{\rm Rgap}$)
----------------------- ----------------------------------------------------------------------- ----------------------------------------------
NLL $0.1173(82)(13)$ $0.1172(82)(13)$
NNLL $0.1159(41)(14)$ $0.1139(15)(13)$
N${}^3$LL (full)        $0.1153(21)(14)$                                                        $\mathbf{0.1140(07)(14)}$
$\!\!$[N${}^3$LL (QCD+$m_b$)]{}   $0.1160(20)(14)$                                                        $0.1146(07)(14)$
$\!\!$[N${}^3$LL (pure QCD)]{}    $0.1156(21)(14)$                                                        $0.1142(07)(14)$
: Central values for $\alpha_s(m_Z)$ at various orders with theory uncertainties from the parameter scan (first value in parentheses), and experimental and hadronic error added in quadrature (second value in parentheses). The bold value above the line is our final result, while values below the line show the effect of leaving out the QED and $b$-mass corrections.[]{data-label="tab:results"}
order $\overline\Omega_1$ (${\overline{\textrm{MS}}}$) \[GeV\] $\Omega_1$ (Rgap) \[GeV\]
----------------------- ---------------------------------------------------------- ---------------------------
NLL $0.504(157)(45)$ $0.500(153)(45)$
NNLL $0.405(82)(47)$ $0.413(43)(44)$
N${}^3$LL (full)        $0.318(75)(49)$                                            $\mathbf{0.377(39)(44)}$
$\!\!$[N${}^3$LL (QCD+$m_b$)]{}   $0.310(74)(49)$                                            $0.369(34)(44)$
$\!\!$[N${}^3$LL (pure QCD)]{}    $0.350(67)(49)$                                            $0.402(35)(44)$
: Central values for $\Omega_1$ at the reference scales $R_\Delta=\mu_\Delta=2$ GeV and for $\overline\Omega_1$ and at various orders. The parentheses show theory uncertainties from the parameter scan, and experimental and hadronic uncertainty added in quadrature, respectively. The bold value above the line is our final result, while the values below the horizontal line show the effect of leaving out the QED and $b$-mass corrections.[]{data-label="tab:O1results"}
$\alpha_s(m_Z)$ $\chi^2/({\rm dof})$
----------------------------------------------------- ------------------ ----------------------
with $\Omega_1^{\rm Rgap}$ $0.1140(07)(14)$ $1.33$
with $\overline\Omega_1^{{\overline{\textrm{MS}}}}$ $0.1153(21)(14)$ $1.33$
no power corr. $0.1236(39)(03)$ $2.03$
${\cal O}(\alpha_s^3)$ fixed order                    $0.1305(39)(04)$   $2.52$
: Comparison of first moment fit results for analyses with full results and $\Omega_1=\Omega_1^{\rm Rgap}$, with $\overline\Omega_1$ and no renormalon subtractions, without power corrections, and at fixed order without power corrections or log resummation. The first number in parentheses corresponds to the theory uncertainty, whereas the second corresponds to the experimental and hadronic uncertainty added in quadrature for the first two rows, and experimental uncertainty for the last two rows. \[tab:nomodel\]
Uncertainty Analysis {#sec:erroranalysis}
--------------------
![ Experimental $\Delta\chi^2=1$ standard error ellipse (dotted green) at N${}^3$LL accuracy with renormalon subtractions, in the $\alpha_s$-$2\,\Omega_1$ plane. The dashed blue ellipse represents the theory uncertainty which is obtained by fitting an ellipse to the contour of the distribution of the best-fit points. This ellipse should be interpreted as the $1$-$\sigma$ theory uncertainty for $1$-parameter (39% confidence for $2$-parameters). The solid red ellipse represents the total (combined experimental and perturbative) uncertainty ellipse.[]{data-label="fig:ellipses"}](figs/ellipses_ac){width="1\linewidth"}
In Fig. \[fig:M1alpha\] we show the result of our theory scan to determine the perturbative uncertainties. At each order we carried out 500 fits, with theory parameters randomly chosen in the ranges given in Table \[tab:theoryerr\] of App. \[app:scan\] (where further details may be found). The left panel of Fig. \[fig:M1alpha\] shows results with renormalon subtractions using the Rgap scheme for $\Omega_1$, and the right panel shows results in the ${\overline{\textrm{MS}}}$ scheme without renormalon subtractions. Each point in the plot represents the result of a single fit. As described in App. \[app:scan\], in order to estimate perturbative uncertainties, we fit an ellipse to the contour of best-fit points in the $\alpha_s$-$2\,\Omega_1$ plane, and we interpret this as the theoretical error ellipse. This is represented by the dashed lines in Fig. \[fig:M1alpha\]. The solid lines represent the combined (theoretical and experimental) standard error ellipses. These are obtained by adding the theoretical and experimental error matrices which determined the individual ellipses. The central values of the fits, collected in Tables \[tab:results\] and \[tab:O1results\], are determined from the average of the maximal and minimal values of the theory scan, and are very close to the central values obtained when running with our default parameters. The minimal $\chi^2$ values for these fits are quoted in Table \[tab:nomodel\] as well. The best fit based on our full code has $\chi^2/{\rm
dof}=1.325 \pm\,0.002 $ where the range incorporates the variation from the displayed scan points at N$^3$LL. The fit results show a substantial reduction of the theoretical uncertainties with increasing perturbative order. Removal of the $\mathcal O(\Lambda_{\rm QCD})$ renormalon improves the perturbative convergence and leads to a reduction of the theoretical uncertainties at the highest order by a factor of 2 in $\Omega_1$, and a factor of 3 in $\alpha_s(m_Z)$.
To analyze in detail the experimental and the total uncertainties of our results, we refer now to Fig. \[fig:ellipses\]. Here we show the error ellipses for our highest order fit, which includes resummation, power corrections, renormalon subtraction, QED and b-quark mass contributions. The green dotted, blue dashed, and the solid red lines represent the standard error ellipses for, respectively, experimental, theoretical, and combined theoretical and experimental uncertainties. The experimental and theory error ellipses are defined by $\Delta \chi^2=1$ since we are most interested in the 1-dimensional projection onto $\alpha_s$. The correlation matrix of the experimental, theory, and total error ellipses are ($i,j=\alpha_s, 2\,\Omega_1$) $$\begin{aligned}
\label{Vijresult}
V_{ij}& =\,
\left( \begin{array}{cc}
\sigma_{\alpha_s}^2
& \,\, 2 \sigma_{\alpha_s} \sigma_{\Omega_1}\rho_{\alpha\Omega}\\
2\sigma_{\alpha_s} \sigma_{\Omega_1}\rho_{\alpha\Omega}
& \,\,4 \sigma_{\Omega_1}^2
\end{array}\right) ,
\\
V^{\rm exp}_{ij}& =
\left( \begin{array}{cc}
1.93(15)\cdot 10^{-6} & \,\, -1.18(13)\cdot 10^{-4}\,\mbox{GeV}\\
-1.18(13)\cdot 10^{-4}\,\mbox{GeV} & \,\, 0.79(13)\cdot 10^{-2}\,\mbox{GeV}^2
\end{array}\right) \! , {\nonumber}\\
V^{\rm theo}_{ij}& =
\left( \begin{array}{cc}
5.56\cdot 10^{-7} & \,\, 1.85\cdot 10^{-5}~\mbox{GeV}\\
1.85\cdot 10^{-5}~\mbox{GeV} & \,\, 5.82\cdot 10^{-3}~\mbox{GeV}^2
\end{array}\right), {\nonumber}\\
V^{\rm tot}_{ij}& =
\left( \begin{array}{cc}
2.49(15)\cdot 10^{-6} & \,\, -0.99(13)\cdot 10^{-4}\,\mbox{GeV}\\
-0.99(13)\cdot 10^{-4}\,\mbox{GeV} & \,\, 1.37(13)\cdot 10^{-2}\,\mbox{GeV}^2
\end{array}\right)\! , {\nonumber}\end{aligned}$$
where the experimental correlation coefficient is significant and reads $$\begin{aligned}
\label{eq:rhoaO}
\rho^{\rm exp}_{\alpha\Omega}\,=\,-\,0.96(14) \,.\end{aligned}$$ Adding the theory scan uncertainties reduces the correlation coefficient in [Eq. ]{} to $$\begin{aligned}
\label{eq:rhoaOtot}
\rho_{\alpha\Omega}^{\rm total} \,=\, -\,0.54(8).\end{aligned}$$ In both [Eqs. and ]{} the numbers in parentheses capture the range of values obtained from the theory scan. From $V_{ij}^{\rm exp}$ in Eq. (\[Vijresult\]) it is possible to extract the experimental uncertainty for $\alpha_s$ and $\Omega_1$ and the uncertainty due to variations of $\Omega_1$ and $\alpha_s$, respectively: $$\begin{aligned}
\sigma_{\alpha_s}^{\rm exp}
& = \,\sigma_{\alpha_s}\,\sqrt{1-\rho^2_{\alpha\Omega}}
= \,0.0004 \,,
\\
\sigma_{\Omega_1}^{\rm exp}
& = \,\sigma_{\Omega_1}\,\sqrt{1-\rho^2_{\alpha\Omega}}
= \,0.013~\mbox{GeV} \,,
\nonumber\\
\sigma_{\alpha_s}^{\rm \Omega_1}
& = \,\sigma_{\alpha_s}\, |\rho_{\alpha\Omega}|\,
= \,0.0014 \,,
\nonumber\\
\sigma_{\Omega_1}^{\rm \alpha_s}
& = \,\sigma_{\Omega_1}\, |\rho_{\alpha\Omega}|\,
= \,0.044~\mbox{GeV}
\,.\nonumber\end{aligned}$$ Fig. \[fig:ellipses\] shows the total uncertainty in our final result quoted in [Eq. ]{} below.
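The quoted projections follow directly from the covariance matrices above. A short script (our own cross-check, using only numbers quoted in the text) reproduces the correlation coefficients, the matrix addition $V^{\rm tot}=V^{\rm exp}+V^{\rm theo}$, and the experimental uncertainties:

```python
import numpy as np

# Error matrices in the (alpha_s, 2*Omega1) plane, as quoted in the text
V_exp = np.array([[1.93e-6, -1.18e-4],
                  [-1.18e-4, 0.79e-2]])
V_theo = np.array([[5.56e-7, 1.85e-5],
                   [1.85e-5, 5.82e-3]])
V_tot = V_exp + V_theo   # theory and experimental error matrices simply add

def decompose(V):
    # V = [[s_a^2, 2 s_a s_O rho], [2 s_a s_O rho, 4 s_O^2]]
    s_a = np.sqrt(V[0, 0])
    s_O = np.sqrt(V[1, 1]) / 2.0
    rho = V[0, 1] / (2.0 * s_a * s_O)
    return s_a, s_O, rho

s_a, s_O, rho_exp = decompose(V_exp)
print(f"rho_exp = {rho_exp:.2f}")                                    # ~ -0.96
print(f"sigma_alpha_exp = {s_a * np.sqrt(1 - rho_exp**2):.4f}")      # ~ 0.0004
print(f"sigma_Omega_exp = {s_O * np.sqrt(1 - rho_exp**2):.3f} GeV")  # ~ 0.013
_, _, rho_tot = decompose(V_tot)
print(f"rho_tot = {rho_tot:.2f}")                                    # ~ -0.54
```

The small differences in the last digit relative to the quoted values reflect the rounding of the matrix entries in the text.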
The correlation exhibited by the green dotted experimental error ellipse in Fig. \[fig:ellipses\] is given by the line describing the semimajor axis $$\begin{aligned}
\frac{\Omega_1}{32.82\,{\rm GeV}} = 0.1255 - \alpha_s(m_Z) \,.\end{aligned}$$ Note that extrapolating this correlation to the extreme case where we neglect the nonperturbative corrections ($\Omega_1=0$) gives $\alpha_s(m_Z)\to 0.1255$.
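As a cross-check of this correlation (our own arithmetic, not part of the original analysis), the best-fit central values $\alpha_s(m_Z)=0.1140$ and $\Omega_1=0.377$ GeV quoted in Sec. \[subsec:finalresult\] indeed lie on this line to good accuracy:

```python
# Best-fit central values from the final result; the line is the semimajor
# axis of the experimental error ellipse quoted above.
alpha_s, Omega1 = 0.1140, 0.377   # Omega1 in GeV
assert abs(Omega1 / 32.82 - (0.1255 - alpha_s)) < 1e-4
```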
Effects of QED and the $b$-mass {#subsec:QED}
-------------------------------
The experimental correction procedures applied to the AMY, JADE, SLC, DELPHI and OPAL data sets were typically designed to eliminate initial state photon radiation, while those of the TASSO, L3 and ALEPH collaborations eliminated initial and final state photon radiation. It is straightforward to test for the effect of these differences in the fits by using our theory code with QED effects turned on or off depending on the data set. Using our N${}^3$LL order code in the Rgap scheme we obtain the central values $\alpha_s(m_Z)=0.1143$ and $\Omega_1=0.376$ GeV. Comparing to our default results given in Tabs. \[tab:results\] and \[tab:O1results\], which are based on the theory code where QED effects are included for all data sets, we see that the central value for $\alpha_s$ is larger by $0.0003$ and the one for $\Omega_1$ is smaller by $0.001$ GeV. This shift is substantially smaller than our perturbative uncertainty. Hence our choice to use the theory code with QED effects included everywhere as the default for our analysis does not cause an observable bias regarding experiments which remove final state photons.
By comparing the N$^3$LL (pure massless QCD) and N$^3$LL (QCD $+\,m_b$) entries in Tabs. \[tab:results\] and \[tab:O1results\] we see that including finite $b$-mass corrections causes a very mild shift of $\simeq +0.0004$ to $\alpha_s(m_Z)$, and a somewhat larger shift of $\simeq -0.033\,{\rm GeV}$ to $\Omega_1$. In both cases these shifts are within the 1-$\sigma$ theory uncertainties. In the N$^3$LL (pure massless QCD) analysis the $b$-quark is treated as a massless flavor, hence this analysis differs from that done by JADE [@Pahl:2008uc] where primary $b$ quarks were removed using MC generators.
Final Results {#subsec:finalresult}
-------------
![ First moment of the thrust distribution as a function of the center of mass energy $Q$, using the best-fit values for $\alpha_s(m_Z)$ and $\Omega_1$ in the Rgap scheme as given in Eq. (\[eq:asO1finalcor\]). The blue band represents the perturbative uncertainty determined by our theory scan. Data is from ALEPH, OPAL, L3, DELPHI, JADE, AMY and TASSO. []{data-label="fig:theovsexp"}](figs/theovsexp_ac){width="1\linewidth"}
As our final result for $\alpha_s(m_Z)$ and $\Omega_1$, obtained at N${}^3$LL order in the Rgap scheme for $\Omega_1(R_\Delta,\mu_\Delta)$, including bottom quark mass and QED corrections, we obtain $$\begin{aligned}
\label{eq:asO1finalcor}
\alpha_s(m_Z) & \, = \,
0.1140 \,\pm\, (0.0004)_{\rm exp}
\\[2mm] & \,\pm\, (0.0013)_{\rm hadr} \,\pm \, (0.0007)_{\rm pert},
\nonumber\\[4mm]
\Omega_1(R_\Delta,\mu_\Delta) & \, = \,
0.377 \,\pm\, (0.013)_{\rm exp}
\nonumber\\[2mm] & \,\pm\, (0.042)_{\rm \alpha_s(m_Z)}
\,\pm \, (0.039)_{\rm pert}~\mbox{GeV},
\nonumber\end{aligned}$$ where $R_\Delta=\mu_\Delta=2$ GeV and we quote individual uncertainties for each parameter. Here $\chi^2/\rm{dof}=1.33$. Eq. (\[eq:asO1finalcor\]) is the main result of this work.
In Fig. \[fig:theovsexp\] we show the first moment of the thrust distribution as a function of the center of mass energy $Q$, including QED and $m_b$ corrections. We use here the best-fit values given in Eq. (\[eq:asO1finalcor\]). The band displays the theoretical uncertainty and has been determined with a scan on the parameters included in our theory, as explained in App. \[app:scan\]. The fit result is shown in comparison with data from ALEPH, OPAL, L3, DELPHI, JADE, AMY and TASSO. Good agreement is observed for all $Q$ values.
![Comparison of $\alpha_s(m_Z)$ and $\Omega_1$ determinations from thrust first moment data (red upper right ellipses) and thrust tail data (blue lower left ellipses). The plot corresponds to fits with N${}^3$LL accuracy and in the Rgap scheme. The tail fits are performed with our improved code which uses a new nonsingular two-loop function, and the now known two-loop soft function. Dashed lines correspond to theory uncertainties, solid lines correspond to $\Delta\chi^2=1$ combined theoretical and experimental error ellipses, and wide-dashed lines correspond to $\Delta\chi^2=2.3$ combined error ellipses (corresponding to 1-$\sigma$ uncertainty in two dimensions).[]{data-label="fig:moment-tail-comparison"}](figs/moment-tail-comparison_ac){width="1\linewidth"}
It is interesting to compare the result of this analysis with the result of our earlier fit of thrust tail distributions in Ref. [@Abbate:2010xh]. This is shown in Fig. \[fig:moment-tail-comparison\]. Here the red upper shaded area and corresponding ellipses show the results from fits to the first moment of the thrust distribution, while the blue lower shaded area and ellipses show the result from fits of its tail region. Both analyses show the theory (dashed lines) and combined theoretical and experimental (solid lines) standard error ellipses, as well as the ellipses which correspond to $\Delta\chi^2=2.3$ (68% CL for a two-parameter fit, wide-dashed lines). We see that the two analyses are compatible.
Fixed Order Analysis of $M_1$ {#sec:FO}
=============================
It is interesting to compare the result of our best fit with an analysis where we do not perform resummation in the thrust distribution, but where power corrections and renormalon subtractions are still considered. This is achieved by setting the scales $\mu_H$, $\mu_S$, $\mu_J$, $\mu_{\rm ns}$ in our theoretical prediction all to a common scale $\mu\sim Q$. We use $R$ for the scale of the renormalon subtractions and of the renormalization-group-evolved power correction. Finally, we neglect QED and $b$-mass corrections in this section. Up to the treatment of power corrections and perturbative subtractions, the fixed order results used for this analysis are thus equivalent to those used in Ref. [@Gehrmann:2009eh].
The OPE formula for the first moment in the Rgap scheme for this situation is given by $$\begin{aligned}
\label{eq:GAP-OPE}
M_1&= {\hat M}_1^{\rm Rgap}(R,\mu)
+\dfrac{2\,\Omega_1(R,\mu)}{Q}\,,\\
\Omega_1(R,\mu)&=\Omega_1
+\bar{\Delta}(R,\mu)-\bar{\Delta}(R_\Delta,\mu_\Delta)\,.\nonumber\end{aligned}$$ In Eq. (\[eq:GAP-OPE\]), the $\Omega_1$ with no arguments is the value determined by the fits, which is in the Rgap scheme at the reference scale $\mu_\Delta=R_\Delta=2\,{\rm GeV}$. Here $\bar{\Delta}(R,\mu)$ is the running gap parameter, and $\bar{\Delta}(R,\mu)-\bar{\Delta}(R_\Delta,\mu_\Delta)$ is used to sum logarithms from $(R_\Delta,\mu_\Delta)$ to $(R,\mu)$ in [Eq. ]{}. The analytic expression for $\bar{\Delta}(R,\mu)-\bar{\Delta}(R_\Delta,\mu_\Delta)$ can be found in Eq. (41) of Ref. [@Abbate:2010xh] (see also [@Hoang:2008fs]). The perturbative ${\hat M}_1^{\rm Rgap}$ is related to the perturbative ${\overline{\textrm{MS}}}$ result by $$\begin{aligned}
\label{eq:MhatRgap}
{\hat M}_1^{\rm Rgap}(R,\mu)
&={\hat M}_1^{\rm\overline{MS}}(\mu)+\dfrac{2\,\delta(R,\mu)}{Q}
\,,\\
\delta(R,\mu)&=e^{\gamma_E}R \sum_{i=1}^3\alpha_s(\mu)^i\delta_i(R,\mu)\,,
\nonumber\end{aligned}$$ where the subtraction terms are [@Hoang:2008fs; @Abbate:2010xh] $$\begin{aligned}
\label{eq:d123}
\delta_1(R,\mu) &= -0.848826 L_R \,, \\
\delta_2(R,\mu) &= -0.156279 - 0.46663 L_R
- 0.517864 L_R^2 \,, {\nonumber}\\
\delta_3(R,\mu)&= -\,0.552986
- 0.622467 L_R - 0.777219 L_R^2
\notag\\&\quad - 0.421261 L_R^3
\,,{\nonumber}\end{aligned}$$ with $L_R=\ln(\mu/R)$. In [Eq. ]{} $\delta(R,\mu)$ cancels the ${\cal
O}(\Lambda_{\rm QCD})$ renormalon in ${\hat M}_1^{\rm\overline{MS}}(\mu)$, and it is crucial that the coupling expansions in both these objects are done at the same scale, $\alpha_s(\mu)$, for this cancellation to take place. The relation to the ${\overline{\textrm{MS}}}$ scheme power correction is $\overline \Omega_1 = \Omega_1 +
\delta(R_\Delta,\mu_\Delta)$, and the OPE in the ${\overline{\textrm{MS}}}$ scheme at this level is $$\begin{aligned}
\label{eq:MSbar-OPE}
M_1&= {\hat M}_1^{\overline{\rm MS}}(\mu)
+\dfrac{2\,{\overline \Omega}_1}{Q}\,.\end{aligned}$$ In the ${\overline{\textrm{MS}}}$ result there are no perturbative renormalon subtractions (and thus no log resummation related to the renormalon subtractions) and the parameter $\overline\Omega_1$ has a $\Lambda_{\rm QCD}$ renormalon ambiguity.
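The subtraction series of Eqs. (\[eq:MhatRgap\]) and (\[eq:d123\]) is simple enough to evaluate numerically. The following minimal sketch (the function name and sample inputs are ours, not part of the paper) assembles $\delta(R,\mu)=e^{\gamma_E}R\sum_i \alpha_s(\mu)^i\,\delta_i(R,\mu)$ from the quoted coefficients:

```python
import math

def delta_subtraction(R, mu, alpha_s):
    """O(alpha_s^3) renormalon subtraction delta(R, mu) of Eq. (eq:MhatRgap),
    built from the series coefficients delta_i quoted in Eq. (eq:d123).
    R and mu are in GeV; alpha_s is evaluated at the scale mu."""
    LR = math.log(mu / R)
    d1 = -0.848826 * LR
    d2 = -0.156279 - 0.46663 * LR - 0.517864 * LR**2
    d3 = -0.552986 - 0.622467 * LR - 0.777219 * LR**2 - 0.421261 * LR**3
    # delta(R, mu) = e^{gamma_E} * R * sum_i alpha_s(mu)^i * delta_i(R, mu)
    gamma_E = 0.5772156649015329
    return math.exp(gamma_E) * R * (alpha_s * d1
                                    + alpha_s**2 * d2
                                    + alpha_s**3 * d3)

# At mu = R all L_R terms drop out, so only the constant pieces
# of delta_2 and delta_3 survive (the sample alpha_s value is ours).
print(delta_subtraction(2.0, 2.0, 0.3))
```

At $\mu=R$ the subtraction is negative, consistent with its role of removing the positive ${\cal O}(\Lambda_{\rm QCD})$ renormalon contribution from ${\hat M}_1^{\rm\overline{MS}}(\mu)$.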
We will perform fits to the experimental data following the same procedure discussed in the previous section. Using Eq. (\[eq:GAP-OPE\]) we consider two cases, i) $R\sim Q$ where $\Omega_1$ is renormalization group evolved to $R$ and there are no large logarithms in the renormalon subtractions, and ii) fixing $R$ at the reference scale, $R=2\,{\rm GeV}$, in which case large logarithms are present in the renormalon subtractions. We will also consider a third case, iii), using the ${\overline{\textrm{MS}}}$-OPE of [Eq. ]{}.
order ${\cal O}(\alpha_s^2)$ ${\cal O}(\alpha_s^3)$
-------------------------------------------------------------- ------------------------ ------------------------
\(i) Rgap R-RGE $0.1159(27)(14)$ $0.1146(06)(14)$
\(ii) Rgap FO Subt. $0.1185(63)(15)$ $0.1138(20)(14)$
\(iii) ${\overline{\textrm{MS}}}$ for ${\overline \Omega}_1$ $0.1278(124)(19)$ $0.1186(38)(14)$
: ${\overline{\textrm{MS}}}$ scheme values for $\alpha_s(m_Z)$ obtained from various fixed order analyses. The first value in parentheses is the uncertainty from higher order perturbative corrections (obtained by the method described in the text), while the second value is the combined experimental and hadronization uncertainty.[]{data-label="tab:FOresults"}
order ${\cal O}(\alpha_s^2)$ ${\cal O}(\alpha_s^3)$
-------------------------------------------------------------- ------------------------ ------------------------
\(i) Rgap R-RGE $0.407(8)(45)$ $0.400(8)(45)$
\(ii) Rgap FO Subt. $0.216(126)(133)$ $0.359(42)(62)$
\(iii) ${\overline{\textrm{MS}}}$ for ${\overline \Omega}_1$ $0.388(62)(47)$ $0.350(54)(44)$
: $\Omega_1$ or $\overline\Omega_1$ values obtained from fixed order analyses at various orders. The first value in parentheses is the uncertainty from higher order perturbative corrections (obtained by the method described in the text), while the second value is the combined experimental and hadronization uncertainty.[]{data-label="tab:FOO1results"}
Results for these fits are shown in Tabs. \[tab:FOresults\] and \[tab:FOO1results\]. For all cases $\chi^2/\rm{dof} \simeq 1.32$.
For case i) we take $R\sim \mu\sim Q$, so there are no large logarithms in the $\delta(R,\mu)$ of [Eq. ]{}, and all large logarithms associated with renormalon subtractions are summed in $\bar\Delta(R,\mu)-\bar\Delta(R_\Delta,\mu_\Delta)$. Here we estimate the perturbative uncertainty in $\alpha_s(m_Z)$ and $\Omega_1$ by varying the renormalization scale $\mu$ and the scale $R$ independently in the range $\{2\,Q,Q/2\}$. We use one-half of the maximum-minus-minimum variation as the uncertainty, and the average as the central value. The results for both $\alpha_s(m_Z)$ and $\Omega_1$ are fully compatible at 1-$\sigma$ with our final results shown in [Eq. ]{}. The agreement is even closer with the central values of the fits without QED or $b$-mass corrections in Tabs. \[tab:results\] and \[tab:O1results\], namely $\alpha_s(m_Z)=0.1142(07)(14)$ and $\Omega_1=0.402(35)(44)$. The one difference is that the perturbative uncertainty for $\Omega_1$ in Tab. \[tab:FOO1results\] is a factor of three smaller. The case i) results in the table also exhibit nice order-by-order convergence, and if one plots $M_1$ versus $Q$ (analogous to Fig. \[fig:Mnorm\]) the uncertainty bands are entirely contained within one another. In order to be conservative, we take our resummation analysis in [Eq. ]{} as our final result (with its larger perturbative uncertainty and inclusion of QED and $b$-mass corrections).
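The half-spread error prescription used here can be sketched in a few lines; the scan values below are hypothetical placeholders, and we read "the average" as the midpoint of the scan extremes:

```python
def scan_estimate(fit_values):
    """Central value and perturbative uncertainty from a scale scan:
    the midpoint of the extremes, and one-half of (max - min)."""
    lo, hi = min(fit_values), max(fit_values)
    return 0.5 * (hi + lo), 0.5 * (hi - lo)

# hypothetical best-fit alpha_s(mZ) values obtained while varying
# mu and R independently over {Q/2, 2Q}
central, uncert = scan_estimate([0.1140, 0.1146, 0.1149, 0.1152])
```

By construction the quoted uncertainty covers the full spread of the scan, so it is insensitive to how densely the $(\mu,R)$ range is sampled.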
For case ii) we take $R\sim 2\,{\rm GeV}$ and $\mu\sim Q$ as typical values, so there are large logarithms, $\ln(R/Q)$, in the $\delta(R,\mu)$ renormalon subtractions. The central value for $\alpha_s(m_Z)$ at ${\cal O}(\alpha_s^3)$ is again fully compatible with that in [Eq. ]{}. Here we estimate the perturbative uncertainty in $\alpha_s(m_Z)$ by varying $\mu\in \{2\,Q,Q/2\}$ and $R=2\pm 1\,{\rm GeV}$. Due to the large logarithms the perturbative uncertainty in $\alpha_s(m_Z)$ for case ii), shown in Tab. \[tab:FOresults\], is three times larger than for case i). It is also compatible with the difference between central values at ${\cal O}(\alpha_s^2)$ and ${\cal O}(\alpha_s^3)$. To estimate the uncertainty for $\Omega_1$ we only vary $\mu$, which leads to the rather large error estimate for $\Omega_1$ shown in Tab. \[tab:FOO1results\]. The contrast between the precision of the results in case i) and those in case ii) illustrates the importance of summing large logarithms in the renormalon subtractions.
For case iii), where the ${\overline\Omega}_1$ power correction is defined in ${\overline{\textrm{MS}}}$, we do not have renormalon subtractions (and hence no large logs in the subtractions). Due to the poor convergence of the fixed order prediction for the first moment, seen from the blue fixed order points in Fig. \[fig:Mnorm\], it is not clear whether varying $\mu$ in the range $\{2\,Q,Q/2\}$ gives a realistic perturbative uncertainty estimate. Hence we determine the perturbative uncertainty for case iii) in Tabs. \[tab:FOresults\] and \[tab:FOO1results\] by varying $\mu$ in the range $\{2\,Q,Q/2\}$ and multiplying the result by a factor of two. The perturbative uncertainties for $\alpha_s(m_Z)$ are a factor of two larger than in case ii). The central values for $\alpha_s(m_Z)$ in case iii) are also larger, but are compatible with those in case ii) and [Eq. ]{} within 1-$\sigma$.
It is interesting to compare our results to those of Ref. [@Gehrmann:2009eh], which also performs a fixed order analysis at ${\cal O}(\alpha_s^3)$, and incorporates subtractions based on the dispersive model.[^9] Here the subtractions contain logarithms, $\ln(\mu_I/\mu)$, where $\mu_I\sim 2\,{\rm GeV}$ and $\mu\sim Q$, that are not resummed. From a fit to $M_1$ in thrust they obtained $\alpha_s(m_Z)\,=\,0.1166\,\pm\,0.0015_{\rm exp}\,\pm\,0.0032_{\rm th}$ where the first uncertainty is experimental and the second is theoretical. Our corresponding result is the one in case ii), and the central values and uncertainties for $\alpha_s(m_Z)$ are fully compatible. The perturbative uncertainty they obtain is a factor of $1.6$ larger than ours. It arises from varying the renormalization scale $\mu\in \{2\,Q,Q/2\}$, the ${\cal
O}(\alpha_s^2)$ Milan factor $\cal M$ by 20%, and the infrared scale $\mu_I=2\pm 1\,{\rm GeV}$ in the dispersive model. In our analysis there is no precise analog of the Milan factor because our subtractions and Rgap scheme for $\Omega_1$ fully account for two and three gluon infrared effects up to ${\cal
O}(\alpha_s^3)$ that are associated with thrust. Other than this, the difference can simply be attributed to the differences in subtraction schemes, which have an impact on the $\mu$ scale uncertainty. Finally, note that we have implemented the analytic results of Ref. [@Gehrmann:2009eh] and confirmed their $\mu$ and $\mu_I$ uncertainties.
JADE Datasets {#sec:dataset}
=============
As discussed in Sec. \[sec:intro\], our global dataset includes thrust moment results from ALEPH, OPAL, L3, DELPHI, AMY, TASSO and the JADE data from Ref. [@MovillaFernandez:1997fr]. In this section we discuss the impact on the results in Secs. \[sec:results\] and \[sec:FO\] of replacing the JADE data from Ref. [@MovillaFernandez:1997fr] with moment results from an updated analysis carried out in Ref. [@Pahl:2008uc], which removes the contributions from primary $b\bar b$ pair production and in addition provides measurements at $Q=14$ and $22$ GeV. In Fig. \[fig:JADE-data\] we show the data for $M_1$, including the JADE results from Refs. [@MovillaFernandez:1997fr] and [@Pahl:2008uc]. The most significant difference occurs at $Q=44\,{\rm GeV}$. Our analysis will treat these datasets on the same footing without attempting to account for the effect of removing the $b\bar b$ events.
For our analysis here, with theory results at N$^3$LL+${\cal
O}(\alpha_s^3)$, we continue to exclude center of mass energies $Q \le 22$ GeV as in Sec. \[sec:results\]. The dependence of the global fit result on the data set for $M_1$ is shown in Fig. \[fig:JADE-effect\]. Theoretical uncertainties are analyzed again by the scan method giving the central dots and three inner ellipses, while the outer three ellipses show the respective combined 1-$\sigma$ total experimental and theoretical uncertainties. Using all experimental data but excluding JADE measurements entirely gives the fit result shown by the upper blue ellipse. This result is compatible at 1-$\sigma$ with the central red ellipse which shows our default analysis, using the Ref. [@MovillaFernandez:1997fr] JADE $M_1$ measurements. Replacing these two JADE data points by the four $Q>22\,{\rm GeV}$ JADE $M_1$ results from Ref. [@Pahl:2008uc] yields the lower green ellipse (whose center is $\simeq
1.5$-$\sigma$ from the central ellipse). For this fit the $\chi^2/{\rm dof}$ increases from $1.33$ to $1.52$, demonstrating reduced compatibility between the datasets. For this reason, together with the concern about the impact of removing primary $b\bar b$ events with MC simulations, we have used only JADE data from Ref. [@MovillaFernandez:1997fr] in our main analysis.
A similar pattern is observed using the fixed order fits of $M_1$ discussed in Sec. \[sec:FO\]. In this case it is also straightforward to include the $Q=14,22\,{\rm GeV}$ JADE data from Ref. [@Pahl:2008uc]. If these two points are added to our default dataset (which contains $Q=35$ and $44$ GeV as the lowest $Q$ results for $M_1$) then we find $\alpha_s(m_Z)=0.1155\pm0.0012$ and $\Omega_1=0.361\pm0.035\,{\rm GeV}$ with $\chi^2/{\rm dof}=1.3$. This is compatible at 1-$\sigma$ with our final pure QCD result in Tab. \[tab:results\]. If we include the entire set of JADE data from Ref. [@Pahl:2008uc] instead of those from Ref. [@MovillaFernandez:1997fr] then we find $\alpha_s(m_Z)=0.1166\pm 0.0012$ and $\Omega_1=0.306\pm 0.033\,{\rm GeV}$ with $\chi^2/{\rm dof}=1.6$, very similar to the values observed for the green lower ellipse in Fig. \[fig:JADE-effect\]. Hence, overall the fixed order analysis does not change the comparison of fits with the two different JADE datasets.
![Experimental data for the first moment of thrust. The solid line corresponds to the result from the first row of Tab. \[tab:FOresults\], and uses a fixed order code with power corrections in a renormalon-free scheme, but no resummation (neither QED nor bottom mass corrections).[]{data-label="fig:JADE-data"}](figs/new-old-JADE-data_ac){width="1\linewidth"}
![Fit results when using ALEPH, DELPHI, OPAL, L3, AMY, TASSO, but no JADE data (upper blue ellipse), when also including JADE data from Ref. [@MovillaFernandez:1997fr] (red central ellipse) \[our default data set\], and when instead including the JADE data from Ref. [@Pahl:2008uc] (green lower ellipse). The ellipses here correspond to $1$-$\sigma$ for two parameters (68% CL).[]{data-label="fig:JADE-effect"}](figs/JADE-effect-fit_ac){width="1\linewidth"}
Higher Moment Analysis {#sec:comparison}
======================
In this section we consider higher moments, $M_{n\ge 2}$, which have been measured experimentally up to $n=5$. From [Eq. ]{} we see that these moments have power corrections $\propto 1/Q^k$ for $k\ge 1$. Since for the perturbative moments we have $\hat M_{n}/\hat M_{n+1} \simeq 4$–$9$, we estimate that the $1/Q^2$ power corrections are suppressed by $9\Lambda_{\rm QCD}/Q$, which varies from $1/8$ to $1/44$ for the $Q$-values in our dataset, $Q\ge 35\,{\rm GeV}$. Hence, for the analysis in this section we can safely drop the $1/Q^2$ and higher power corrections and use the form $$\begin{aligned}
M_n &= \hat M_n+ \frac{2\,n\,\Omega_1}{Q}\,\hat M_{n-1} \,.\end{aligned}$$
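The truncated OPE above is a one-line formula; the sketch below (the helper name and the sample value of $\hat M_1$ are ours, not fit output) makes the size of the power correction explicit. Recall that $\hat M_0=1$ for a normalized distribution, so for $n=1$ the nonperturbative shift is just $2\,\Omega_1/Q$:

```python
def moment_ope(n, Mhat_n, Mhat_nm1, Omega1, Q):
    """Leading-power OPE for the n-th thrust moment (Omega1, Q in GeV):
    M_n = Mhat_n + (2 n Omega1 / Q) * Mhat_{n-1}."""
    return Mhat_n + 2.0 * n * Omega1 / Q * Mhat_nm1

# first moment at Q = mZ with a placeholder perturbative value Mhat_1
# and Omega1 = 0.40 GeV (close to the fit result quoted in the text)
M1 = moment_ope(1, 0.0650, 1.0, 0.40, 91.2)
```

At the Z-pole the shift $2\,\Omega_1/Q\approx 0.009$ is a sizable fraction of the first moment, which is why $M_1$ fits can constrain $\Omega_1$ at all.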
By using our fit results for $\alpha_s(m_Z)$ and $\Omega_1$ from [Eq. ]{} we can directly make predictions for the moments $M_{2,3,4,5}$. This tests how well the theory does at calculating the perturbative contributions $\hat M_{2,3,4,5}$. The results for these moments are shown in Fig. \[fig:mnlogplot\] and correspond to $\chi^2/{\rm dof} = 1.3,
2.5, 0.8, 1.1$ for $n=2,3,4,5$, respectively, indicating that our formalism does quite well at reproducing these moments. The larger $\chi^2/{\rm dof}$ for $n=3$ is related to a quite significant spread in the experimental data for this moment at $Q\gtrsim 190\,{\rm GeV}$. Note also that the relation $M_{n}/M_{n+1} \simeq 4$–$9$ is satisfied by the experimental moments.
![Predictions for the higher moments $M_2$, $M_3$, $M_4$, $M_5$ using the best fit values from [Eq. ]{}, and our full N$^3$LL+${\cal
O}(\alpha_s^3)$ code in the Rgap scheme, but with QED and mass effects turned off. The central points use different symbols for different moments. []{data-label="fig:mnlogplot"}](figs/mnlogplot_ac){width="1\linewidth"}
An alternative way to test the higher moments is to perform a fit to these data. Since we have excluded the new JADE data in Ref. [@Pahl:2008uc], we do not have a significant dataset at smaller $Q$ values for the higher moments. With our higher moment dataset the degeneracy between $\alpha_s(m_Z)$ and $\Omega_1$ is not broken for $n\ge 2$, and one finds very large experimental errors for a two-parameter fit already at $n=2$. However, we can still fit for $\alpha_s(m_Z)$ from data for each individual $M_{n\ge 2}$ by fixing the value of $\Omega_1$ to the best fit value in [Eq. ]{} from our fit to $M_1$. For this exercise we use our full N$^3$LL+${\cal O}(\alpha_s^3)$ code, but with QED and mass effects turned off. The outcome is shown in Fig. \[fig:higher-fits\] and Tab. \[tab:higher-fits\]. We find only a mild dependence of $\alpha_s$ on $n$, and all values are compatible with the fit to the first moment to well within 1-$\sigma$. This again confirms that our value for $\Omega_1$ and perturbative predictions for $\hat M_{n\ge 2}$ are consistent with the higher moment data.
$n$ $\alpha_{s}(m_{Z})$ $\Delta_{{\rm th}}[\alpha_{s}]$ $\Delta_{{\rm exp}}[\alpha_{s}]$ ${\chi^{2}}/{{\rm dof}}$
----- --------------------- --------------------------------- ---------------------------------- --------------------------
$2$ $0.1149$ $0.0009$ $0.0005$ $1.24$
$3$ $0.1157$ $0.0009$ $0.0005$ $1.87$
$4$ $0.1151$ $0.0011$ $0.0010$ $0.39$
$5$ $0.1156$ $0.0015$ $0.0010$ $0.23$
: Numerical results for $\alpha_s$ from one-parameter fits to the $M_{n}$ moments. The second column gives the central values for $\alpha_s(m_Z)$, the third and fourth show the theoretical and experimental errors, respectively. Since $\Omega_1$ was fixed for this analysis we do not quote a hadronization error. \[tab:higher-fits\]
![One-parameter fits for $\alpha_s(m_Z)$ to the first five moments. We use our full set up with power corrections and renormalon subtractions, but with QED and mass corrections turned off. The value of $\Omega_1$ is fixed from [Eq. ]{}. The error bars include theoretical and experimental errors added in quadrature (not including uncertainty in $\Omega_1$).[]{data-label="fig:higher-fits"}](figs/higher-fits_ac){width="0.7\linewidth"}
In Ref. [@Gehrmann:2009eh] a two-parameter fit to higher thrust moments was carried out using OPAL data and the latest low energy JADE data. For $n=2$ to $n=5$ the results increase linearly from $\alpha_s(m_Z)=0.1202\pm (0.0018)_{\rm
exp}\pm (0.0046)_{\rm th}$ to $\alpha_s(m_Z)=0.1294\pm (0.0027)_{\rm exp}\pm
(0.0070)_{\rm th}$ respectively, and the weighted average for the first five moments of thrust is $\alpha_s(m_Z)\,=\,0.1208\,\pm\,0.0018_{\rm exp}\,\pm\,
0.0045_{\rm th}$. The results are fully compatible within the uncertainties, and there is an indication of a trend towards larger $\alpha_s(m_Z)$ extracted from higher moments. In our analysis we do not observe this trend, but our results should not be directly compared since we have only performed a one-parameter fit. After further averaging over results obtained from event shapes other than thrust, Ref. [@Gehrmann:2009eh] obtained as its final result $\alpha_s(m_Z)\,=\,0.1153\,\pm\,0.0017_{\rm exp}\,\pm\,0.0023_{\rm th}$. This is again perfectly compatible with our result in [Eq. ]{}.
Higher power corrections from Cumulant Moments {#sec:power-data}
==============================================
In this section we use cumulant moments as defined in Eq. (\[eq:OPE-subleading\]) to discuss the presence of higher power corrections and their constraints from experimental data. There are two types of power corrections that are relevant for the cumulants, those defined rigorously by QCD matrix elements which come from the leading thrust factorization theorem, $\Omega_n^\prime$, and those from our simple parameterization of higher order power corrections in Eq. (\[eq:sublead\_fact\_thm\]), $\Omega_{n,j\ge 1}$. For the latter a systematic matching onto QCD matrix elements has not been carried out and the corresponding perturbative coefficients have not been determined.
For the second cumulant $M_2'$ both types of power correction contribute to the leading $1/Q^2$ term in the combination $$\begin{aligned}
\label{eq:tO2}
\tilde \Omega_2^\prime &= \Omega_2^\prime + {\overline M}_{1,1}\, \Omega_{1,1}
\,.\end{aligned}$$ Without a calculation of the perturbative coefficient ${\overline M}_{1,1}$ we cannot argue that either one dominates, and hence we keep both of them. In terms of this parameter the OPE with its leading power correction for the second cumulant becomes simply $$\begin{aligned}
\label{eq:M2pOPE}
M_2' = \hat M_2' + \frac{4\,\tilde\Omega_2^\prime}{Q^2} \,,\end{aligned}$$ where $\hat M_2'$ is computed from our leading order factorization theorem, see [Eq. ]{}. For the third cumulant $M_3'$ the power correction from the leading thrust factorization theorem is $1/Q^3$, while that from the subleading factorization theorem is $1/Q^2$, so $$\begin{aligned}
\label{eq:M3pOPE}
M_3' = \hat M_3'
+ \frac{6\,{\overline M}_{2,1}\,\Omega_{1,1}}{Q^2}
+ \frac{8\,\Omega_3^\prime}{Q^3} \,,\end{aligned}$$ where we keep both of these power corrections.
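A sketch of how a leading $1/Q^2$ coefficient such as the $4\,\tilde\Omega_2^\prime$ of Eq. (\[eq:M2pOPE\]) can be extracted: a one-parameter weighted least-squares fit of cumulant differences to $c/Q^2$. The function and the synthetic data arrays are ours, with the input value chosen near $(0.74\,{\rm GeV})^2$:

```python
def fit_coeff_over_Q2(Qs, diffs, errs):
    """Weighted least-squares fit of diffs ~ c / Q^2.
    Returns the coefficient c and its one-sigma error."""
    num = sum(d / (Q**2 * e**2) for Q, d, e in zip(Qs, diffs, errs))
    den = sum(1.0 / (Q**4 * e**2) for Q, e in zip(Qs, errs))
    return num / den, den ** -0.5

# synthetic closure test: build exact model data from a known
# tilde-Omega_2' = 0.55 GeV^2 and check that the fit recovers it
Qs = [35.0, 44.0, 91.2, 133.0, 200.0]
diffs = [4.0 * 0.55 / Q**2 for Q in Qs]
c, dc = fit_coeff_over_Q2(Qs, diffs, [1e-4] * len(Qs))
```

Because the model is linear in $c$, the fit has a closed-form solution and the error is simply the inverse square root of the weighted design sum.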
For our analysis we assume that the perturbative coefficients ${\overline M}_{1,1}$ and ${\overline
M}_{2,1}$ get contributions at tree-level, and hence that their logarithmic dependence on $Q$ is $\alpha_s$-suppressed. Thus for fits to $M_2'$ and $M_3'$ we consider the three parameters $\tilde\Omega_2^\prime$, ${\overline
M}_{2,1}\,\Omega_{1,1}$, and $\Omega_3^\prime$. Our theoretical expectations are that $(\Omega_n^\prime)^{1/n} \sim \Lambda_{\rm QCD}$ and $(\Omega_{1,1})^{1/2} \sim (\Omega_n^\prime)^{1/n}$.
![Prediction of cumulants using our best-fit values for $\alpha_s(m_Z)$ and $\Omega_1$ from the fit to the first thrust moment. The band includes only the theoretical uncertainty from the random scan. The theory prediction includes QED and mass corrections, in contrast to our numerical analysis which has no QED and $b$-mass effects and uses our default model, which translates into the following values for higher nonperturbative power corrections: $\Omega_2^\prime=\Omega_1^2/4$, $\Omega_3^\prime=\Omega_1^3/8$, $\Omega_4^\prime=3\,\Omega_1^4/32$, $\Omega_5^\prime=3\,\Omega_1^5/32$.[]{data-label="fig:cumulant-moments-prediction"}](figs/mnplogplot_ac){width="1\linewidth"}
Since most of the experimental collaborations provide measurements only for moments, we compute the cumulants using Eq. (\[eq:variance-skewness\]). To propagate the errors to the $n$-th cumulant one needs the correlations between the first $n$ moments, both statistical and systematic. Following experimental procedures, we estimate the statistical correlation matrix from Monte Carlo simulations. These matrices are provided in Ref. [@Pahl:thesis] for $Q=14,\,91.3,\,206.6\,$GeV.[^10] The computation of these matrices does not depend on the simulation of the detector and hence can a priori be applied to the data provided by any experimental collaboration. It was found that the statistical correlation matrices depend very mildly on the center of mass energy, so our approach is to use the matrix computed at $14\,$GeV for $Q<60\,$GeV, the one computed at $91.3\,$GeV for $60\,{\rm GeV}\leq Q<120\,{\rm GeV}$, and the one at $206.6\,$GeV for $Q\geq 120\,$GeV. The systematic correlation matrix for the moments is estimated using the minimal overlap model based on the systematic uncertainties, and then converted to uncertainties for the cumulants. We use this method even for the few cases in which experimental collaborations provide uncertainties for the cumulants directly, since we want to treat all data on the same footing. In these cases we have checked that the results are very similar.
To some extent the prescription we employ lies in between two extreme situations: a) moments are completely uncorrelated, and b) cumulants are completely uncorrelated. Situation a) corresponds to the naive assumption that the moments are independent. Situation b) is motivated by considering that properties like the location of the peak of the distribution ($\sim M_1$), the width of the peak ($\sim M_2'$), etc. are independent pieces of information. By assuming moments are uncorrelated one overestimates the errors of the cumulants. This would translate into larger experimental errors for our fit results and very small $\chi^2/{\rm
dof}$. Assuming that cumulants are uncorrelated induces very strong positive correlations between moments, which then leads to small uncertainties for the cumulants, especially for the variance, and larger $\chi^2/{\rm dof}$ values. With the prescription we adopt, one finds a weaker positive correlation among the moments, which translates into a situation between these two extremes.[^11]
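The conversion of moment uncertainties into cumulant uncertainties described above amounts to a Jacobian transformation of the moment covariance matrix. A minimal sketch (the helper and the sample inputs are ours), using the standard relations $M_2'=M_2-M_1^2$ and $M_3'=M_3-3M_1M_2+2M_1^3$:

```python
import numpy as np

def cumulant_covariance(m1, m2, cov_moments):
    """Propagate a 3x3 covariance matrix of the moments (M1, M2, M3) to
    the cumulants M2' = M2 - M1^2 and M3' = M3 - 3 M1 M2 + 2 M1^3
    via the linearized transformation J C J^T."""
    # rows: gradients of M2' and M3' with respect to (M1, M2, M3)
    J = np.array([[-2.0 * m1,                1.0,       0.0],
                  [6.0 * m1**2 - 3.0 * m2,  -3.0 * m1,  1.0]])
    return J @ cov_moments @ J.T

# toy example: central moment values of typical thrust size and an
# (unrealistically) uncorrelated unit covariance for illustration
C = cumulant_covariance(0.06, 0.01, np.eye(3))
```

The off-diagonal entries of the result show how even uncorrelated moment errors induce correlations between the cumulants, which is why the choice of moment correlation model matters.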
                                   central              $\Delta_{{\rm th}}$   $\Delta_{{\rm exp}}$   ${\chi^{2}}/{{\rm dof}}$
---------------------------------- -------------------- --------------------- ---------------------- --------------------------
$(\tilde \Omega^\prime_2)^{1/2}$   $0.74\phantom{0}$    $0.09\phantom{0}$     $0.11\phantom{0}$      $0.72$
$(\Theta_2)^{1/2}$                 $1.21\phantom{0}$    $0.10\phantom{0}$     $0.22\phantom{0}$
$(\Theta_3)^{1/3}$                 $-2.61\phantom{0}$   $0.15\phantom{0}$     $1.51\phantom{0}$
: Determination of power corrections from fits to $M_2^\prime$ and $M_3^\prime$. All values in the table are in GeV. Columns two to four correspond to central value, theoretical uncertainty, and experimental uncertainty, respectively (the latter includes both statistical and systematic errors added in quadrature). The values displayed correspond to the linear combinations in Eq. (\[eq:linear-combinations\]), which for $M_3^\prime$ diagonalize the experimental error matrix. \[tab:omega-fits\]
{width="48.00000%"} {width="48.00000%"} {width="48.00000%"} {width="48.00000%"} []{data-label="fig:omega-plots"}
For our analysis we use our highest order code as described in [Sec. \[sec:results\]]{}, and take the value $\alpha_s(m_Z)=0.1142$ obtained in our fit to the first moment data with this code (see Tab. \[tab:results\]). Since we are analyzing cumulants $M_{n\ge 2}^\prime$, the value of $\Omega_1$ is not required, and there is no distinction between having this parameter in ${\overline{\textrm{MS}}}$ or the Rgap scheme. Hence, in order to fit for higher power corrections we use our purely perturbative code in the ${\overline{\textrm{MS}}}$ scheme. Thus all of the power correction parameters extracted in this section are in the ${\overline{\textrm{MS}}}$ scheme. The perturbative error is estimated as in [Sec. \[sec:results\]]{}, by a 500-point scan of theory parameters (see App. \[app:scan\]).
Before we fit for the higher power corrections, we will check how well our factorization theorem predicts the experimental cumulants using a simple exponential model for the nonperturbative soft function (the model with only one coefficient $c_0=1$ from Refs. [@Abbate:2010xh; @Ligeti:2008ac]). This model has higher power corrections that are determined by its one parameter $\Omega_1$: $\Omega_2^\prime=\Omega_1^2/4$, $\Omega_3^\prime=\Omega_1^3/8$, $\Omega_4^\prime=3\,\Omega_1^4/32$, $\Omega_5^\prime=3\,\Omega_1^5/32$. Results are shown in Fig. \[fig:cumulant-moments-prediction\], where good agreement between theory and data is observed.
For the $M_n'$ in Fig. \[fig:cumulant-moments-prediction\] we also observe that $M_{n+1}^\prime /M_n^\prime \sim 1/10$, so the $(n+1)$-th order cumulant is generically one order of magnitude smaller than the $n$-th order cumulant.
Next we fit for the power correction parameters $\tilde\Omega_2^\prime$, ${\overline M}_{2,1}\,\Omega_{1,1}$, and $\Omega_3^\prime$. For this analysis we neglect QED and $b$-mass effects. To facilitate this we consider the difference between the experimental cumulants $M_n'$ and the perturbative theoretical cumulants $\hat M_n'$, namely $M_2'-\hat M_2'$ and $M_3'-\hat M_3'$. From [Eqs. and ]{} these differences are determined entirely by the power correction parameters we wish to fit. The results are shown in Tab. \[tab:omega-fits\] and the upper two panels of Fig. \[fig:omega-plots\]. From the $M_2'-\hat M_2'$ fit a fairly precise result is obtained for $(\tilde \Omega_2^\prime)^{1/2}$. Its central value of $740\,{\rm MeV}$ is compatible with $\sim 2\Lambda_{\rm QCD}$, and hence agrees with naive dimensional analysis. Interestingly, we have checked that when a constant and a $1/Q$ term are included in the second cumulant fit, their coefficients come out compatible with zero, in support of the theoretically expected $1/Q^2$ dependence.
For the fit to $M_3'-\hat M_3'$ there is a strong correlation between $\Omega_3^\prime$ and ${\overline M}_{2,1}\,\Omega_{1,1}$ even though they occur at different orders in $1/Q$. Since the $\chi^2$ is quadratic in these two parameters we can determine the linear combinations that exactly diagonalize their correlation matrix: $$\begin{aligned}
\label{eq:linear-combinations}
\Theta_2 & \equiv \bigg[\frac{6\,{\overline
M}_{2,1}}{0.07}\bigg] \frac{\Omega_{1,1}}{4} + (0.3105\,{\rm GeV}^{-1})\,
\Omega_3^\prime
\,,\\
\Theta_3 & \equiv \Omega_3^\prime
-(0.3105\,{\rm GeV})\, \bigg[\frac{6\,{\overline
M}_{2,1}}{0.07}\bigg] \frac{\Omega_{1,1}}{4}
\,.\nonumber\end{aligned}$$ Note that these combinations arise solely from experimental data. We have presented the coefficients of these combinations grouping together a factor of $(6{\overline M}_{2,1}/0.07)$, which is close to unity if $6{\overline
M}_{2,1}\simeq \hat M_1$. The results in Tab. \[tab:omega-fits\] exhibit a reasonable uncertainty for $\Theta_2$, but a large uncertainty for $\Theta_3$. Hence, at this time it is not possible to determine the original parameters $\Omega_3^\prime$ and ${\overline M}_{2,1}\,\Omega_{1,1}$ independently. As in the previous case, the fit does not exhibit any evidence for a $1/Q$ correction, confirming the theoretical prediction for this cumulant.
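Combinations like $\Theta_2$ and $\Theta_3$ can be obtained generically as the eigenvectors of the $2\times 2$ covariance matrix of the two fit parameters, since a quadratic $\chi^2$ is exactly diagonalized by them. A sketch with a toy covariance (the numbers are illustrative, not the actual error matrix of this fit):

```python
import numpy as np

def decorrelated_combinations(cov):
    """Rows of the returned matrix are the linear combinations of the
    two fit parameters whose error matrix is diagonal (zero correlation);
    the eigenvalues are the corresponding variances."""
    variances, vecs = np.linalg.eigh(cov)
    return variances, vecs.T

# toy 2x2 covariance with a strong positive correlation, mimicking the
# degeneracy between Omega_3' and M21*Omega_{1,1}
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
variances, combos = decorrelated_combinations(cov)
```

The strongly correlated toy matrix yields one well-determined combination (small variance) and one poorly determined combination (large variance), mirroring the precise $\Theta_2$ and imprecise $\Theta_3$ in Tab. \[tab:omega-fits\].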
In Fig. \[fig:omega-plots\] we also show results for cumulant differences $M_n^\prime-\hat M_n^{\prime}$ versus $Q$ for $n=4$ and $n=5$. In all cases $n=2,3,4,5$ the perturbative cumulants $\hat M_n^{\prime}$ are the largest component of the cumulant moments $M_n^\prime$, as can be verified by the reduction of the values by a factor of $2$–$3$ in Fig. \[fig:omega-plots\] compared to the values in Fig. \[fig:cumulant-moments-prediction\]. We also observe an order of magnitude suppression between the $(n+1)$’th and $n$’th terms, $(M_{n+1}'-\hat M_{n+1}')/(M_n'-\hat M_n')\sim 1/10$. For $n=4,5$ the OPE formula in [Eq. ]{} involves both $2^n\Omega_n^\prime/Q^n$ terms and terms with non-trivial perturbative coefficients: $(2\,n\,\overline M_{n-1,1}
\Omega_{1,1})/Q^2+\ldots$ (where the ellipses denote terms at $1/Q^3$ and beyond). If the former dominated, we would expect a suppression by $2\,\Lambda_{\rm QCD}/Q$ for the $(n+1)$'th versus $n$'th term. The observed suppression by $1/10$ is less strong and is instead consistent with domination by the $1/Q^2$ power correction terms in the $n=4,5$ cumulant differences. This would imply $[(n+1) \overline M_{n,1}] / [n \overline M_{n-1,1}]\sim 1/10$ and could in principle be verified by an explicit computation of these coefficients. In Fig. \[fig:omega-plots\] we show fits to a $1/Q^2$ power correction, which are essentially dominated by the lowest energy point at the Z-pole. The results are $\sqrt{8\,\overline{M}_{3,1}\,\Omega_{1,1}}=0.20\pm0.08$ from fits to $M_4^\prime$ and $\sqrt{10\,\overline{M}_{4,1}\,\Omega_{1,1}}=0.07\pm0.06$ from fits to $M_5^\prime$. These values agree with our expectation of the $\sim 1/10$ suppression between the two $\overline M_{n,1}$ perturbative coefficients.
In this section we have determined the $1/Q^2$ power correction parameter $\tilde \Omega_2^\prime$ with $25\%$ accuracy, and find it is $3.8\,\sigma$ different from zero. For the higher moments there are important contributions from an $\Omega_{1,1}/Q^2$ power correction, which appears even to dominate for $n\ge 4$. The experimental data clearly support the pattern expected from the OPE relation in Eq. (\[eq:OPE-subleading\]).
Conclusions {#eq:conclusions}
===========
In this work we have used a full $\tau$-distribution factorization formula developed by the authors in a previous publication [@Abbate:2010xh] to study moments and cumulant moments (cumulants) of the thrust distribution. Perturbatively it incorporates $\mathcal{O}(\alpha_s^3)$ matrix elements and nonsingular terms, a resummation of large logarithms, $\ln^k\tau$, to ${\mbox{N${}^3$LL}\xspace}$ accuracy, and the leading QED and bottom mass corrections. It also describes the dominant nonperturbative corrections, is free of the leading renormalon ambiguity, and sums up large logs appearing in perturbative renormalon subtractions.
Theoretically there are no large logs in the perturbative expression of the thrust moments, and when normalized in the same way the perturbative result from the full $\tau$ code with resummation agrees very well with the fixed order results. Nevertheless, when the code is properly self normalized it significantly improves the order-by-order perturbative convergence towards the ${\cal O}(\alpha_s^3)$ result. In particular, the results remain within the perturbative error band of the previous order, in contrast to what is observed using fixed order expressions. This lends support to the theoretical uncertainty analysis from the code with resummation.
From fits to the first moment of the thrust distribution, $M_1$, we find the results for $\alpha_s(m_Z)$ and the leading power correction parameter $\Omega_1$ given in [Eq. ]{}. They are in nice agreement with values from the fit to the tail of the thrust distribution in Ref. [@Abbate:2010xh]. The moment results have larger experimental uncertainties, and these dominate over theoretical uncertainties, in contrast with the situation in the tail region analysis of Ref. [@Abbate:2010xh]. Repeating the $M_1$ fit using a fixed order code with no $\ln\tau$ resummation, but still retaining the summation of large logs in the perturbative renormalon subtractions, yields fully compatible results for $\alpha_s(m_Z)$ and $\Omega_1$.
Using a Fourier space operator product expansion we have parameterized higher order power corrections which are beyond the leading factorization formula, and analyzed the OPE both for moments $M_n$ and cumulants $M_n'$. In the moments $M_n$ the $\Omega_1/Q$ power correction from the leading factorization theorem enters with a perturbative suppression in its coefficient, and dominates numerically over higher $1/Q$ corrections. In contrast, the cumulants $M_{n\ge 2}'$ depend on higher order cumulant power corrections $\Omega_n'/Q^n$ from the leading factorization theorem, and are independent of $\Omega_1/Q$, …, $\Omega^\prime_{n-1}/Q^{n-1}$. Data on these cumulants appear to indicate that they receive important contributions from a $1/Q^2$ power correction that enters at a level beyond the leading thrust factorization theorem. Thus the OPE reveals that cumulants are appealing quantities for exploring subleading power corrections. We performed a fit to the second cumulant and determined a non-vanishing $\tilde \Omega_2^\prime/Q^2$ power correction with a precision of $25\%$.
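The moment-to-cumulant conversion underlying this comparison can be made concrete with a short numerical sketch. This is generic statistics rather than the analysis code of this work; the Gaussian check mirrors the fact that Gaussian cumulants vanish for $n>2$:

```python
from math import comb

def cumulants_from_moments(m):
    """Raw moments m[0..n] (with m[0] = 1) -> cumulants kappa[1..n] via the
    standard recursion kappa_n = m_n - sum_{k=1}^{n-1} C(n-1, k-1) kappa_k m_{n-k}."""
    kappa = [0.0] * len(m)
    for n in range(1, len(m)):
        kappa[n] = m[n] - sum(comb(n - 1, k - 1) * kappa[k] * m[n - k]
                              for k in range(1, n))
    return kappa

# Check against a Gaussian with mean mu and variance s2, whose raw moments
# are known in closed form; all cumulants with n > 2 must vanish.
mu, s2 = 0.3, 0.25
m = [1.0, mu, mu**2 + s2, mu**3 + 3*mu*s2, mu**4 + 6*mu**2*s2 + 3*s2**2]
kappa = cumulants_from_moments(m)
# kappa[1] = mu, kappa[2] = s2, kappa[3] = kappa[4] = 0
```

Applied to the experimental moments $M_n$, this recursion yields the cumulants $M_n^\prime$ discussed above.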
It would be interesting to extend the analysis performed here, based on OPE formulas related to factorization theorems, to other event shape moments and cumulants. Examples of interest include the heavy jet mass event shape [@Clavelli:1979md; @Chandramohan:1980ry; @Clavelli:1981yh; @Catani:1991bd; @Chien:2010kc], angularities [@Berger:2003iw; @Hornig:2009vb], as well as more exclusive event shapes like jet broadening [@Catani:1992jc; @Dokshitzer:1998kz; @Chiu:2011qc; @Becher:2011pf; @Chiu:2012ir]. Other event shape moments were considered at ${\cal O}(\alpha_s^3)$ in Ref. [@Gehrmann:2009eh] in the context of the dispersive model for the $1/Q$ power corrections.
This work was supported in part by the Office of Nuclear Physics of the U.S. Department of Energy under the Contracts DE-FG02-94ER40818, DE-FG02-06ER41449, the European Community’s Marie-Curie Research Networks under contract MRTN-CT-2006-035482 (FLAVIAnet), MRTN-CT-2006-035505 (HEPTOOLS) and PITN-GA-2010-264564 (LHCphenOnet), and the U.S. National Science Foundation, grant NSF-PHY-0969510 (LHC Theory Initiative). VM has been supported in part by a Marie Curie Fellowship under contract PIOF-GA-2009-251174, and IS in part by a Friedrich Wilhelm Bessel award from the Alexander von Humboldt foundation. VM, IS and AHH are also supported in part by MISTI global seed funds. We thank C. Pahl for useful discussions concerning the treatment of JADE experimental data. MF thanks S. Fleming for discussions.
Theory parameter scan {#app:scan}
=====================
parameter default value range of values
----------------------- ----------------- --------------------------
$\mu_0$ $2$[GeV]{} $1.5$ to $2.5$ [GeV]{}
$n_1$ $5$ $2$ to $8$
$t_2$ $0.25$ $0.20$ to $0.30$
$e_J$ $0$ $-1$, $0$, $1$
$e_H$ $1$ $0.5$ to $2.0$
$n_s$ $0$ $-1$, $0$, $1$
$\Gamma^{\rm cusp}_3$ $1553.06$ $-1553.06$ to $+4659.18$
$j_3$ $0$ $-3000$ to $+3000$
$s_3$ $0$ $-500$ to $+500$
$\epsilon_2$ $0$ $-1$, $0$, $1$
$\epsilon_3$ $0$ $-1$, $0$, $1$
: Theory parameters relevant for estimating the theory uncertainty, their default values and range of values used for the theory scan during the fit procedure.[]{data-label="tab:theoryerr"}
In this Appendix we describe the method we use to estimate uncertainties in our analysis. We will briefly review the profile functions and the theoretical parameters which determine the theory uncertainty. We will also describe the scan over those parameters and the effects they have on the fit results.
The profile functions used in Ref. [@Abbate:2010xh], to which we refer for a more extensive description, are $\tau$-dependent factorization scales which allow us to smoothly interpolate between the theoretical constraints the hard, jet and soft scale must obey in different regions of the thrust distribution: $$\begin{aligned}
\label{eq:profile123}
& \text{1) peak:} & &
{\mu_H}\sim Q \,,\ \
{\mu_J}\sim \sqrt{\Lambda_{\rm QCD}\,Q}\,,\
& {\mu_S}&\gtrsim \Lambda_{\rm QCD}\,,
{\nonumber}\\
& \text{2) tail:} & &
{\mu_H}\sim Q \,,\ \
{\mu_J}\sim Q\sqrt{\tau} \,,\
&{\mu_S}&\sim Q\,\tau \,,
{\nonumber}\\
& \text{3) far-tail:} & &
{\mu_H}={\mu_J}= {\mu_S}\sim Q \,.\end{aligned}$$ The factorization theorem derived for thrust in Ref. [@Abbate:2010xh] is formally invariant under ${\cal O}(1)$ changes of the profile function scales. The residual dependence on the choice of profile functions constitutes one part of the theoretical uncertainties and provides a method to estimate higher order perturbative corrections. We adopt a set of six parameters that can be varied in our theory error analysis which encode this residual freedom while still satisfying the constraints in Eq. (\[eq:profile123\]).
For the profile function at the hard scale, we adopt $$\begin{aligned}
{\mu_H}=&\,e_H\,Q,\end{aligned}$$ where $e_H$ is a free parameter which we vary from $1/2$ to $2$ in our theory error analysis.
For the soft profile function we use the form $$\begin{aligned}
{\mu_S}(\tau)=\left\{\begin{array}{ll}\mu_0+\frac{b}{2 t_1} \tau^2,\hskip1.6cm
& 0\leq \tau\leq t_1,\\
b \, \tau+d, & t_1\leq \tau\leq t_2,\\
{\mu_H}-\frac{b}{1-2 t_2}(\frac{1}{2}-\tau)^2, &
t_2\leq \tau\leq \frac{1}{2}.\end{array} \right. \end{aligned}$$ Here, $t_1$ and $t_2$ represent the borders between the peak, tail and far-tail regions, and $\mu_0$ is the value of $\mu_S$ at $\tau=0$. Since the thrust value where the peak region ends and the tail region begins is $Q$ dependent, $t_1\simeq 1/Q$, we define the parameter $n_1$ by $t_1=n_1/(Q/1\,\mbox{GeV})$. To ensure that ${\mu_S}(\tau)$ is a smooth function, the quadratic and linear forms are joined by demanding continuity of the function and its first derivative at $\tau=t_1$ and $\tau=t_2$, which fixes $b=2\,\big({\mu_H}-\mu_0\big)/\big(t_2-t_1+\frac{1}{2}\big)$ and $d=\big[\mu_0(t_2+\frac{1}{2})-{\mu_H}\,t_1\big]/\big(t_2-t_1+\frac{1}{2}\big)$. In our theory error analysis we vary the free parameters $n_1$, $t_2$ and $\mu_0$.
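As a cross-check of these continuity conditions, the piecewise soft profile can be implemented directly. The following sketch uses the default parameters of Table \[tab:theoryerr\] and $Q=m_Z$; it is an illustration, not the code used in the fits:

```python
def mu_S(tau, Q, mu_H, mu0=2.0, n1=5.0, t2=0.25):
    """Soft profile function: quadratic in the peak region, linear in the
    tail, quadratic again in the far tail.  b and d are fixed by continuity
    of mu_S and its first derivative at tau = t1 and tau = t2."""
    t1 = n1 / Q                                   # Q in GeV, so t1 = n1/(Q/GeV)
    b = 2.0 * (mu_H - mu0) / (t2 - t1 + 0.5)
    d = (mu0 * (t2 + 0.5) - mu_H * t1) / (t2 - t1 + 0.5)
    if tau <= t1:                                 # peak region
        return mu0 + b / (2.0 * t1) * tau**2
    elif tau <= t2:                               # tail region
        return b * tau + d
    else:                                         # far-tail region
        return mu_H - b / (1.0 - 2.0 * t2) * (0.5 - tau)**2

Q = 91.1876                                       # GeV, Z-pole
# mu_S(0) = mu0 = 2 GeV and mu_S(1/2) = mu_H by construction
endpoints = (mu_S(0.0, Q, mu_H=Q), mu_S(0.5, Q, mu_H=Q))
```

Evaluating `mu_S` across $\tau=t_1$ and $\tau=t_2$ confirms that the three branches join smoothly with the quoted $b$ and $d$.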
The profile function for the jet scale is determined by the natural relation between the hard, jet, and soft scales $$\begin{aligned}
{\mu_J}(\tau)=\bigg(1+
e_J\Big(\frac{1}{2}-\tau\Big)^2\bigg)\,\sqrt{{\mu_H}\,{\mu_S}(\tau)}\,.\end{aligned}$$ The term involving the free ${\cal O}(1)$-parameter $e_J$ implements a modification to this relation and vanishes in the multijet region where $\tau=1/2$. We use a variation of $e_J$ to include the effect of such modifications in our estimation of the theoretical uncertainties.
In our theory error analysis we vary $\mu_{\rm ns}$ to account for our ignorance on the resummation of logarithms of $\tau$ in the nonsingular corrections. We consider three possibilities $$\begin{aligned}
\mu_{\rm ns}(\tau)=\left\{\begin{array}{ll}
{\mu_H},\hskip3.08cm & n_s=1,\\
{\mu_J}(\tau), & n_s=0, \\
\frac{1}{2}[\,{\mu_J}(\tau)+{\mu_S}(\tau)\,], & n_s=-1.
\end{array}\right. \end{aligned}$$
The complete set of theoretical parameters and their ranges of variation are summarized in Table \[tab:theoryerr\].
Besides the parameters associated with the profile functions, the other theory parameters are $\Gamma_3^{\rm cusp}$, $j_3$, $s_3$, and $\epsilon_{2,3}$. The cusp anomalous dimension at $\mathcal O(\alpha_s^4)$, $\Gamma_3^{\rm cusp}$, is estimated via Padé approximants, and we assign a 200% uncertainty to this approximation. $j_3$ and $s_3$ represent the nonlogarithmic 3-loop terms in the position-space hemisphere jet and soft functions, respectively. These two parameters and their variations are estimated via Padé approximations. The last two parameters, $\epsilon_2$ and $\epsilon_3$, allow us to include the statistical errors in the numerical determination of the nonsingular distribution at two (from EVENT2 [@Catani:1996jh; @Catani:1996vz]) and three (from EERAD3 [@GehrmannDeRidder:2007bj]) loops, respectively.
At each order we randomly scan the parameter space summarized in Table \[tab:theoryerr\] with a uniform measure, extracting 500 points. Each of the points in Fig. \[fig:M1alpha\] is the result of the fit performed with a single choice of a point in the parameter space. The contour of the area in the $\alpha_s$-$2\,\Omega_1$ plane covered by the fit results at each given order is fitted to an ellipse, which is interpreted as a 1-$\sigma$ theoretical uncertainty. The ellipse is determined as follows: in a first step we determine the outermost points on the plane (defined by the outermost convex polygon). We then perform a fit to these points using a $\chi^2$ which is the square of the formula for an ellipse: $$\begin{aligned}
\label{eq:ellipse-fitter}
\chi^2_{\rm ellipse}
& = \sum_i \big[a\,(\alpha_i-\alpha_0)^2+ 4\,b\, (\Omega_i - \Omega_0)^2
\\
& +\,2\, c \,(\alpha_i-\alpha_0)(\Omega_i - \Omega_0)\,-\,1\big]^2\,.\nonumber\end{aligned}$$ Here the sum is over the outermost points. The coordinates for the center of the ellipse, $\alpha_0$ and $\Omega_0$, are fixed ahead of time to the average of the maximum and minimum values of $\alpha_s(m_Z)$ and $\Omega_1$ in the scan. We then minimize $\chi^2_{\rm ellipse}$ to determine the parameters $a,b,c$ of the ellipse.
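With the center fixed, the minimization of Eq. (\[eq:ellipse-fitter\]) is a linear least-squares problem in $(a,4b,2c)$. A minimal sketch (using numpy, with synthetic points standing in for the outermost scan results):

```python
import numpy as np

def fit_ellipse(alpha, omega):
    """Fit a, b, c of chi^2_ellipse = sum_i [a x_i^2 + 4 b y_i^2
    + 2 c x_i y_i - 1]^2, with x = alpha - alpha_0, y = Omega - Omega_0 and
    the center fixed to the midpoint of the extreme scan values."""
    a0 = 0.5 * (alpha.max() + alpha.min())
    o0 = 0.5 * (omega.max() + omega.min())
    x, y = alpha - a0, omega - o0
    design = np.column_stack([x**2, 4.0 * y**2, 2.0 * x * y])
    (a, b, c), *_ = np.linalg.lstsq(design, np.ones_like(x), rcond=None)
    return a, b, c, a0, o0

# Synthetic "outermost points": exact samples of the ellipse
# a x^2 + 4 b y^2 + 2 c x y = 1 with known coefficients.
a_true, b_true, c_true = 2.0, 1.5, 0.5
S = np.array([[a_true, c_true], [c_true, 4.0 * b_true]])
L = np.linalg.cholesky(S)           # v = L^{-T} u maps the unit circle onto the ellipse
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
v = np.linalg.solve(L.T, np.stack([np.cos(theta), np.sin(theta)]))
a, b, c, a0, o0 = fit_ellipse(v[0] + 0.11, v[1] + 0.30)
```

In the scan itself the input points are the vertices of the outermost convex polygon; replacing the synthetic points by those vertices leaves the procedure unchanged.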
One could further express the coefficients $a$ and $b$ by $$\begin{aligned}
\label{eq:ab}
a & = \frac{1\,+\,\sqrt{1\,+\,4\,c^2 \,\Delta \alpha ^2 \,\Delta \Omega ^2}}{2\, \Delta \alpha ^2}\,, \\
b & = \frac{1\,+\,\sqrt{1\,+\,4\,c^2 \,\Delta \alpha ^2 \,\Delta \Omega ^2}}{8\, \Delta \Omega ^2}\,, \nonumber\end{aligned}$$ where $\Delta \alpha$ and $\Delta \Omega$ are half the difference of the maximum and minimum values of $\alpha_s(m_Z)$ and $\Omega_1$, respectively, on the ellipse. Setting $\Delta \alpha$ and $\Delta \Omega$ to the corresponding values obtained from the fit points of the scan (i.e. the perturbative errors), the coefficients $a$ and $b$ can be fixed and only $c$ remains as a free parameter. The minimization of $\chi^2_{\rm ellipse}$ in Eq. (\[eq:ellipse-fitter\]) gives almost identical results regardless of whether or not Eqs. (\[eq:ab\]) are imposed.
In Fig. \[fig:barplots\] we vary a single parameter of Table \[tab:theoryerr\] keeping all the others fixed at their respective default values, and we plot the change of $\alpha_s(m_Z)$ and $\Omega_1$ as compared to the values obtained from the first moment thrust fit with the default setup. In the figure, the dark shaded blue area represents a variation where the parameter is larger than the default value, and the light shaded green one where the parameter is smaller. The largest uncertainty is associated with the variation of the hard scale, $e_H$. The value of $\alpha_s(m_Z)$ is similarly affected by the uncertainty of the profile function parameters, the statistical error from the numerical determination of the 3-loop nonsingular distribution from EERAD3 [@GehrmannDeRidder:2007bj], and by the parameter $j_3$. It is rather insensitive to the variation of the 4-loop cusp anomalous dimension and the statistical error from the determination of the 2-loop nonsingular contribution to the thrust distribution. The value of $\Omega_1$ is mainly sensitive to the profile function parameters and $\epsilon_3$, but is quite insensitive to $j_3$.
[^1]: The cumulants begin to differ for $n\ge 4$ from the so-called central moments, $\langle (\tau-M_1)^n\rangle$. Both cumulants and central moments are shift independent, but the cumulants are slightly preferred because they are only sensitive to a single moment of the leading order soft function in the thrust factorization theorem.
[^2]: We thank C. Pahl for clarifying precisely how this was done.
[^3]: Another approach to hadronization corrections to moments of event shapes distributions based on renormalons is that of Gardi and Grunberg [@Gardi:1999dq].
[^4]: The cumulants of a Gaussian are all zero for $n>2$, and the cumulants of a delta function are all zero for $n>1$.
[^5]: Earlier discussions of shape functions for thrust can be found in Refs. [@Korchemsky:1999kt; @Korchemsky:2000kp].
[^6]: This manipulation is valid when the renormalization scales of the jet and soft function which implement resummation are $\mu_i=\mu_{i}(\tau-p/Q)$, rather than the more standard $\mu_i(\tau)$ used in [@Abbate:2010xh]. Both choices are perturbatively valid, and we have checked that the difference is $0.4\,\%$ for $M_1$, rising to $0.8\,\%$ for $M_5$, and hence is always well within the perturbative uncertainty.
[^7]: Throughout this publication N${}^n$LL corresponds to the same order counting as N${}^n$LL${}^\prime$ in Ref. [@Abbate:2010xh].
[^8]: At N$^3$LL in our most complete theory set up the norm of the distribution and total hadronic cross section are fully compatible within uncertainties, so it does not matter which is used. Following Ref. [@Abbate:2010xh], at N$^3$LL we choose to normalize the distribution with the fixed-order total hadronic cross section since it is faster.
[^9]: On the experimental side, Ref. [@Gehrmann:2009eh] uses only the new JADE data from [@Pahl:2008uc] and OPAL data. In our analysis the new JADE data were excluded, but we utilized a larger dataset that includes ALEPH, OPAL, L3, DELPHI, AMY, TASSO, and older JADE data. This may have a non-negligible impact on the outcome of the comparison.
[^10]: We thank Christoph Pahl for providing details on the use of correlation matrices for moments.
[^11]: One might also construct the correlation matrices using the statistical and systematic errors from the thrust distributions themselves. Bins in distributions are statistically independent and systematic correlations are estimated using the minimal overlap model. Unfortunately this can introduce biases, and we thank Christoph Pahl for clarifying this point.
---
abstract: 'The concept of potential energy landscapes is applied in many areas of science. We experimentally realize a random potential energy landscape (rPEL) to which colloids are exposed. This is achieved by exploiting the interaction of matter with light. The optical set-up is based on a special diffuser, which creates a top-hat beam containing a speckle pattern that is imposed on the colloids. The effect of the speckle pattern on the colloids can be described by a rPEL. Both the speckle pattern and the rPEL are quantitatively characterized. The distributions of both the intensity and the potential energy values can be approximated by Gamma distributions. They can be tuned from exponential to approximately Gaussian with variable standard deviation, which determines the contrast of the speckles and the roughness of the rPEL. Moreover, the characteristic length scales, e.g. the speckle size, can be controlled. Furthermore, by rotating the diffuser, a flat potential can be created such that only radiation pressure is exerted on the particles.'
author:
- Jörg Bewerunge
- 'Stefan U. Egelhaaf'
bibliography:
- 'RPEL\_arxive.bib'
title: Experimental creation and characterization of random potential energy landscapes exploiting speckle patterns
---
Introduction {#sec:introduction}
============
A potential energy surface is a multi-dimensional surface that represents the potential energy of a system as a function of the coordinates and/or other parameters of its constituents, usually atoms, molecules or particles [@Wales2004]. Since its topographical features resemble a landscape with mountain ranges, valleys and passes, frequently it is referred to as a potential energy landscape (PEL), despite typically being multi-dimensional. The PEL defines all the thermodynamic and kinetic properties of a system. The evolution of a system can pictorially be described by the motion of a point on the PEL.
The concept of a PEL is successfully used in many fields of science to determine the properties and behavior of systems ranging from small to polymeric (bio)molecules and from atomic clusters to biological cells [@Wales2004]. PELs are used to describe, e.g., the particle dynamics in dense and crowded systems [@Heuer2008; @Debenedetti2001; @Angell1995; @Stillinger1995], on surfaces [@Jardine2004; @Cordoba2014; @Hsieh2014], between magnetic domains [@Tierno2010], and in inhomogeneous materials [@Chen2000; @Weiss2004; @Hofling2013; @Tolic-Norrelykke2004], as well as the effects of external potentials on the dynamics of ultracold atoms [@White2009; @Robert2010], quantum gases [@Bouyer2010], and Bose-Einstein condensates [@Lye2005; @Fort2005; @Clement2005; @Clement2006], with applications to atom cooling and trapping [@Horak1998]. Further applications include the investigation of the minimum energy conformations of molecules [@Wales2004] and the folding and association of proteins and DNA [@Baldwin1994; @Dill1997; @Durbin1996; @Frauenfelder1991; @Janovjak2007].
Here we experimentally create a PEL to which colloidal particles are exposed and which changes, e.g., their arrangement and dynamics [@Bouchaud1990; @Dean2007; @Sengupta2005; @Isichenko1992; @Goychuk2014; @Banerjee2014; @Wales2004; @Zwanzig1988]. As a model system, it can help to improve our understanding of the underlying principles that govern the behavior in PELs and are common to different systems.
A PEL can experimentally be realized by exploiting the interaction of light with matter [@Ashkin1986; @Ashkin1992]. We focus on large colloidal particles with a refractive index larger than the one of the dispersing liquid. Their interaction with light is usually described by two forces [@Ashkin1986; @Ashkin1992]: a scattering force or ‘radiation pressure’, which pushes the particles along the beam, and a gradient force, which pulls particles towards regions of high intensity. A classical application of this effect is optical tweezers which are used to trap and manipulate individual colloidal particles or groups of particles [@Ashkin1986; @Ashkin1992; @Grier2003; @Dholakia2008]. Rather than tightly focused beams, extended light fields can be used to create a PEL [@Evers2013b]. Light fields of almost any shape have been generated using spatial light modulators [@Hanes2009; @Hanes2012a; @Hanes2012b; @Hanes2013; @Evers2013a; @Evers2013b] or acousto-optic deflectors [@Bowman2013; @Neuman2004; @Juniper2012], while crossed laser beams [@Ackerson1987; @Jenkins2008b; @Dalle-Ferrier2011] and other arrangements [@Bechinger2001; @Mikhael2008; @Mikhael2010] have been used to create specific light fields.
Randomly-modulated intensity patterns, so-called laser speckles [@Dainty1976; @Goodman2007], can be used to create a random potential energy landscape (rPEL). The landscape can be rationalized as a superposition of many independent randomly-distributed optical traps. They have been realized using various approaches: holographic methods to produce one [@Hanes2009; @Hanes2012a; @Hanes2012b; @Hanes2013] and two-dimensional [@Evers2013a; @Mosk2012] patterns, optical fibers for two-dimensional patterns [@Volpe2014] as well as diffusers for one [@Boiron1999; @Fort2005; @Clement2005; @Clement2006], two [@Horak1998; @Lye2005; @Brugger2015] and three-dimensional [@White2009; @Shvedov2010; @Kondov2011; @Douglass2012] patterns.
We use a special diffuser [@Sales2003; @Morris2007; @Sales2012; @Dickey2014] to create a random light field, that is, a fully developed speckle pattern. Due to the light-matter interactions, a colloidal particle exposed to the speckle pattern will experience a rPEL whose local value depends on the light intensity ‘detected’ by the particle. Since the particles are not point-like, the local potential value depends on the intensity distribution over the whole particle volume [@Chowdhury1991; @Loudiyi1992; @Pelton2004; @Jenkins2008b]. We describe the interaction of a colloidal particle with the speckle pattern in analogy to a detector that records the speckle intensity over a finite area. This allows us to quantitatively characterize the statistics of the rPEL. As will be shown, the distribution of energy values can be described by a Gamma distribution, and thus ranges from an exponential to an approximately Gaussian distribution, and the correlation length is set by the particle and speckle sizes. The shape and width of the distribution and the correlation length can hence be tuned over a broad range. The obtained rPEL can be applied to study the spatial arrangement and dynamics of colloidal particles in an external potential [@Bouchaud1990; @Dean2007; @Isichenko1992; @Goychuk2014; @Banerjee2014; @Zwanzig1988; @Hanes2012a; @Hanes2012b; @Hanes2013; @Evers2013a; @Evers2013b]. The chosen diffuser allows the creation of a large light field and thus the simultaneous investigation of many particles, which typically results in excellent statistics. Furthermore, its small and compact design simplifies its alignment, movement and rotation.
Creation of Speckle Patterns {#sec:creation}
============================
The set-up (Fig. \[fig:set\_up\]) allows one to create a top-hat beam with a speckle pattern. Thus, there are intensity fluctuations on a small length scale, i.e. about the size of the colloidal particles. At the same time, the top-hat beam implies a constant intensity on a larger length scale, at least the field of view. This light field is used to impose a rPEL, without any underlying long-range variations, on colloidal particles. The particles are constrained to a quasi two-dimensional plane and can simultaneously be observed with an optical microscope.
![(Color online) Schematic representation of the set-up used to create a speckle pattern, to which colloidal particles are exposed and simultaneously imaged with an optical microscope. The central optical element is a special diffuser (ED). It is illuminated by a parallel Gaussian beam and creates a top-hat beam including a speckle pattern, which is steered to the sample plane of an inverted microscope. See text for details. \[fig:set\_up\]](Speckle_fig1_set_up_v6.pdf){width="1.0\linewidth"}
Diffuser {#sec:specklegen}
--------
![(Color online) (A) Individual and (B) averaged intensity patterns in the sample plane. The average is taken over $120$ images with an individual exposure time of $10$ ms obtained with a rate of $10$ fps while the diffuser is rotated with a constant angular velocity of $20\,^\circ/$s. The intensities in arbitrary units are represented by colors (as indicated). (C) Corresponding intensity profiles along the horizontal dashed line in (A) (dashed line) and azimuthal average of the pattern in (A) (yellow line on the left) and in (B) (purple line on the right). Experimental conditions BE 5$\times$ (Tab. \[tab:speckles\]). Measurements are performed with a beam profiler (Coherent LaserCam HR). Grey rectangles in (A) and (B) and grey lines in (C) indicate a field of view of $179 \times 179~\upmu$m$^2$. []{data-label="fig:profile"}](Speckle_fig2_profile_v7.pdf){width="0.9\linewidth"}
The central optical element of the set-up is a special diffuser (RPC Photonics, Engineered Diffuser EDC-1-A-1r, diameter $25.4$ mm) [@Sales2003; @Sales2012; @Dickey2014]. It is a laser-written, randomly-arranged array of microlenses that vary in radius of curvature and size and cover on average an area $A_{\text{l}} \approx 2000~\upmu$m$^2$. When illuminated with an expanded Gaussian laser beam, individual wavefronts originate from each microlens whose characteristics are designed such that a macroscopically uniform intensity pattern with a small divergence is produced, reflecting a top-hat intensity distribution (Fig. \[fig:profile\]) [@Korotkova2013; @Dickey2014]. Nonetheless, the random distribution as well as the individual variations of the microlenses and the interference of the corresponding wavefronts lead to microscopic intensity variations, i.e. laser speckles (Figs. \[fig:profile\]A, \[fig:IplusUplusConv\]A). The speckle pattern consists of three-dimensional cylindrical high-intensity regions [@Li2012]. Their orientation and position with respect to the beam axis determine the properties of the speckles in the two-dimensional sample plane [@Li2011a; @Li2011b]. Thus the correct imaging of the modified beam into the sample plane of the microscope is important. Moreover, the speckle size is controlled by the diameter of the illuminating laser beam, determining the number of illuminated microlenses. Their number is chosen large enough to ensure a statistically fully developed speckle pattern [@Dainty1976; @Goodman2007]. By changing the position of the beam on the diffuser, statistically equivalent but independent realizations of the speckle pattern can be created.
![(Color online) (A) Speckle pattern filling a field of view of $108 \times 108~\upmu$m$^2$ corresponding to 480 $\times$ 480 pixels with intensities represented as grey levels (as indicated). Experimental conditions BE 5$\times$ (Tab. \[tab:speckles\]). (B) Weight function $D^{\odot}(\mathbf{r})$ (Eq. \[eq:ID\]) representing the volume of a spherical colloidal particle with radius $R=1.4\;\upmu$m. (C) Random potential energy landscape (rPEL) experienced by the particle in the speckle pattern shown in (A) and calculated by convolving the intensity in (A) with $D^{\odot}(\mathbf{r})$. The values of the potential in arbitrary units are represented as grey levels (as indicated). \[fig:IplusUplusConv\]](Speckle_fig3_IplusUplusConv_v6.pdf){width="0.75\linewidth"}
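The construction behind Fig. \[fig:IplusUplusConv\]C, i.e. the convolution of the intensity with the particle weight function, can be sketched numerically. The chord-length form of $D^{\odot}$, the sign convention (potential minima in bright speckles for high-index particles), the unit prefactor and all parameter values below are illustrative assumptions:

```python
import numpy as np

def particle_weight(R_px, size):
    """Weight function D representing a sphere of radius R (in pixels),
    taken here as the projected chord length sqrt(R^2 - r^2), normalized
    to unit integral (an assumed form of the volume average)."""
    y, x = np.indices((size, size)) - size // 2
    r2 = x**2 + y**2
    D = np.where(r2 < R_px**2, np.sqrt(np.clip(R_px**2 - r2, 0.0, None)), 0.0)
    return D / D.sum()

def rpel_from_speckle(I, R_px):
    """rPEL experienced by the particle: U(r) proportional to -(I * D)(r),
    i.e. minus the intensity averaged over the particle volume (the
    gradient force pulls the high-index particle toward high intensity)."""
    D = particle_weight(R_px, I.shape[0])
    # periodic convolution via FFT; ifftshift moves the kernel center to the origin
    return -np.real(np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(np.fft.ifftshift(D))))

rng = np.random.default_rng(0)
I = rng.exponential(1.0, (128, 128))    # stand-in for a measured speckle image
U = rpel_from_speckle(I, R_px=4.0)      # volume averaging smooths the landscape
```

For a measured pattern, `I` would be the recorded speckle image and `R_px` the particle radius in pixels; the averaging over the particle volume is what reduces the roughness of the rPEL relative to the raw speckle pattern.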
Optical Set-up {#sec:setup}
--------------
The speckle pattern strongly depends on the properties of the beam incident on the diffuser. Fully developed speckles require the interference of many polarized monochromatic wavefronts with random phases and amplitudes and thus a large incident beam that illuminates many microlenses. Furthermore, the optics used to image the modified beam into the sample plane, especially their apertures, have to be designed carefully and, for imaging the speckle pattern, also the detector and its pixel size have to be considered.
A solid-state laser (Laser Quantum, Opus 532, wavelength $\lambda = 532$ nm, maximum power $P_{\mathrm{L,max}} = 2.6$ W) provides a monochromatic linearly-polarized Gaussian beam which is slightly elliptical with an axial ratio of $1.12$. The laser beam is steered by two mirrors (M1, M2, \[fig:set\_up\]) to a beam expander (BE, Sill Optics, S6EXZ5076/121) with variable magnification ($1-8\times$) and divergence correction. Using the beam expander, the area $A_{\text{b}}$ of the Gaussian beam hitting the diffuser can be controlled. The diffuser is mounted in a motorized rotation stage (Newport, PR50CC).
The beam leaving the diffuser is divergent (about $1^{\circ}$) and hence collimated by two lenses (L1, L2), where the first lens (L1, Edmund Optics, 1"DCX75, focal length $f_{\mathrm{L1}}=7.5$ cm) is placed a distance $f_{\mathrm{L1}}$ behind the diffuser, followed by the second lens (L2, Thorlabs, 2"PCX75, $f_{\mathrm{L2}}=7.5$ cm) at a distance $d_{12}=16$ cm. This leads to a collimated beam with area $A_{\text{c}}$ (about $1\:$cm$^2$ for a 5$\times$ magnification of the beam expander, i.e. BE 5$\times$, Tab. \[tab:speckles\]) on the aperture stop, after the beam has been introduced into the light path of the inverted microscope by a dichroic mirror (D1, Edmund Optics, NT69-901). The condenser (Nikon, TI-C-LWD) then focuses the beam in the sample plane (Figs. \[fig:profile\]A, \[fig:IplusUplusConv\]A). The lenses (L1, L2) together with the condenser form a telecentric illumination system which collimates the beam and focuses it in the sample plane.
The laser beam is removed from the light path of the microscope by a dichroic mirror (D2, Edmund Optics, NT69-901) which deflects the beam into a beam dump (BD). Furthermore, a notch filter (NF, Edmund Optics NT67-119, optical density OD4 at $\lambda=532$ nm) is introduced in front of the camera.
The colloidal particles are observed using an inverted microscope (Nikon, Eclipse Ti-U), usually with a $20\times$ objective (Nikon, CFI S Plan Fluor ELWD, N.A. $0.45$) and an optional additional magnification of $1.5 \times$, resulting in a field of view of $431 \times 345~\upmu$m$^2$ and $288 \times 230~\upmu$m$^2$, respectively. The images are recorded using an 8-bit CMOS camera (PixeLINK, PL-B741F with $1280 \times 1024$ pixels, if not stated otherwise).
To image the speckle pattern at low laser intensities $P_{\mathrm{L}} \approx 1$ mW, the dichroic mirror (D2) and notch filter (NF) are removed. When examining the speckle pattern, a very dilute sample (less than five particles in the field of view) is used. The sedimented particles help to focus on the sample plane and hence record the relevant plane of the speckle pattern. The presence of a sample also leaves the light path unchanged. This ensures that the recorded speckle pattern represents the intensity distribution to which the particles are exposed.
Characteristics of Speckle Patterns {#sec:specklechar}
===================================
Speckle patterns occur when wavefronts of the same wavelength but with random phases and amplitudes, such as those created by the microlenses, interfere. Speckles are characterized by intensity fluctuations on a small length scale but a uniform intensity on a larger length scale. The statistics of the intensity fluctuations, such as the intensity distribution and spatial correlation, have been investigated in the context of coherent light reflected from rough surfaces or transmitted through diffusers [@Dainty1976; @Dainty1984; @Goodman2007]. The same statistics are expected for the speckle pattern created by the present diffuser [@Sales2003; @Sales2012; @Dickey2014]. Thus below we follow [@Dainty1976; @Dainty1984; @Goodman2007].
Ideal Speckles
--------------
The interference of many monochromatic and linearly polarized wavefronts with random phasors results in a fully developed speckle pattern. In this case, the intensity distribution of the speckle pattern follows an exponential distribution $$p(I) = \frac{1}{\langle I \rangle} \, \exp{\left(- \frac{I}{\langle I \rangle}\right)}
\label{eq:pI}$$ with the mean intensity $\langle I \rangle$ and standard deviation $\sigma= ( \left \langle I^2 \right\rangle - \langle I \rangle^2 )^{1/2}=\langle I \rangle$.
The normalized standard deviation represents the contrast of the speckle pattern, $$c = \frac{\sigma}{\langle I \rangle} = \frac{\sqrt{\langle I^2 \rangle - \langle I \rangle^2}}{\langle I \rangle} \ \text{.}
\label{eq:contrast}$$ The contrast $c$ quantifies the magnitude of the intensity fluctuations. For an exponential distribution, i.e. a fully developed speckle pattern, it reaches its maximum value $c=1$.
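The random-phasor picture behind Eqs. \[eq:pI\] and \[eq:contrast\] can be illustrated by a minimal numerical sketch (a toy model, not part of the experimental analysis; the numbers of field points and phasors are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# One random-phasor sum per field point: n_phasors unit-amplitude phasors
# with independent, uniformly distributed phases (fully developed speckle).
n_points, n_phasors = 100_000, 50
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_points, n_phasors))
I = np.abs(np.exp(1j * phases).sum(axis=1)) ** 2

# For an exponential p(I) (Eq. pI), the contrast c = sigma/<I> equals 1
# and a fraction 1 - 1/e ~ 0.63 of the intensities lies below the mean.
c = I.std() / I.mean()
frac_below_mean = (I < I.mean()).mean()
```

The recovered contrast is close to one, as expected for fully developed speckles.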
The spatial structure of the speckle pattern is characterized by the normalized spatial autocorrelation function of the intensity [@Goodman2007; @Dainty1976; @Dainty1984; @Li2012] $$C_{\mathrm{I}}(\Delta {\mathbf{r}}) = \frac{\left \langle I(\mathbf{r})I(\mathbf{r} + \Delta {\mathbf{r}})\right \rangle}{\left \langle I(\mathbf{r})\right \rangle^2} - 1 \ \text{,}
\label{eq:intautocorr}$$ where $I(\mathbf{r})$ is the intensity at position $\mathbf{r}$ and $\langle \; \rangle$ can represent either an ensemble or a spatial average. A spatially infinite pattern without long-range correlations is self-averaging [@Lifshits1988] and hence the spatial and ensemble averages coincide. To a good approximation this also holds for (finite) experimental speckle patterns, similar to the ones considered here [@Clement2006]. The extent of $C_{\mathrm{I}}(\Delta {\mathbf{r}})$ provides a measure of the correlation area of the speckle pattern, that is the characteristic speckle area $$A_{\mathrm{S}} = \iint\limits_{-\infty}^{\hspace{10pt}\infty} \! C_{\mathrm{I}}(\Delta {\mathbf{r}}) \, \mathrm{d}^2\Delta {\mathbf{r}} \ \text{.}
\label{eq:AS}$$
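For a Gaussian correlation function of the form used below for the present set-up (Eq. \[eq:Gaussian\]), the definition in Eq. \[eq:AS\] can be checked numerically; the speckle area in this sketch is an arbitrary value, chosen only to illustrate the definition:

```python
import numpy as np

A_S = 5.1  # assumed speckle area (arbitrary units^2)

# Sample C_I(dr) = exp(-pi dr^2 / A_S) on a grid wide enough that it has decayed.
L, n = 15.0, 1501
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x)
C_I = np.exp(-np.pi * (X**2 + Y**2) / A_S)

# Eq. (AS): integrating C_I over the plane recovers the speckle area.
dA = (x[1] - x[0]) ** 2
A_S_numeric = C_I.sum() * dA
```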
Integrated Speckles
-------------------
In an experimental situation, the optical elements and especially their apertures as well as the finite detector size have to be considered [@Dainty1976; @Dainty1984; @Goodman2007; @Skipetrov2010]. The finite detector size can be taken into account through the weight function $D(\mathbf{r})$, which represents the spatial sensitivity of the detector. Accordingly, the effective detector area $A_{\mathrm{D}}$ can be calculated as $$A_{\mathrm{D}} = \iint\limits_{-\infty}^{\hspace{10pt}\infty} \! D(\mathbf{r}) \, \mathrm{d}^2\mathbf{r} \ \text{.}
\label{eq:AD}$$
In the following, the (deterministic) autocorrelation function of the weight function $D(\mathbf{r})$ is also required; it is given by $$C_{\mathrm{D}}(\Delta \mathbf{r}) = \frac{1}{A_{\mathrm{D}}} \iint\limits_{-\infty}^{\hspace{10pt}\infty} \! D(\mathbf{r})D(\mathbf{r}{-}\Delta \mathbf{r}) \, \mathrm{d}^2\mathbf{r} \ \text{.}
\label{eq:CD}$$ Based on $C_{\mathrm{D}}(\Delta \mathbf{r})$, the effective measurement area is defined as $$A_{\text{m}} = \frac{A_{\mathrm{D}}}{C_{\mathrm{D}}(\mathbf{0})}
= \frac{A_{\mathrm{D}}^2}{\iint\limits_{-\infty}^{\hspace{10pt}\infty} \! D^2(\mathbf{r}) \, \mathrm{d^2}\mathbf{r}}\ \text{.}
\label{eq:Am}$$
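Equations \[eq:AD\] and \[eq:Am\] can be illustrated for two simple weight functions: a uniform square pixel, for which $A_{\text{m}} = A_{\mathrm{D}}$, and, as a hypothetical smoothed detector, a Gaussian weight (grid size and widths below are arbitrary choices):

```python
import numpy as np

# Grid covering one detector of characteristic size a (arbitrary units).
a, n = 1.0, 801
x = np.linspace(-a, a, n)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2

def effective_areas(D):
    """A_D (Eq. AD) and A_m = A_D^2 / integral of D^2 (Eq. Am)."""
    A_D = D.sum() * dA
    A_m = A_D**2 / ((D**2).sum() * dA)
    return A_D, A_m

# Uniform square pixel: D = 1 inside, 0 outside, hence D^2 = D and A_m = A_D.
D_square = ((np.abs(X) <= a / 2) & (np.abs(Y) <= a / 2)).astype(float)
A_D_sq, A_m_sq = effective_areas(D_square)

# Gaussian weight of width sigma = a/4: analytically A_m = 2 A_D = 4*pi*sigma^2.
D_gauss = np.exp(-(X**2 + Y**2) / (2 * (a / 4) ** 2))
A_D_g, A_m_g = effective_areas(D_gauss)
```

The Gaussian case shows that for non-uniform weights the effective measurement area differs from the nominal detector area.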
A detector centered at position $\mathbf{r}$ registers an intensity $I_{\mathrm{D}}(\mathbf{r})$ that is the integrated intensity taking the weight function $D(\mathbf{r})$ into account, i.e. [@Dainty1976; @Skipetrov2010] $$I_{\mathrm{D}}(\mathbf{r}) = \frac{1}{A_{\mathrm{D}}} \iint\limits_{-\infty}^{\hspace{10pt}\infty} \! D(\Delta \mathbf{r})I(\mathbf{r}{+}\Delta \mathbf{r}) \, \mathrm{d}^2\Delta \mathbf{r}\ \text{.}
\label{eq:ID}$$ The intensity distribution for a finite detector is described to a good approximation by a Gamma distribution $$p(I_{\text{D}}) = \frac{1}{\Gamma(M)} \left(\frac{M}{\langle I_{\text{D}} \rangle}\right)^{M} I_{\text{D}}^{M-1} \, \exp \left(-\frac{M}{\langle I_{\text{D}} \rangle}I_{\text{D}}\right)\ \text{,}
\label{eq:PIP}$$ where $\Gamma$ is the Gamma function and the mean of the detected intensity is identical with the mean of the ideal speckle pattern, i.e. $\langle I_{\text{D}} \rangle = \langle I \rangle$, and the normalized standard deviation or contrast is $c_\text{D} = 1/M^{1/2}$, if noise and correlations between neighboring pixels are absent. The parameter $M$ is given by $$M = \left ( \frac{1}{A_{\mathrm{D}}} \iint\limits_{-\infty}^{\hspace{10pt}\infty} \! C_{\mathrm{I}}(\Delta \mathbf{r}) C_{\mathrm{D}}(\Delta \mathbf{r}) \, \mathrm{d^2}\Delta\mathbf{r} \right )^{-1}
\ \text{,}
\label{eq:M}$$ which depends on the spatial characteristics of the speckle pattern and detector, i.e. the correlation functions of the intensity $C_{\mathrm{I}}(\Delta \mathbf{r})$ (Eq. \[eq:intautocorr\]) and weight function $C_{\mathrm{D}}(\Delta \mathbf{r})$ (Eq. \[eq:CD\]), respectively.
If the effective measurement area $A_{\text{m}}$ is large compared to the speckle area $A_{\text{S}}$, i.e. $A_{\text{m}} \gg A_{\text{S}}$, many speckles contribute to the detected intensity $I_{D}(\mathbf{r})$. Then $M$ represents the (large) number of detected speckles, $M \approx A_{\text{m}} / A_{\text{S}} \gg 1$ [@Dainty1976; @Goodman2007; @Skipetrov2010], and $p(I_{\text{D}})$ approaches a Gaussian distribution with mean $\langle I \rangle$ and normalized standard deviation $c_{\text{D}} = 1/M^{1/2}$. In the opposite limit of a very small effective measurement area, $A_{\text{m}} \ll A_{\text{S}}$, only one speckle is detected. Thus $M \to 1$ and $p(I_{\text{D}})$ approaches the exponential distribution (Eq. \[eq:pI\]). In this case, neighboring detectors might no longer be independent. If, however, the effective measurement area and speckle area are similar ($A_{\text{m}} \approx A_{\text{S}}$), $M$ can only be calculated (numerically) if $C_{\mathrm{I}}(\Delta \mathbf{r})$ and $D(\mathbf{r})$ are known. Due to the complex effects of the optical components on the speckle pattern, this is often not the case and approximations must be used.
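The limiting behaviour of the Gamma distribution (Eq. \[eq:PIP\]) can be sketched by a small simulation in which a 'detector' simply averages $M$ independent exponentially distributed speckle intensities; this ignores the spatial correlations that enter Eq. \[eq:M\] and is only meant to illustrate the statistics:

```python
import numpy as np

rng = np.random.default_rng(2)
mean_I, M = 1.0, 8   # assumed mean intensity and number of averaged speckles

# Each sample of I_D is the mean of M exponential intensities (Eq. pI);
# the result is Gamma-distributed (Eq. PIP) with contrast c_D = 1/sqrt(M).
I = rng.exponential(mean_I, size=(500_000, M))
I_D = I.mean(axis=1)

c_D = I_D.std() / I_D.mean()
```

For $M = 1$ this reduces to the exponential case, while for large $M$ the distribution becomes approximately Gaussian.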
In the following we apply these relationships to different detectors and thus different $D({\bf r})$ (the cases are indicated by superscripts): circular ($D^{\text{\textcolor{black}{\FilledSmallCircle}}}({\bf r})$) and square ($D^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}(\mathbf{r})$) detector pixels, which are also subjected to smoothing ($D^{\text{\textcolor{black}{\FilledSquareShadowC}}}({\bf r})$) and binning ($D^{\OldBoxplus}({\bf r})$), as well as spherical ($D^{\odot}({\bf r})$) and cubic ($D^{\boxdot}({\bf r})$) particles acting as ‘detectors’.
### Detector Pixel {#sec:pixel}
In our experiments, the speckle patterns are detected by uniform square pixels. Their weight function is $$D^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}(\mathbf{r}) = \left \{
\begin{array}{c@{\quad \quad}l}
1 & {\text{inside the pixel}} \\
0 & {\text{outside the pixel}}
\end{array} \right.
\label{eq:D_pixel}$$ and hence $A_{\text{m}}^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}} = A_{\mathrm{D}}^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}$ are equal to the pixel area. Based on this weight function, $C_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\Delta \mathbf{r})$ (Eq. \[eq:CD\]) and $I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\mathbf{r})$ (Eq. \[eq:ID\]) can be calculated. Furthermore, it is expected that $p(I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}})$ can be approximated by a Gamma distribution (Eq. \[eq:PIP\]). However, to calculate the parameter $M^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$ (Eq. \[eq:M\]), also $C_{\text{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\Delta \mathbf{r})$ (Eq. \[eq:intautocorr\]) is required.
If the top-hat beam is approximated by a Gaussian beam, the corresponding result for a Gaussian beam detected by uniform square pixels [@Skipetrov2010; @Goodman2007], $$\label{eq:Gaussian}
C_{\text{I}}^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}(\Delta \mathbf{r}) = \exp{\left ( - \frac{\pi \Delta \mathbf{r}^2}{A_{\text{s}}} \right )} \ \text{, \;\;}$$ can be used. Then, $M^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}$ is given by $$\begin{aligned}
M^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}} = &\Bigg[\sqrt{\frac{A_{\text{S}}}{A_{\text{m}}^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}}} \operatorname{erf}\left(\sqrt{\frac{\pi A_{\text{m}}^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}}{A_{\text{S}}}}\right)\nonumber \\
&~-\left(\frac{A_{\text{S}}}{\pi A_{\text{m}}^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}}\right)\left\{1-\exp\left(-\frac{\pi A_{\text{m}}^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}}{A_{\text{S}}}\right)\right\}\Bigg]^{-2} \; \text{.}
\label{eq:M2}\end{aligned}$$ For a Gaussian beam detected by uniform circular pixels $$\begin{aligned}
M^{\text{\textcolor{black}{\FilledSmallCircle}}} = &\frac{A_{\text{m}}^{\text{\textcolor{black}{\FilledSmallCircle}}}}{A_{\text{S}}} \Bigg[1 - \exp{\left (-\frac{2A_{\text{m}}^{\text{\textcolor{black}{\FilledSmallCircle}}}}{A_{\text{S}}} \right )}\nonumber \\
&~\times \left \{ I_0 \left ( \frac{2A_{\text{m}}^{\text{\textcolor{black}{\FilledSmallCircle}}}}{A_{\text{S}}} \right ) + I_1 \left ( \frac{2A_{\text{m}}^{\text{\textcolor{black}{\FilledSmallCircle}}}}{A_{\text{S}}} \right ) \right \} \Bigg]^{-1}
\; \text{,}
\label{eq:Mcirc}\end{aligned}$$ where $I_0$ and $I_1$ are modified Bessel functions of the first kind and orders zero and one, respectively. Further geometries have been considered [@Dainty1976; @Clement2006; @Skipetrov2010; @Li2012], but are less appropriate for the present situation.
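Equations \[eq:M2\] and \[eq:Mcirc\] are straightforward to evaluate numerically; the sketch below (using the exponentially scaled Bessel functions to avoid overflow at large arguments) checks the two limits discussed above, $M \to 1$ for $A_{\text{m}} \ll A_{\text{S}}$ and $M \approx A_{\text{m}}/A_{\text{S}}$ for $A_{\text{m}} \gg A_{\text{S}}$:

```python
import numpy as np
from scipy.special import erf, i0e, i1e

def M_square(ratio):
    """Eq. (M2) for a Gaussian beam and square pixels; ratio = A_m / A_S."""
    s = 1.0 / ratio  # A_S / A_m
    term = (np.sqrt(s) * erf(np.sqrt(np.pi / s))
            - (s / np.pi) * (1.0 - np.exp(-np.pi / s)))
    return term ** -2.0

def M_circle(ratio):
    """Eq. (Mcirc) for circular pixels; i0e(x) = exp(-x) I_0(x), etc."""
    x = 2.0 * ratio
    return ratio / (1.0 - (i0e(x) + i1e(x)))
```

Both expressions approach one for small ratios and grow proportionally to $A_{\text{m}}/A_{\text{S}}$ for large ratios, as expected.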
To check the suitability of the above equations for our experimental situation, in particular the approximation of the top-hat beam by a Gaussian beam, these relations will be compared to our experimental results in \[sec:speckles\].
### Colloidal Particle {#sec:potential}
Colloidal particles are susceptible to electromagnetic radiation if their refractive index differs from that of the suspending liquid [@Ashkin1986; @Ashkin1992]. Since the particles are not point-like, their response depends on the intensity integrated over their volume [@Chowdhury1991; @Loudiyi1992; @Pelton2004; @Jenkins2008b]. This is analogous to the extended detector described above, except that the particle’s susceptibility (or polarizability) rather than the detector efficiency is relevant. It is proportional to the particle volume traversed by the beam. Since the speckles are oriented along the beam and their extent in the beam direction is much larger than in the sample plane [@Goodman2007; @Li2012], the projection of the particle volume along the beam direction is considered. The (projected) particle volume is taken into account through the weight function $D^{\odot}(r)$. For a homogeneous spherical particle the normalized weight function is $$D^{\odot}(r)
= \begin{cases}\frac{1}{R}\sqrt{{R}^{2}-r^{2}} & \mbox{if } r\leq R\\ 0 & \mbox{if } r>R\end{cases} \, \text{,}
\label{eq:D_particle}$$ and shown in \[fig:IplusUplusConv\]B. To obtain its absolute value, material specific parameters describing the light–particle interaction have to be considered [@Ashkin1986; @Ashkin1992; @Pelton2004] and summarized in a ($r$-independent) prefactor. Independent of this constant prefactor, the effective measurement area (Eq. \[eq:Am\]), or rather effective particle area, becomes $$\label{eq:Aeffcirc}
A_{\text{m}}^{\odot} = \frac{8\pi}{9} R^2 \, \text{.}$$ The (deterministic) autocorrelation function of the weight function $D^{\odot}(r)$, that is $C_{\text{D}}^{\odot}(\Delta \mathbf{r})$ (Eq. \[eq:CD\]), can only be determined numerically [@Pelton2004]. Finally, taking into account the particle volume through $D^{\odot}(r)$, the integrated intensity $I_{\text{D}}^{\odot}({\mathbf{r}})$ can be calculated (Eq. \[eq:ID\]) [@Chowdhury1991; @Loudiyi1992; @Pelton2004].
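The result $A_{\text{m}}^{\odot} = (8\pi/9) R^2$ follows from inserting Eq. \[eq:D\_particle\] into Eqs. \[eq:AD\] and \[eq:Am\]; a short numerical check (the radius is chosen to match the particles used later, but the result is independent of this choice):

```python
import numpy as np

R = 1.4                # particle radius (um), as for the particles used later
n = 200_000
r = (np.arange(n) + 0.5) * (R / n)   # midpoint grid on [0, R]
dr = R / n
D = np.sqrt(R**2 - r**2) / R         # Eq. (D_particle) for r <= R

# Radial integrals with d^2r = 2*pi*r dr.
A_D = np.sum(D * 2.0 * np.pi * r) * dr                  # Eq. (AD): 2*pi*R^2/3
A_m = A_D**2 / (np.sum(D**2 * 2.0 * np.pi * r) * dr)    # Eq. (Am): 8*pi*R^2/9
```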
Exploiting the analogy between a colloidal particle and a detector, we expect that the intensity distribution as experienced by the particle, i.e. the rPEL, can be approximated by a Gamma distribution, similar to Eqs. \[eq:PIP\] and \[eq:M\], but its parameter $M$ has to be determined. This analogy is explored and experimentally tested in \[sec:rPEL\].
Results and Discussion {#sec:results}
======================
Speckle Pattern {#sec:speckles}
---------------
Different speckle patterns are created by changing the size of the beam that illuminates the diffuser using the variable beam expander. Magnifications between $3 \times$ and $7 \times$ are possible, yielding beam areas on the diffuser of $0.3$ cm$^2$ $\lesssim A_{\text{b}} \lesssim 1.7$ cm$^2$ (\[tab:speckles\]). Both stationary speckle patterns and time-varying speckle patterns, created by rotating the diffuser, are investigated, and the data are compared to the relations presented above (\[sec:pixel\]) to test their applicability to the present experimental situation.
{width="1.0\linewidth"}
### Stationary Speckle Pattern
![(Color online) (left) Speckle patterns $I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}({\mathbf{r}})$ created with different beam areas $A_{\text{b}}$ due to different magnifications of the beam expander (as indicated, Tab. \[tab:speckles\]). Intensities are represented as grey levels (scale at bottom). (right) Corresponding power spectral densities with their values represented by colors (logarithmic scale in arbitrary units at bottom). For clarity the lowest frequencies are shifted to the center. The spurious high values in $x$ and $y$ direction through the origin are caused by boundary effects in the Fourier transform. \[fig:IvsPSD\] ](Speckle_fig4_IvsPSD_v7.pdf){width="1.0\linewidth"}
The observed intensities $I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}({\mathbf{r}})$ (Figs. \[fig:IplusUplusConv\]A, \[fig:IvsPSD\], left) resemble speckle patterns with their characteristic intensity fluctuations. A qualitative inspection indicates a decreasing speckle size with increasing beam size. The magnitude of the intensity fluctuations is quantified by the intensity distribution $p(I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}})$ (\[fig:PI\]) and the contrast $c$ (\[tab:speckles\]). The observed $p(I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}})$ are well described by an exponential distribution (Eq. \[eq:pI\]), which suggests fully developed speckles. This is consistent with the fact that all beam areas $A_{\text{b}}$ are much larger than the microlens area $A_{\text{l}}$ and hence many microlenses ($A_{\text{b}}/A_{\text{l}} > 10^4$) are illuminated and, in addition, the detector pixels are much smaller than the speckle area, i.e. $A_{\text{m}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}} \ll A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$. Only small deviations from an exponential distribution are observed. The smallest intensity occurs with slightly enhanced probability. This is attributed to the finite exposure time and sensitivity of the camera, which limit the minimum detectable intensity: if too few photons are registered within the exposure time, the pixel records zero intensity, which therefore occurs slightly too often. The highest intensities are also recorded slightly too frequently, due to noise combined with the limited dynamic range of the 8-bit camera relative to the large range of intensity values. Still, the chosen exposure time and laser power provide an optimal compromise.
![(Color online) Normalized intensity distributions $\langle I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}} \rangle \; p(I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}})$ as observed in experiments with different beam areas $A_{\text{b}}$ due to different magnifications of the beam expander (as indicated, \[tab:speckles\]). Each symbol is the average of four data points. The line represents an exponential distribution (Eq. \[eq:pI\]). []{data-label="fig:PI"}](Speckle_fig5_PI_v7.pdf){width="0.9\linewidth"}
The normalized standard deviation of $p(I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}})$ or contrast $c$ (Eq. \[eq:contrast\]) is found to be close to one (\[tab:speckles\]), which is consistent with fully developed speckles. However, the contrast is slightly larger than one. This might be due to the top-hat rather than Gaussian beam profile [@Baykal2013] and to additional noise, for example contributed by the camera [@Song2013]. Depolarization and scattering by the (very few) particles in the sample plane might also contribute. The increase of $c$ with decreasing beam area $A_{\text{b}}$ is attributed to slight changes in the divergence of the beam that have not been corrected in this series.
![(Color online) Intensity correlation function $C_{\text{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$ as a function of $\Delta r$ in $x$ (light green triangle, left) and $y$ (dark green triangle, right) directions as observed in the experiment BE 5$\times$ (Tab. \[tab:speckles\]). Predictions for a Gaussian beam detected by square pixels $C_{\text{I}}^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}(\Delta r)$ (Eq. \[eq:Gaussian\]) are fitted to the two data sets (solid lines). The length at which $C_{\text{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$ decays to $1/{\text{e}}$ (indicated for the $x$ direction) is related to the speckle area $A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$. []{data-label="fig:CI_deltar"}](Speckle_fig6_CI_deltar_v7.pdf){width="0.9\linewidth"}
To quantify the characteristic length scale of the fluctuations, i.e. the speckle area $A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$ (Eq. \[eq:AS\]), the spatial intensity correlation function $C_{\text{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\Delta {\mathbf{r}})$ (Eq. \[eq:intautocorr\]) is determined from the intensity $I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}({\mathbf{r}})$. It is calculated separately in the $x$ and $y$ directions (\[fig:CI\_deltar\]) to account for the slightly elliptical beam (\[sec:setup\]). The prediction for a Gaussian beam detected by a square pixel $C_{\text{I}}^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}(\Delta \mathbf{r})$ (Eq. \[eq:Gaussian\]) is fitted to the experimental data sets. Despite the approximation of the top-hat beam by a Gaussian beam, it describes the data very well. The small deviations at large $\Delta \mathbf{r}$ indicate some non-Rayleigh statistics. This is also suggested by the slightly too large contrast $c$ (\[tab:speckles\]) and small deviations of the intensity probability distribution $p(I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}})$ from the ideal exponential case, and has been observed previously [@Bromberg2014]. Furthermore, there are small fluctuations at large $\Delta r$ which are attributed to the circular apertures. The lengths at which $C_{\text{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\Delta \mathbf{r})$ decays to $1/{\text{e}}$, $\Delta r_x$ and $\Delta r_y$ (\[fig:CI\_deltar\]), provide a measure of the speckle sizes in the $x$ and $y$ directions, respectively, and the speckle area $A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}} = \pi \Delta r_x \Delta r_y$. 
They indicate slightly elliptical speckles with an axial ratio of about $1.1$, consistent with the elliptical beam (Sec. \[sec:setup\]).
For an effective measurement area much smaller than the speckle area ($A_{\text{m}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}} \ll A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$), hence well above the Nyquist limit $A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}} \approx 2A_{\text{m}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}} = 2\:\text{px}$, equivalent information can be obtained from the width of the power spectral density (Fig. \[fig:IvsPSD\], right), which is inversely proportional to the width of the spatial correlation function [@Kirkpatrick2007; @Kirkpatrick2008]. With decreasing speckle area $A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$, indeed the peak at low frequencies becomes smaller and broader (\[fig:IvsPSD\], top to bottom), consistent with the findings based on $C_{\text{I}}^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}(\Delta \mathbf{r})$.
### Time-varying Speckle Pattern {#sec:rotSpec}
![(Color online) Angular intensity correlation function $C_{\text{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\Delta \phi)$ based on speckle patterns obtained with orientations of the diffuser that differ by $\Delta \phi$ (Eq. \[eq:ImageCorr\]). The red line represents the calculation based on Eq. \[eq:CI\_phi\_r\]. Experimental conditions BE 5$\times$, which implies a speckle area $A_\text{S}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}} = 5.1\:\upmu$m (Tab. \[tab:speckles\]), a square field of view with lateral length $L_\text{v} = 202.2\:\upmu$m and an exposure time of $1.1$ms. \[fig:StepCorrelations\] ](Speckle_fig7_StepCorrelations_v7.pdf){width="0.9\linewidth"}
Our set-up offers the possibility to rotate the diffuser around the optical axis. While a rotation does not change the intensity statistics, the actual speckle pattern is changed and represents another realization, provided the rotation angle $\Delta \phi$ was large enough. The correlation between two speckle patterns, $I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\phi, {\mathbf{r}})$ and $I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\phi{+}\Delta \phi, {\mathbf{r}})$ is quantified by the angular correlation function, namely $$C_{\mathrm{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\Delta \phi) = \frac{\left \langle I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\phi, \mathbf{r}) I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\phi{+}\Delta \phi,\mathbf{r}) \right \rangle}{\left \langle I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}\right \rangle^2} - 1 \ \text{,}
\label{eq:ImageCorr}$$ where $\langle I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}} \rangle$ is independent of the angle $\phi$ and $\langle \; \rangle$ represents an average over all pixels, i.e. all ${\bf r}$, and realizations. As expected, $C_{\text{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\Delta \phi)$ decreases with increasing $\Delta \phi$ (\[fig:StepCorrelations\]). Only very small correlations, about $10\:\%$ or less, are observed beyond $\Delta \phi_{\text{c}} \approx 2^\circ$. Thus, rotations with $\Delta \phi \gg \Delta \phi_{\text{c}}$ are expected to result in essentially uncorrelated realizations of the speckle pattern.
The definition of $C_{\mathrm{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\Delta \phi)$ is analogous to the spatial intensity correlation function $C_{\mathrm{I}}(\Delta {\bf r})$ (Eq. \[eq:intautocorr\]), which can be used to calculate $C_{\mathrm{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\Delta \phi)$. A rotation of the diffuser by $\Delta \phi$ implies a displacement of the speckle pattern by $\boldsymbol\Delta \boldsymbol\phi \times {\bf r}$, which depends on the distance $r = | {\bf r} |$ from the optical axis around which the speckle pattern is rotated. Thus, $I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\phi, \mathbf{r}) I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\phi{+}\Delta \phi,\mathbf{r}) = I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\phi, \mathbf{r}) I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\phi,\mathbf{r}{-}\boldsymbol\Delta \boldsymbol\phi{\times}{\bf r})$, which relates $C_{\mathrm{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\Delta \phi)$ to $C_{\mathrm{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\Delta {\bf r})$. Averaging over a circular field of view with radius $R_{\text{v}}$ and square pixels, and using the correlation function for a Gaussian beam detected by square pixels, $C_{\text{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\Delta r)$ (Eq. \[eq:Gaussian\]), yields $$\begin{aligned}
C_{\mathrm{I}}^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}(\Delta \phi)
=& \frac{1}{\pi R_{\text{v}}^2} \int_0^{R_{\text{v}}}{C_{\text{I}}^{{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}}(r \Delta \phi) \; 2 \pi r \; {\mathrm{d}}r} \nonumber \\
=& \frac{A_{\text{s}}}{\pi R_{\text{v}}^2} \; \frac{1}{\Delta \phi^2} \left \{ 1 - \exp{\left (-\frac{\pi R_{\text{v}}^2}{A_{\text{s}}} \Delta \phi^2 \right )} \right \} \; .
\label{eq:CI_phi_r}\end{aligned}$$ For a square field of view with size $L_\text{v}^2$ and square pixels, the corresponding relation involves the error function. However, it can be approximated by a circular field of view, i.e. \[eq:CI\_phi\_r\], with an effective radius $R_\text{v} \approx 0.57 \, L_\text{v}$, which corresponds to a slightly larger effective area. This prediction is confirmed by the experimental data (\[fig:StepCorrelations\]).
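Equation \[eq:CI\_phi\_r\] can be cross-checked against a direct numerical average of the Gaussian correlation function over the field of view. In this sketch the parameter values are taken from experiment BE 5$\times$, while the rotation angle is an arbitrary test value:

```python
import numpy as np

A_s = 5.1                  # speckle area in um^2 (BE 5x, Tab. speckles)
R_v = 0.57 * 202.2         # effective radius in um for the square field of view

def C_angular(dphi):
    """Analytic angular correlation, Eq. (CI_phi_r); dphi in radians."""
    a = np.pi * R_v**2 * dphi**2 / A_s
    return (1.0 - np.exp(-a)) / a

# Direct radial average of C_I(r * dphi) over a circular field of view
# (first line of Eq. CI_phi_r), evaluated with a midpoint rule.
dphi = np.radians(0.5)     # arbitrary test angle
n = 100_000
r = (np.arange(n) + 0.5) * (R_v / n)
dr = R_v / n
C_num = (np.sum(np.exp(-np.pi * (r * dphi) ** 2 / A_s) * 2.0 * np.pi * r) * dr
         / (np.pi * R_v**2))
```

With these parameters the analytic curve also reproduces the observation that the correlation has dropped to roughly $10\:\%$ at $\Delta \phi \approx 2^\circ$.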
To fully characterize time-varying speckle patterns, the angular velocity has to be considered. This is similar to the situation in speckle contrast analysis, imaging applications and light scattering [@Briers1996; @Duncan2008; @Scheffold2012; @Briers2013]. If the diffuser is rotated, and hence the speckle pattern changed, faster than the particles can follow, i.e. faster than their relaxation time allows, the colloidal particles effectively experience a temporally averaged and hence microscopically flat intensity pattern instead of a speckle pattern (\[fig:profile\]). Then only time-averaged intensities are of interest. Both averages over many images with short exposure times and individual images with long exposure times are considered. With appropriate camera parameters, both procedures yield equivalent time-averaged intensities [@Scheffold2012]. The average over many realizations indeed shows significantly reduced fluctuations compared to the static speckle pattern (\[fig:profile\]A,B). Nevertheless, a small modulation remains, even in the azimuthal average (\[fig:profile\]C).
Random Potential Energy Landscape {#sec:rPEL}
---------------------------------
Having investigated the speckle patterns, we now consider their effect on spherical colloidal particles that are characterized by the weight function $D^{\odot}(\mathbf{r})$ (Eq. \[eq:D\_particle\], \[sec:potential\]). The effect of a speckle pattern can be described by an external potential $U({\bf r})$, the rPEL (such as the one shown in Fig. \[fig:IplusUplusConv\]C). We will now determine the properties of $U({\bf r})$.
### Time-averaged local particle density
The speckle pattern affects the distribution of particles. This effect is quantified by the time-averaged local particle density $\rho({\mathbf{r}})$, which is determined from the particle locations [@Crocker1996]. The density $\rho({\mathbf{r}})$ for a (quasi) two-dimensional layer of particles with a mean surface fraction $\langle \rho \rangle = 0.25$, i.e. about $1200$ particles in a field of view of $171\times171~\upmu\text{m}^2$, is shown in Fig. \[fig:rho\_xy\]. A qualitative inspection reveals that $\rho({\mathbf{r}})$ resembles some of the characteristics of the rPEL $U({\mathbf{r}})$ (\[fig:IplusUplusConv\]C). It exhibits random fluctuations with a comparable characteristic length scale, but also longer-ranged correlations. Furthermore, the maxima of $\rho({\mathbf{r}})$ are more pronounced while the saddle points and minima are blurred. Within reasonable measurement times, the low $\langle \rho \rangle$ and the strongly disordered potential hence do not provide sufficient statistics to obtain space-resolved information on $\rho({\mathbf{r}})$ and thus the potential $U({\mathbf{r}})$. This suggests investigating samples with larger $\langle \rho \rangle$. However, a straightforward determination of $U({\mathbf{r}})$ from $\rho({\mathbf{r}})$ through the Boltzmann distribution requires that particle–particle interactions can be neglected and thus that the sample is dilute. In more concentrated systems, the determination of $U({\mathbf{r}})$ requires more involved methods, e.g. liquid-state theory [@Sengupta2005] or inverse Monte Carlo simulations [@Bahukudumbi2007]. This is beyond the scope of the present work.
![Time-averaged local particle density $\rho({\mathbf{r}})$ of a (quasi) two-dimensional layer of spherical polystyrene particles with sulfonated chain ends with radius $R=1.4\; \upmu$m, polydispersity 3.2 % and mean surface density $\langle \rho \rangle = 0.25$ in a speckle pattern (BE 5$\times$, Tab. \[tab:speckles\]) created using a moderate laser power ($P_{\text{L}} = 1640$ mW). About $37\,000$ images at $3.75$ fps (AVT, Pike F032B) were recorded and averaged. Densities are represented as grey levels (logarithmic scale in arbitrary units). []{data-label="fig:rho_xy"}](Speckle_fig8_rhoxy_v6.pdf){width="0.8\linewidth"}
### Convolution with the Weight Function of a Spherical Particle
To avoid this complication, we investigate the convolution of the speckle pattern with the weight function of a spherical particle, $D^{\odot}(\mathbf{r})$, and, instead of the full $U({\bf r})$, determine the statistics of $U({\bf r})$, namely the distribution of its values, the magnitude of its fluctuations and its correlation area. In the case of a particle exposed to a light field, $D^{\odot}(\mathbf{r})$ describes the susceptibility of the particle to light (Eq. \[eq:D\_particle\]), but is formally identical to a detector efficiency. The convolution of $D^{\odot}(\mathbf{r})$ with the intensity pattern $I({\bf r})$ yields the total intensity $I_{\text{D}}^\odot({\mathbf{r}})$ that is ‘detected’ by a particle at position ${\bf r}$ (Eq. \[eq:ID\]). Due to the light–matter interaction [@Ashkin1986; @Ashkin1992; @Pelton2004; @Bohren2004; @Rohrbach2005; @Bonessi2007], $I_{\text{D}}^\odot({\mathbf{r}})$ represents an external potential $U({\mathbf{r}})=I_{\text{D}}^\odot({\mathbf{r}})$ imposed on the particle, that is the rPEL. Since $D^{\odot}(r)$ takes into account the volume of the particle, the extended colloidal particle at position $\mathbf{r}$ in the speckle pattern $I({\mathbf{r}})$ can be regarded as a point-like particle in the potential $U({\bf r})=I_{\text{D}}^\odot({\mathbf{r}})$. This procedure and a typical $U({\mathbf{r}})$ are illustrated in \[fig:IplusUplusConv\]. It has already successfully been applied to micron-sized colloidal particles in a one-dimensional rPEL; experiments and simulations yielded consistent results [@Hanes2012a].
Potentials $U({\mathbf{r}})$ obtained by convolving experimental speckle patterns $I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}({\mathbf{r}})$ with the weight function $D^{\odot}(\mathbf{r})$ (Eqs. \[eq:ID\], \[eq:D\_particle\]) [@Loudiyi1992; @Pelton2004; @Jenkins2008; @Hanes2012a] are quantitatively investigated in the following. This allows us to test whether $p(I_{\text{D}}^\odot)=p(U)$ can be described by a Gamma distribution (Eq. \[eq:PIP\]) and to find an approximation for the parameter $M$.
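The convolution step can be sketched as follows. The speckle pattern here is a synthetic, pixel-wise uncorrelated exponential pattern — a strong simplification of a real speckle image, with an arbitrarily chosen particle radius in pixels — so only the qualitative effect, the reduced contrast of $U$ relative to $I$, should be taken from this:

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(3)

# Synthetic 'speckle' pattern: exponential intensities, spatially
# uncorrelated on the pixel grid (simplification of a real speckle image).
I = rng.exponential(1.0, size=(512, 512))

# Spherical weight function D(r) = sqrt(R^2 - r^2)/R on the pixel grid
# (Eq. D_particle); R is an assumed particle radius in pixels.
R = 6
y, x = np.mgrid[-R:R + 1, -R:R + 1]
D = np.where(x**2 + y**2 <= R**2,
             np.sqrt(np.clip(R**2 - x**2 - y**2, 0, None)) / R, 0.0)
D /= D.sum()   # normalize so that <U> = <I>

# Convolution yields the potential 'seen' by the particle (Eq. ID).
U = convolve(I, D, mode='wrap')

c_I = I.std() / I.mean()
c_U = U.std() / U.mean()   # contrast reduced by averaging over the particle
```

As in Fig. \[fig:IplusUplusConv\], the convolution preserves the mean but strongly reduces the fluctuations.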
On a qualitative level, $U({\mathbf{r}})$ appears washed out compared to the speckle pattern (\[fig:IplusUplusConv\]) due to the convolution with $D^{\odot}(\mathbf{r})$. The magnitude of the fluctuations is reduced and their characteristic length scale is increased, in particular if the particle is large. This is consistent with experimental observations (Fig. \[fig:rho\_xy\]).
The distribution of potential values $p(U)$ depends on the speckle area $A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$ (Eq. \[eq:AS\]) and the effective particle area $A_{\text{m}}^{\odot} = (8\pi/9) R^2$ (Eq. \[eq:Aeffcirc\]). If $A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$ is kept constant, the effect of the particle area $A_{\text{m}}^{\odot}$ on $p(U)$ can be studied (\[fig:PURconv\]A). We consider particles with radii in the range $0.3~\upmu$m $\le R \le 5.0~\upmu$m, which are large enough to be observed with the microscope. A small $R$, i.e. a small $A_{\text{m}}^{\odot}$, leads to an almost exponential distribution, which develops into an approximately Gaussian distribution as $A_{\text{m}}^{\odot}$ increases. Correspondingly, for constant $A_{\text{m}}^{\odot}$ but decreasing $A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$, a similar transition from an exponential to an approximately Gaussian distribution is observed (Fig. \[fig:PURconv\]B). More generally, similar distributions are obtained for comparable $A_{\text{m}}^{\odot}/A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$ (Fig. \[fig:PURconv\]C); thus $p(U)$ appears to depend only on this ratio, with its shape changing from almost exponential to approximately Gaussian as $A_{\text{m}}^{\odot}/A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$ increases.
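To make the role of the shape parameter concrete, the following sketch (our illustration, not part of the original analysis) evaluates a Gamma distribution assumed to be parameterized by shape $M$ and mean $\langle U\rangle$, as a stand-in for Eq. \[eq:PIP\], using `scipy.stats.gamma`:

```python
import numpy as np
from scipy import stats

# Gamma distribution with shape M and mean <U> (scale = <U>/M) as a stand-in
# for Eq. [eq:PIP]: M = 1 is a pure exponential; large M approaches a Gaussian.
mean_U = 1.0
for M in (1, 4, 64):
    p = stats.gamma(a=M, scale=mean_U / M)
    # the standard deviation <U>/sqrt(M) shrinks as M grows,
    # i.e. the distribution narrows toward a Gaussian shape
    print(M, p.mean(), p.std())
```

The mean stays fixed at $\langle U\rangle$ while the standard deviation falls as $1/\sqrt{M}$, which is the exponential-to-Gaussian crossover described above.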
For all $A_{\text{m}}^{\odot}$ and $A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$, a Gamma distribution (Eq. \[eq:PIP\]) is fitted to the data. The Gamma distribution describes the distribution of potential values $p(U)$ well and only depends on $A_{\text{m}}^{\odot}/A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$. Only small deviations are observed, similar to those reported before [@Ducharme2007]. They are attributed to the approximations leading to the Gamma distribution [@Goodman2007], e.g. a Gaussian instead of a top-hat beam and the presence of finite optical components and detector pixels, and a possible effect of the (very few) particles on the speckle pattern (\[sec:speckles\]).
![(Color online) Parameter $M^{-1}$, which quantifies the fluctuations of the potential $U({\bf r})$, as a function of the ratio of the effective particle area $A_{\text{m}}$ and the speckle area $A_{\text{S}}$, for different conditions (as indicated in \[fig:PURconv\]) as well as (magenta $\times$) cubic (instead of spherical) particles with different sizes, i.e. effective particle areas $A_{\text{m}}^\boxdot$, in the experimental condition BE 5$\times$, (red +, green +) spherical particles in an intensity pattern which has been smoothed over $5 \times 5$ and $10 \times 10$ pixels, respectively, and (blue asterisk) a point-like particle, i.e. no convolution, in an intensity pattern which has been binned over different numbers of pixels ($1\times1~\text{to}~50\times50$). The dashed grey and solid black lines represent predictions for a Gaussian beam and a square (Eq. \[eq:M2\]) and circular (Eq. \[eq:Mcirc\]) detector, respectively. []{data-label="fig:1overM"}](Speckle_fig10_1overM_v7.pdf){width="0.93\linewidth"}
The fit of the Gamma distribution to the data yields the parameter $M$ (Eq. \[eq:PIP\], Fig. \[fig:PURconv\]), which is related to the contrast $c$ and standard deviation $\sigma$. In the case of the potential, $\sigma$ represents the magnitude of the fluctuations or ‘roughness’ of the random potential $U({\bf r})$. Thus we consider $M^{-1} \sim \sigma^2$. Independent of the specific particle and speckle sizes, $M^{-1}$ only depends on $A_{\text{m}}^{\odot}/A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$ and decreases with increasing $A_{\text{m}}^{\odot}/A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$ (\[fig:1overM\]). Thus, the magnitude of the fluctuations only depends on the number of speckles that interact with a particle.
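The contrast scaling can be reproduced in a toy calculation. The sketch below assumes delta-correlated, exponentially distributed pixel intensities (so the speckle area is a single pixel) and models the particle's weight function by a square $n \times n$ average; both are simplifications of the experiment:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
# Toy fully developed speckle: uncorrelated, exponentially distributed pixels,
# so the speckle area is one pixel and the contrast c = 1 (M^{-1} = 1).
I = rng.exponential(size=(512, 512))

def m_inverse(U):
    """M^{-1} = sigma^2 / <U>^2, the squared contrast of the 'potential'."""
    return U.var() / U.mean() ** 2

# Averaging over an n x n 'particle' (A_m / A_S = n^2) reduces the roughness:
for n in (1, 3, 9):
    print(n * n, m_inverse(uniform_filter(I, size=n)))
```

For this toy field $M^{-1}$ falls roughly as the inverse number of speckles per particle, $A_{\text{S}}/A_{\text{m}} = 1/n^2$, mirroring the decrease of the fluctuations with $A_{\text{m}}/A_{\text{S}}$ described above.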
The correlation area $A_{\text{S}}^{\text{U}}$ of the potential $U({\bf r})$ is obtained from the length at which the correlation function $C_U(\Delta r)$ decays to $1/{\text{e}}$ (Fig. \[fig:CIvsCU\], Tab. \[tab:speckles\]). The correlation area of the intensity, i.e. the speckle area $A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$, decreases with increasing beam size, and hence so does $A_{\text{S}}^{\text{U}}$. The difference between the two values is the effective correlation area of the weight function, $A_{\text{S}}^{\odot} = A_{\text{S}}^{U} - A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$ (Tab. \[tab:speckles\]). The value of $A_{\text{S}}^{\odot}$ only depends on the weight function $D^\odot({\bf r})$ as long as the effective particle area $A_{\text{m}}^{\odot}$ is larger than the speckle area $A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$, consistent with the data (Tab. \[tab:speckles\]).
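The $1/{\text{e}}$ criterion used here for the correlation lengths can be sketched as follows (a one-dimensional toy with invented sizes; the FFT evaluates the autocovariance):

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(1)
# 1-d toy 'speckle': delta-correlated, exponentially distributed pixels.
I = rng.exponential(size=65536)

def corr_length(x):
    """First lag at which the normalized autocovariance drops below 1/e."""
    x = x - x.mean()
    n = x.size
    f = np.fft.rfft(x, 2 * n)               # zero-padded FFT -> linear autocorrelation
    c = np.fft.irfft(f * np.conj(f))[:n]
    return int(np.argmax(c / c[0] < 1.0 / np.e))

# Convolving with a particle weight function (here a 15-pixel top-hat)
# widens the correlation length from ~1 pixel to ~10 pixels:
U = uniform_filter1d(I, size=15)
print(corr_length(I), corr_length(U))
```

The increase of the correlation length after convolution is the one-dimensional analogue of $A_{\text{S}}^{U} > A_{\text{S}}^{\text{intensity}}$ discussed above.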
![(Color online) Intensity, $C_{\text{I}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}(\Delta r)$ (connected open symbols), and potential, $C_{\text{U}}(\Delta r)$ (connected filled symbols), correlation functions as a function of $\Delta r$, which are based on the intensity $I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}({\mathbf{r}})$ and its convolution with the weight function $D^{\odot}(\mathbf{r})$ of a particle with radius $R=1.4\,\upmu$m, respectively (\[tab:speckles\]). The lengths at which the correlation functions decay to $1/{\text{e}}$ are related to the speckle areas $A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$ and the correlation areas of the potential, $A_{\text{S}}^{\text{U}}$, respectively, and hence to the effective correlation area of the weight function $A_{\text{S}}^{\odot}$. []{data-label="fig:CIvsCU"}](Speckle_fig11_CIvsCU_v7.pdf){width="0.96\linewidth"}
### Convolution with Other Weight Functions
Instead of convolving the intensity $I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}({\mathbf{r}})$ with the weight function of spherical particles, $D^{\odot}(\mathbf{r})$, it is convolved with the weight function of cubic particles, $D^{\boxdot}({\mathbf{r}})$, with different effective particle areas $A_{\text{m}}^\boxdot$. A similar $M^{-1}(A_{\text{m}}^{\boxdot}/A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}})$ is obtained (\[fig:1overM\], magenta $\times$). This indicates that the precise shape of the particle is not crucial, as long as it has the same effective particle area $A_{\text{m}}$.
Furthermore, the effect of smoothing is investigated. The intensity $I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}({\mathbf{r}})$ is smoothed using a filter before it is convolved with $D^{\odot}(\mathbf{r})$. The filter replaces each pixel’s intensity $I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}({\mathbf{r}})$ by the average intensity of the $n \times n$ pixels surrounding the pixel, $I_{\text{D}}^{\text{\makebox(6,4)[lb]{\textcolor{black}{\FilledSquareShadowC}}}}({\mathbf{r}})$. Smoothing is equivalent to the convolution of the intensity with $D^{\boxdot}({\mathbf{r}})$ described above, except that $I_{\text{D}}^{\text{\textcolor{black}{\FilledSquareShadowC}}}({\mathbf{r}})$ is subsequently convolved with $D^{\odot}({\mathbf{r}})$ as well. Both yield virtually identical results (\[fig:1overM\], red +, green +), as long as the smoothing is taken into account in the calculation of $A_{\text{S}}^{U}$, i.e. $A_{\text{S}}^{U} = A_{\text{S}}^{\odot} + A_{\text{S}}^{\text{\textcolor{black}{\FilledSquareShadowC}}} + A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$. This becomes increasingly significant as smoothing extends over larger areas.
Finally, $I_{\text{D}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}({\mathbf{r}})$ is binned into larger ‘meta pixels’, resulting in a larger effective measurement area $A_{\text{m}}^{\text{\textcolor{black}{\OldBoxplus}}}$ but a smaller number of (meta) pixels. This is in contrast to smoothing, where the number of pixels is maintained. The corresponding intensity $I_{\text{D}}^{\text{\textcolor{black}{\OldBoxplus}}}({\mathbf{r}})$ mimics a camera with larger but fewer pixels. Hence, the number of speckles in the effective measurement area is increased, $A_{\text{m}}^{\text{\textcolor{black}{\OldBoxplus}}}/ A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}} > A_{\text{m}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}} / A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$. Nevertheless, for a sufficient number of meta pixels (above about $20$), $p(I_{\text{D}}^{\text{\textcolor{black}{\OldBoxplus}}})$ can be described in good approximation by a Gamma distribution (Eq. \[eq:PIP\], data not shown) and $M^{-1}$ shows the same dependence on $A_{\text{m}}^{\text{\textcolor{black}{\OldBoxplus}}}/A_{\text{S}}^{\text{{\makebox(5,3)[lb]{\textcolor{black}{\FilledSmallSquare}}}}}$ (\[fig:1overM\], blue asterisk).
These findings suggest that the dependence of $M^{-1}$ on $A_{\text{m}}/A_{\text{S}}$ does not strongly depend on the experimental conditions as long as they are properly taken into account through $A_{\text{m}}$ and $A_{\text{S}}$. Thus, our experimental situation, namely a top-hat beam and a particle as ‘detector’, appears well approximated by a Gaussian beam and a square or circular detector. Indeed, $M$ as given by Eqs. \[eq:M2\] or \[eq:Mcirc\], which both only depend on the ratio $A_{\text{m}}/A_{\text{S}}$, reproduces our findings very well (\[fig:1overM\], lines). This confirms previous experimental results for similar, but not identical, speckle patterns and optical geometries [@Skipetrov2010; @Li2012]. We hence established an appropriate description of the statistics of the rPEL, $U({\mathbf{r}})$, imposed on the colloidal particles. In particular, the distribution of potential values can be characterized by a Gamma distribution (Eq. \[eq:PIP\]) and the parameter $M^{-1}$, quantifying the magnitude of its fluctuations, by Eq. \[eq:M2\] or \[eq:Mcirc\].
Conclusions {#sec:conclusions}
===========
We experimentally realize random potential energy landscapes by exploiting the interaction of matter with light. We investigate colloidal particles that act as ‘detectors’ in a random intensity pattern, namely laser speckles. The speckle pattern is produced using an optical set-up based on a special diffuser, which creates a top-hat beam containing a speckle pattern. This speckle pattern is quantitatively characterized. Under the standard experimental conditions, the intensity distribution is found to follow an exponential distribution with the normalized standard deviation, or contrast, close to one, which indicates that fully developed speckles are formed. Their size can be controlled through the size of the illuminating laser beam.
The interaction of the particle with the speckle pattern is described analogously to a detector recording the intensity. However, the intensity that is ‘detected’ by the particle represents an external potential imposed on the particle, the rPEL. It is found that the distribution of energy values of the rPEL can be described by a Gamma distribution, and approximations for the standard deviation of this distribution are identified. Using these approximations, the statistics of the rPEL can thus be described quantitatively. These relations, together with the set-up, can be exploited to produce rPELs with the desired distribution of energy values and correlation lengths, where the shape of the distribution can be varied over a broad range, from exponential to Gaussian.
When colloidal particles are exposed to such an intensity pattern, that is an rPEL, their spatial arrangement and dynamics will be affected as demonstrated previously [@Hanes2012a; @Hanes2013; @Evers2013a; @Evers2013b] and in agreement with theoretical predictions [@Bouchaud1990; @Dean2007; @Sengupta2005; @Isichenko1992; @Goychuk2014; @Banerjee2014; @Wales2004; @Zwanzig1988]. In these previous studies, the speckle patterns have been created using a spatial light modulator [@Hanes2009]. Compared to this method, the present set-up offers a much larger field of view and thus the possibility to simultaneously observe a much larger number of particles. The distribution of potential energy values and their spatial correlation furthermore are tunable. In addition, the diffuser can be rotated and hence the speckle pattern varied. If this is faster than the particle dynamics, the particles experience a time-averaged and hence flat effective potential. Radiation pressure still pushes them towards the wall and the increased hydrodynamic interactions slow them down. Therefore, the effect of hydrodynamic wall–particle interactions can be determined independently.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank R. Capellmann, M. Escobedo-Sanchez, S. Glöckner, F. Platten, D. Wagner and C. Zunke for very helpful discussions and suggestions, and the Deutsche Forschungsgemeinschaft (DFG) for financial support within the SFB-TR6.
---
abstract: |
It has been shown that HD molecules can form efficiently in metal–free gas collapsing into massive protogalactic halos at high redshift. The resulting radiative cooling by HD can lower the gas temperature to that of the cosmic microwave background, $T_{\rm CMB}=2.7(1+z)$K, significantly below the temperature of a few $\times 100$K achievable via ${\rm H_2}$–cooling alone, and thus reduce the masses of the first generation of stars. Here we consider the suppression of HD–cooling by UV irradiation in the Lyman–Werner (LW) bands. We include photo–dissociation of both ${\rm H_2}$ and HD, and explicitly compute the self–shielding and shielding of both molecules by neutral hydrogen, HI, as well as the shielding of HD by ${\rm H_2}$. We use a simplified dynamical collapse model, and follow the chemical and thermal evolution of the gas, in the presence of a UV background. We find that a LW flux of $J_{\rm crit,HD} \approx 10^{-22}\,{\rm erg\; cm^{-2}\; sr^{-1}\; s^{-1} \; Hz^{-1}}$ is able to suppress HD cooling and thus prevent collapsing primordial gas from reaching temperatures below $\sim 100$K. The main reason for the lack of HD cooling for $J>J_{\rm crit,HD}$ is the partial photo-dissociation of ${\rm H_2}$, which prevents the gas from reaching sufficiently low temperatures ($T<150$K) for HD to become the dominant coolant; direct HD photo–dissociation is unimportant except for a narrow range of fluxes and column densities. Since the prevention of HD–cooling requires only partial ${\rm H_2}$ photo–dissociation, the critical flux $J_{\rm crit,HD}$ is modest, and is below the UV background required to reionize the universe at $z \sim 10-20$. We conclude that HD–cooling can reduce the masses of typical stars only in rare halos forming well before the epoch of reionization.
author:
- |
J. Wolcott-Green$^{1}$[^1] and Z. Haiman $^{2}$\
$^{1}$Barnard College, Columbia University, 3009 Broadway, New York, NY 10027\
$^{2}$Department of Astronomy, Columbia University, 550 West 120th Street, New York, NY 10027
bibliography:
- 'HD.bib'
title: 'Suppression of HD–cooling in protogalactic gas clouds by Lyman-Werner radiation'
---
cosmology: theory – early universe – galaxies: formation – molecular processes
Introduction
============
The first generation of stars is believed to be much more massive ($\sim 100 {\rm M_\odot}$) than typical stars in stellar populations in the low–redshift universe ($\sim 1 {\rm M_\odot}$; @BCL02 [@ABN02]). This has many important consequences in the early universe, for reionization, metal–enrichment, the formation of seed black holes at very early times, and the observability of first-generation galaxies.
The high masses result from the thermodynamical properties of ${\rm H_2}$, the main coolant in low–temperature gas with a primordial composition. In particular, ${\rm H_2}$–cooling becomes ineffective at temperatures below $\sim 200$K. HD molecules can, in principle, cool the gas to much lower temperatures, but until recently, the abundance of HD in the early universe was believed to be too low for it to be important.
It has recently been pointed out that significant HD can form in metal–free gas, due to non-equilibrium chemistry, provided that the gas has a large initial electron fraction. This can occur, for example, in “fossil” gas that was ionized by a short-lived massive star, prior to it being extinguished, or in collisionally-ionized halos with virial temperatures above $\approx 10^4$K. It has been shown that the resulting radiative cooling by HD can then lower the gas temperature to values near that of the cosmic microwave background, $T_{\rm CMB}=2.7(1+z)$K, i.e. to $\sim 30$K at $z\sim 10$. This would decrease the expected masses of the stars that form in ionized halos by a factor of $\sim 10$ below that which is possible if HD-cooling is neglected [e.g. @JB06]. [^2] Thus, a second mode of star formation has been proposed, giving rise to Pop. III.2 stars [^3] that can form as soon as a small number of Pop. III.1 stars have initiated the epoch of reionization, and whose masses are only a few tens of solar masses (@NU02; @Mach05; @NO05; @JB06, @Rip07; @YOKH07a; @YOH07b).
These conclusions could potentially be revised, however, due to the suppression of HD–cooling by UV irradiation of the gas cloud. Although this possibility has been raised in the literature (e.g. @JB06; @YOH07b), previous work has not included a detailed treatment of the impact of UV irradiation on HD–cooling, including photo-dissociation of HD by radiation in its Lyman and Werner (hereafter LW) bands, taking into account the shielding that occurs in the optically thick regimes. Such UV radiation will exist in the early universe, and can suppress ${\rm H_2}$–cooling in low–mass halos at high redshifts (e.g., @HRL97). [*The main goal of this paper is to assess whether HD–cooling can be similarly suppressed by UV radiation, and to compute the critical UV flux for HD destruction*]{}.
In order to do this, we perform “one zone” calculations with a simplified density evolution, while following the gas–phase chemistry and thermal evolution of the gas, including the impact of ${\rm H_2}$– and HD–dissociating LW radiation. In general, collapsing gas clouds become optically thick to this radiation, so that the effects of self–shielding are non-negligible. Our treatment includes self–shielding of HD and ${\rm H_2}$, shielding of both species by neutral hydrogen (HI), and shielding of HD by ${\rm H_2}$. We provide useful fitting formulae for these shielding factors, analogous to the case of ${\rm H_2}$ self–shielding studied by @DB96 (hereafter DB96).
The rest of this paper is organized as follows. In § \[sec:model\] we describe our chemical, thermal, and dynamical modeling. § \[sec:results\] presents our results on the critical flux required to suppress HD–cooling, followed by a brief discussion of the potential cosmological implications and primary uncertainties in § \[sec:discussion\]. We summarize our main results and offer our conclusions in § \[sec:conclusions\]. Throughout this paper, we adopt the standard ${\rm \Lambda CDM}$ cosmological background model, with the parameters ${\rm \Omega_{DM}=0.233}$, ${\rm \Omega_{b}=0.0462}$, ${\rm \Omega_{\Lambda}=0.721}$, and ${\it h}= 0.701$ [@KDN09].
Model Description {#sec:model}
=================
The formation of HD occurs primarily through the following reaction sequence (e.g. @GP02): $${\rm H + e^- \rightarrow H^-} + {\it h\nu}
\label{eq:reaction1}$$ $${\rm H + H^- \rightarrow H_2 + e^-}
\label{eq:reaction2}$$ $${\rm D^+ + H_2 \rightarrow HD + H^+.}
\label{eq:reaction3}$$ Thus, in order to form a significant abundance of HD, a large initial electron fraction is required to catalyze the formation of ${\rm H_2}$ [see, e.g. @JB06 and references therein]. In primordial gas this can be achieved by photoionization (e.g. by short-lived Pop III.1 stars), or by collisional ionization (in sufficiently massive halos). We model the first case by a constant, low-density gas, initially at a density comparable to that of the intergalactic medium (IGM) at high redshift, $n\approx10^{-7}(1+z)^3 \; {\rm cm^{-3}}$, and temperature $T\approx 10^4$ K. We model the second case by a pre-imposed density evolution obtained from the spherical collapse model.
One-Zone Spherical Collapse Model {#subsec:onezone}
---------------------------------
We adopt the model for homologous spherical collapse that has been used in several previous studies (e.g. @OSH08; hereafter OSH08). This simple one-zone treatment prescribes the density evolution of the baryonic and dark matter (DM) components of a collapsing halo. Both are initialized with zero velocity at the turnaround redshift, set throughout this paper to $z=17$. The density of the in-falling gas evolves on the free-fall timescale and that of the DM is given by a top-hat overdensity until virialization, after which it remains constant at its virial value. Compressional heating is included in the thermal model, along with the processes listed in § \[sec:model\].2.
Unless stated otherwise, we take the radius of the cloud to be $R_{\rm c} = \lambda_J/2$, where the Jeans length is given by $$\lambda_{\rm J} = \sqrt{\frac{\pi k_B T_{\rm gas}}{G\rho_{\rm gas}\mu m_{\rm p}}}.$$ Here $k_B$ is Boltzmann’s constant, $T_{\rm gas}$ is the gas temperature, $\mu$ is the mean molecular weight, and $m_{\rm p}$ is the mass of the proton. Note that the size of the cloud is required, in practice, only in our calculations of the self–shielding factors (see below), in order to specify the column densities of ${\rm H_2}$, HD, and HI.
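A minimal evaluation of this expression in cgs units; the temperature, density, and mean molecular weight below are illustrative choices of ours, not values quoted in the text:

```python
import numpy as np

k_B = 1.380649e-16   # Boltzmann constant [erg K^-1]
G   = 6.6743e-8      # gravitational constant [cm^3 g^-1 s^-2]
m_p = 1.6726e-24     # proton mass [g]

def jeans_length(T_gas, n_gas, mu=1.22):
    """lambda_J = sqrt(pi k_B T_gas / (G rho_gas mu m_p)) in cm,
    with rho_gas = mu m_p n_gas; mu = 1.22 (neutral primordial gas)
    is an illustrative assumption."""
    rho_gas = mu * m_p * n_gas
    return np.sqrt(np.pi * k_B * T_gas / (G * rho_gas * mu * m_p))

# Cloud radius used for the column densities: R_c = lambda_J / 2,
# e.g. for T = 200 K and n = 1e4 cm^-3 (roughly parsec scale)
R_c = 0.5 * jeans_length(200.0, 1.0e4)
```

Note the scaling $\lambda_{\rm J} \propto \sqrt{T/n}$: the cloud radius, and hence the shielding column densities, shrink as the gas collapses and the density rises.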
While this model is a vast simplification of the physics of a collapsing halo, it nonetheless has been shown to mimic the thermal and chemical evolution seen in full three-dimensional hydrodynamical simulations very well (see, e.g., @SBH10 – hereafter SBH10 – for a direct comparison). The exception is the shock-heating that occurs in the early stages of collapse and is not present in the one-zone model, which prescribes a smooth “free–fall” evolution. For a detailed description of the spherical collapse model, the reader is referred to the recent work by OSH08 and references therein.
Chemical and Thermal Model {#subsec:chemistry}
--------------------------
We model a gas of primordial composition using a reaction network which comprises 47 gas-phase reactions amongst the following 14 chemical species (and photons): ${\rm H,~H^+,~H^-,~He,~He^+,~He^{2+},~H_2,~H_2^+,~D,~D^+,~D^-,~HD,~HD^+}$, and electrons. Our choices for the selection of species and their initial abundances are conventional [see, e.g., @GP98], but we do not include any lithium species or other potential coolants (e.g. ${\rm H_3^+}$), as they contribute very little to the total cooling [e.g. @GS09] and are not important in the context of this paper.
### Hydrogen and Helium Chemistry {#subsubsec:Hchemistry}
The collisional rate coefficients for reactions among hydrogen and helium species only, and cross-sections for photo-ionization, are taken from the recent compilation by SBH10. However, the rate for ${\rm H_2}$ photo-dissociation ($k_{28}$ in the aforementioned compilation) is modified to $k_{\rm diss,H_2}= 1.39 \times 10^{-12} \times \beta \times f_{\rm shield}$ in order to match the optically thin rate we calculate (see § \[subsec:opthin\]). The total shielding is parameterized by a shield factor, $f_{\rm shield}$ (see below), and the rate is normalized by the parameter $\beta$, as described in Appendix A of OM01, which specifies the intensity of blackbody radiation at the average LW band energy ($12.4$ eV) relative to that at the Lyman limit ($13.6$ eV). For the two spectral types we consider (described in § \[sec:results\]), $\beta = 3$ for the T4-type and $\beta = 0.9$ for the T5-type.
### Deuterium Chemistry {#subsubsec:Dchemistry}
The chemical network includes 19 reactions involving the five deuterium species, for which we use the collisional rate coefficients from the compilation by @NU02. However, we replace the rates D2, D3, D7, and D9 given therein (as referenced in the source) with the corresponding updated rates from @Savin02 (for the charge exchange reactions, D2 and D3) and @GP02 (for D7 and D9). The HD photo–dissociation rate is given in § \[subsec:opthin\] and is normalized with the $\beta$ parameter in the same manner as described above for the ${\rm H_2}$ photo–dissociation rate.
We take the cosmological D/H ratio to be 4$\times 10^{-5}$ by number, following recent studies on HD–cooling (e.g. @JB06, @YOKH07) and inspired by the model of @GP98, which provides a value of D/H $=4.3\times 10^{-5}$. This adopted value is likely overgenerous for the primordial deuterium abundance, however, in light of recent observations, which place estimates of D/H at $2.78 ^{+0.44}_{-0.38}\times 10^{-5}$ [@Tytler03] and $2.82 ^{+0.27}_{-0.25}\times 10^{-5}$ [@Omeara06]. However, decreasing the initial deuterium abundance in our models leads to less robust HD–cooling, and so only serves to strengthen our central conclusion that metal-free gas is unlikely to be cooled, by HD, to temperatures close to $T_{\rm CMB}$.
In § \[sec:discussion\], we discuss recently updated rate coefficients for some of the most important reactions, and how their implementation affects our results.
### Thermal Model {#subsubsec:heatcool}
The following processes are included in the net cooling rate: collisional excitation and ionization (of H, He, and ${\rm He^+}$), recombination (to H, He, and ${\rm He^+}$), dielectric recombination (to He), Bremsstrahlung, Compton cooling,[^4] and molecular cooling by ${\rm H_2}$ and HD. In practice, the last two processes, as well as collisional excitation of HI, dominate in our calculations. We adopt the expression provided by @GP98 for ${\rm H_2}$ cooling. In the fossil gas case, the HD–cooling rate is calculated using the analytic fit for low densities ($n \lesssim 10^3\; {\rm cm^{-3}}$) given by equation (5) in @Lipov05. In the spherical collapse runs, we adopt the lengthier polynomial fit (equation 4 in the same source), which is accurate for gas densities $n \lesssim 10^8\; {\rm cm^{-3}}$.
We note that $T_{\rm CMB}$ is a “temperature floor,” below which gas cannot cool radiatively; if the gas temperature were below $T_{\rm CMB}$, interaction with photons in the roto–vibrational bands would heat, rather than cool, the gas. In order to mimic this behavior, we multiply $\Lambda_{\rm H_2}$ and $\Lambda_{\rm HD}$ by a correction factor $\left(T - T_{\rm CMB}\right)/ \left(T + T_{\rm CMB}\right)$. This ensures that cooling is shut off as the temperature approaches $T_{\rm CMB}$ from above (whereas the correction becomes negligible when $T\gg T_{\rm CMB}$; see @HRL96 and @JB06 for a somewhat more accurate approach).
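The correction factor can be sketched directly (our illustration; `Lambda_mol` stands for either $\Lambda_{\rm H_2}$ or $\Lambda_{\rm HD}$):

```python
def cmb_limited(Lambda_mol, T, z):
    """Multiply a molecular cooling rate by (T - T_CMB)/(T + T_CMB) so that
    radiative cooling shuts off as T approaches the CMB floor from above."""
    T_cmb = 2.7 * (1.0 + z)
    return Lambda_mol * (T - T_cmb) / (T + T_cmb)

# At z = 10, T_CMB = 29.7 K: the correction vanishes at the floor and
# approaches unity for T >> T_CMB.
print(cmb_limited(1.0, 29.7, 10))     # ~ 0
print(cmb_limited(1.0, 3000.0, 10))   # ~ 0.98
```

For $T < T_{\rm CMB}$ the factor becomes negative, turning the cooling term into heating, which is the physical behavior the correction is designed to mimic.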
Our thermal model includes heating from photo-detachment of ${\rm H^-}$, high energy electrons resulting from photo-ionization of helium [see, e.g., Appendix B in @HRL96], as well as compressional heating in the model of adiabatic collapse (see OSH08 and references therein for more details). In practice, the latter dominates in the regime of our calculations.
In order to follow the coupled chemical and thermal evolution of the gas we use the Livermore solver LSODAR to solve the stiff equations.
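As an illustration of this step, the sketch below integrates a two-species toy network with SciPy's `LSODA` method, which wraps the same ODEPACK solver family as LSODAR (without the root-finding). The species and all rate values are invented for the example; this is not the paper's 47-reaction network:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-species toy standing in for the stiff chemistry: an electron fraction
# decays while H2 forms from it and is photo-dissociated (toy rates only).
n_H    = 1.0e2      # cm^-3, hydrogen density
k_form = 1.0e-15    # cm^3 s^-1, electron-catalyzed H2 formation (toy value)
k_diss = 1.4e-12    # s^-1, LW photo-dissociation at J21 ~ 1
k_rec  = 1.0e-12    # s^-1, effective electron-loss rate (toy value)

def rhs(t, y):
    x_e, x_H2 = y
    return [-k_rec * x_e,
            k_form * n_H * x_e - k_diss * x_H2]

sol = solve_ivp(rhs, (0.0, 1.0e13), [1.0e-4, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-16)
x_e, x_H2 = sol.y[:, -1]
```

`LSODA` switches automatically between stiff and non-stiff integration, which is the property that makes this solver family suitable for chemical networks whose reaction timescales span many orders of magnitude.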
HD and ${\rm H_2}$ Photo-dissociation in the Optically Thin Limit {#subsec:opthin}
-----------------------------------------------------------------
HD and ${\rm H_2}$ can be dissociated by photons with energies in the range 11.2-13.6 eV, to which the universe is largely transparent even before the IGM is reionized. Although both HD and ${\rm H_2}$ have LW lines above 13.6 eV, we do not include photons above this energy, because they will have been absorbed by neutral hydrogen elsewhere in the IGM prior to reionization. Here we describe the details of the calculation for HD photo-dissociation; however, the calculation for ${\rm H_2}$ is entirely analogous, so the following applies equally well to both molecules.
Excitation of the HD molecule to its $B^1 \sum_u^+$ and $C^1 \Pi_u$ electronic states and subsequent radiative decay leads to dissociation when the system decays to the vibrational continuum of the ground state, rather than back to a bound state. Here we discuss the optically thin case, in which the processing of the LW spectrum by HD itself (as well as by ${\rm H_2}$ and HI) is assumed to be negligible. The dissociation rate for molecules initially in the electronic ground state with vibrational and rotational quantum numbers (${\it v,J}$) is given by: $$k_{{\rm diss,}{\it v,J}} = \sum_{\it v',J'} \zeta_{\it v,J,v',J'}\, f_{{\rm diss,}{\it v',J'}}$$ where $f_{{\rm diss,}{\it v',J'}}$ is the dissociation probability from the excited state (${\it v',J'}$) and the pumping rate is given by $$\zeta_{\it v,J,v',J'} = \int_{\nu_{\rm th}}^{\infty}4\pi\sigma_{\nu}\frac{J_{\nu}}{h_P \nu}\,{\rm d}\nu.$$ Here $\sigma_{\nu}$ is the frequency–dependent cross-section of a given transition, $h_P$ is Planck’s constant, and the specific intensity just below 13.6 eV is hereafter normalized as $J_{\nu}= J_{21} \times 10^{-21}\,{\rm erg \; cm^{-2}\; sr^{-1}\; s^{-1} \; Hz^{-1}}$. As mentioned above, in our model there is a sharp cut-off in the radiation spectrum above 13.6 eV; the lower limit, $\nu_{\rm th}$, is the frequency threshold, corresponding to the longest–wavelength photons included, $\lambda \sim 1105\; {\rm \AA}$.
In principle, the total dissociation rate also depends on the level populations of the molecule, which in turn depend on the incident radiation field as well as the temperature and density of the gas; thus, the total dissociation rate should be: $$k_{\rm diss,tot} = \sum_{\it v,J}k_{{\rm diss,}{\it v,J}}\, f_{\it v,J},
\label{eq:kdisstot}$$ where $f_{\it v,J}$ is the fraction of molecules initially in the ro-vibrational state denoted by (${\it v,J}$). For simplicity, we assume that all HD and ${\rm H_2}$ molecules are initially in the ground state (i.e. $f_{\it v=0,J=0} = 1$). This is a reasonable approximation at low gas densities, at which the populations of higher ro-vibrational states are very small. However, the level populations of ${\rm H_2}$ and HD reach their local thermodynamic equilibrium values when gas densities rise to $n \gtrsim 10^4~{\rm cm^{-3}}$ and $n \gtrsim 10^6~{\rm cm^{-3}}$, respectively. In § \[sec:discussion\], we discuss the differences in the dissociation rates if both molecules are assumed to be in LTE, and how this impacts the results discussed below.
We include 28 discrete spectral lines of HD and 25 of ${\rm H_2}$, all involving transitions from the ground electronic state ${\rm X^1 \Sigma_{\it g}^+}$ to the $B^1 \Sigma_u^+$ and $C^1
\Pi_u$ excited states. We use the necessary data for the Lyman and Werner bands of HD provided by @AR06. For those of ${\rm H_2}$, the relevant data were taken from @ARLa93 and @ARLb93, and we use the updated dissociation fractions for ${\rm H_2}$ in @ARD00. The numerical wavelength resolution in the calculations ($\Delta
\lambda = 5.8 \times 10^{-5}{\rm \AA}$ at the lowest temperatures) is sufficient to resolve the Voigt profile of each line and explicitly account for overlap of the Lorentz wings. We find the following photo–dissociation rates in the optically thin limit: $k {\rm_{{diss,HD}} =1.55 \times 10^9}J_{\bar{\nu}} \;{\rm s^{-1}}$, and $k {\rm_{{diss,H_2}} =1.39 \times 10^9}J_{\bar{\nu}} \;{\rm s^{-1}}$, in excellent agreement with those found previously by @GJ07: $k {\rm_{{diss,HD}} =1.5 \times 10^9}J_{{\bar\nu}} \;{\rm s^{-1}}$, and $k {\rm_{{diss,H_2}} =1.38 \times 10^9}J_{\bar{\nu}} \;{\rm s^{-1}}$. Here $J_{\bar{\nu}}$ denotes the specific intensity at the mean energy of the LW bands of HD and ${\rm H_2}$, 12.4 eV, as discussed above.
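To put these rates in physical units: with the normalization $J_{\bar{\nu}} = J_{21} \times 10^{-21}$, the rate and its corresponding timescale follow directly (the helper names below are ours):

```python
SEC_PER_YR = 3.156e7  # seconds per year

def k_diss_thin(j21, coeff=1.55e9):
    """Optically thin photo-dissociation rate in s^-1.

    coeff is the fitted rate coefficient from the text (1.55e9 for HD,
    1.39e9 for H2); it multiplies the specific intensity in cgs units,
    J = j21 * 1e-21.
    """
    return coeff * j21 * 1.0e-21

def t_diss_yr(j21, coeff=1.55e9):
    """Corresponding dissociation timescale 1/k, in years."""
    return 1.0 / (k_diss_thin(j21, coeff) * SEC_PER_YR)
```

For $J_{21}=1$ this gives $k_{\rm diss,HD} \approx 1.6\times10^{-12}\,{\rm s^{-1}}$, i.e. a dissociation time of only a few $\times 10^4$ yr in unshielded gas.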
Self-Shielding of HD and ${\rm H_2}$ {#subsec:selfshield}
------------------------------------
When sufficiently high column densities of HD or ${\rm H_2}$ build up ($N_{\rm HD},N_{\rm H_2} {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^{13}\; {\rm cm^{-2}}$), the LW bands become optically thick and the rates of photo–dissociation are suppressed. We parameterize this effect by a shield factor, $f_{\rm
shield}$, akin to that given by DB96 in their study of ${\rm H_2}$ self-shielding. In particular, $f_{\rm shield,HD} \equiv k_{\rm diss,HD}(N_{\rm
HD})/k_{\rm diss,HD}(N_{\rm HD}=0)$ where $k_{\rm diss,HD}(N_{\rm
HD}=0)$ is the dissociation rate in the optically thin limit (equation \[eq:kdisstot\]), and the shield factor for ${\rm H_2}$, $f_{\rm shield,H_2}$, is analogously defined. Our treatment of ${\rm H_2}$ self–shielding differs from that of DB96 in that we assume all ${\rm H_2}$ is in the ro-vibrational ground state (as described above), whereas DB96 used a model allowing for populations in higher ro-vibrational levels due to collisional excitation and “UV pumping” by the incident radiation field. Nonetheless, we find that a good analytical fit for both ${\rm H_2}$ and HD self–shielding is provided by the same functional form as equation (37) for $f_{\rm shield,H_2}$ in DB96. We also find that the self–shielding behavior of the two molecules is nearly identical (see Figure \[fig:selfshield\]), as might be expected from the similarity of their electronic structures [^5]; thus, we use the following fitting formula for both $f_{\rm shield,H_2} \left(N_{\rm H_2}, T\right)$ and $f_{\rm shield,HD}\left(N_{\rm HD}, T\right)$: $$\begin{gathered}
f_{\rm shield}\left(N, T\right) =
\frac{0.9379}{\left(1 + {\rm x}/{\rm D_5}\right)^{1.879}}
+ \frac{0.03465}{\left(1 + {\rm x}\right)^{0.473}}\\
\times \exp\left[-2.293 \times 10^{-4} \left( 1 + {\rm x}\right)^{0.5}\right],
\label{eq:selfshield}\end{gathered}$$ where ${\rm x} \equiv N/ 8.465 \times 10^{13} {\rm cm^{-2}}$, $N$ is the column density of the self-shielding species, ${\rm D_5 \equiv b_{D}/ 10^5 {\rm cm~s^{-1}}}$, and the Doppler broadening parameter, ${\rm b_{D}}$, depends on the mass of the molecule (which accounts for the slight difference in the self-shielding formula for the two molecules), as well as the temperature.
![Solid (blue) and broken (green) curves show the numerically calculated self–shielding factors for HD and ${\rm H_2}$ respectively; these are defined as the ratio of the dissociation rate at a given column density and the optically thin dissociation rate, and are shown as functions of the respective column densities. Dashed (orange) and dotted (magenta) curves show the values obtained by using the fitting formulae from equation \[eq:selfshield\] for HD and ${\rm H_2}$ self–shielding, respectively (the slight difference here arises only because of the different masses of the two molecules, which modifies the Doppler parameter). All are shown at $T=200$ K.[]{data-label="fig:selfshield"}](fig1.eps){width="3.3in"}
Shielding by HI and Mutual Shielding of ${\rm H_2}$ and HD {#subsec:shielding}
----------------------------------------------------------
In addition to self-shielding, HD and ${\rm H_2}$ can also shield each other, and both can be shielded by HI, which has absorption lines in the range 11.2–13.6 eV; thus, the suppression of the photo-dissociation rates depends on the relative strengths and positions of the HD, ${\rm H_2}$, and HI lines, as well as on the column densities of each species, $N_{\rm HD}$, $N_{\rm H_2}$, and $N_{\rm HI}$.
We include the first nine Lyman HI lines in the relevant wavelength range; while the line center of the Ly$\alpha$ line is outside this range, we nevertheless include it, as its contribution to the shielding becomes important due to line broadening at high HI column densities. For illustration, in Figure \[fig:linelist\] we show the positions and strengths of the most significant lines of each species.
![The figure illustrates the wavelengths and strengths of the relevant LW lines of ${\rm H_2}$ and HD, as well as the Lyman series of HI. The solid (purple) and dashed (green) lines indicate the product of the oscillator strength and dissociation fraction for HD and ${\rm H_2}$ respectively, drawn at the position of each line center. The wavelengths and oscillator strengths of the hydrogen Lyman series are similarly shown by the blue dash-dot lines (a dissociation fraction in this case is not applicable). At high HI column densities, the HD lines that dominate the dissociation rate are those at $\sim 960,\sim 980$, and $\sim 990$ Å, and the dominant contributions to the ${\rm H_2}$–dissociation rate are made by the absorption lines at $\sim 945,\sim 980$, and $\sim 990$ Å.[]{data-label="fig:linelist"}](fig2.eps){width="3.3in"}
### Shielding of HD by $H_2$ and HI {#subsubsec:HDshield}
Taking shielding into account, the contribution to the HD–dissociation rate for a particular line at frequency $\nu$ becomes: $$k_{\rm diss,HD,\nu}(N_{\rm HD}) = k_{\rm diss,HD,\nu}(N_{\rm HD}=0)
\;{\rm \exp(-\tau_{\nu})},$$ where the optical depth is given by $$\tau_{\nu}=\sigma_{\rm HD,\nu} N_{\rm HD} + \sigma_{\rm HI,\nu}
N_{\rm HI} + \sigma_{\rm H_2,\nu} N_{\rm H_2}.
\label{eq:tau_nu}$$
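The per-line attenuation can be sketched directly from these two relations; the cross-section and column-density values in the check below are placeholders for illustration, not entries from our line list:

```python
import math

def tau_nu(sig_hd, n_hd, sig_hi, n_hi, sig_h2, n_h2):
    """Total optical depth at one frequency (eq. tau_nu): sum of sigma_i * N_i."""
    return sig_hd * n_hd + sig_hi * n_hi + sig_h2 * n_h2

def k_line_shielded(k_line_thin, tau):
    """Contribution of a single line to the dissociation rate,
    suppressed by exp(-tau) relative to the optically thin value."""
    return k_line_thin * math.exp(-tau)
```

Summing `k_line_shielded` over all lines, divided by the optically thin total, yields the exact shield factor that the fitting formulae below approximate.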
While HD self-shielding becomes important for $N_{\rm HD}{\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^{13}\; {\rm cm^{-2}}$, the offsets in the wavelength of the neighboring absorption lines (typically of order $\sim$Å) prevent ${\rm H_2}$ and HI from effectively shielding HD until their column densities are very high, $N_{\rm H_2}{\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^{20}\; {\rm cm^{-2}}$, and $N_{\rm HI}{\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^{23}\; {\rm cm^{-2} }$. At these critical column densities, which are essentially independent of $N_{\rm HD}$, the ${\rm H_2}$ lines start to overlap significantly and the optical depth due to ${\rm H_2}$-shielding is $\sim$ a few at all wavelengths. In Figure \[fig:columns\], we show the evolution of all three column densities, as a function of the particle number density, in our one-zone collapse runs with $J=0$ for halos with virial temperatures both above and below $10^4$K (see below). This figure shows that all three column densities reach the values where shielding becomes efficient, and thus a full calculation of the combined shielding is warranted.
![Column densities reached in the spherical collapse runs for halos with virial temperatures above and below $10^4$K, shown by solid and dotted lines respectively. Both runs assume no background flux ($J=0$).[]{data-label="fig:columns"}](fig3.eps){width="3.3in"}
We provide a fitting formula to model the total shielding of HD, $f_{\rm shield, HD}= f_{\rm shield, HD}(N_{\rm HD},
N_{\rm H_2},N_{\rm HI},T)$ in Table 1 and equations \[eq:shieldproduct\], and \[eq:shield2\] below, which is accurate to within a factor of two over a wide range of column densities, i.e. up to $N_{\rm HD}\approx
10^{20}\; {\rm cm^{-2}},\; N_{\rm H_2}\approx 10^{22} \; {\rm
cm^{-2}},\; N_{\rm HI}\approx 10^{24} \; {\rm cm^{-2}}$ and gas temperatures up to $\approx 10^3$ K (HD–cooling is unimportant at temperatures above this value in any case):
$$f_{\rm shield,HD} = f_{\rm shield}\left(N_{\rm HD},T\right)
\times f_1\left(N_{\rm HI}\right) \times f_2\left(N_{\rm H_2}\right)
\label{eq:shieldproduct}$$
$$f_i = \frac{1}{\left(1 + {\rm x}_i\right)^{{\rm \alpha}_i}}
\times \exp\left({\rm-\beta}_i\;{\rm x}_i\right).
\label{eq:shield2}$$
Here ${\rm x}_i \equiv N_i/\gamma_i$ and the index $i$ takes the value $i=1$ or 2 to denote the relevant quantity for HI or ${\rm H_2}$ respectively. The coefficients $\alpha$, $\beta$, and $\gamma$ are given in Table 1.
Table 1: Coefficients for the shielding functions $f_i$ in equation \[eq:shield2\].

  Species          $\alpha$   $\beta$    $\gamma$ (cm$^{-2}$)
  ---------------- ---------- ---------- ------------------------
  1. HI            1.620      0.149      $2.848 \times 10^{23}$
  2. ${\rm H_2}$   0.238      0.00520    $2.339 \times 10^{19}$

\[table:fits\]
This fit is a product of three separate factors, the HD self–shielding factor $f_{\rm shield}\left(N_{\rm HD},T\right)$ and the factors $f_1\left(N_{\rm HI}\right)$ and $f_2\left(N_{\rm H_2}\right)$, which represent the shielding of HD due to each of the three species alone (eq. \[eq:shieldproduct\] above). Note that since the ${\rm H_2}$ and HI lines shield HD by their Lorentz wings, rather than their thermal cores, these factors (unlike HD self-shielding) do not depend on temperature.
In general, one does not expect that the combined shielding factor is separable into a simple product of the three individual shielding factors. The full expression for $f_{\rm shield, HD}(N_{\rm HD},N_{\rm H2},N_{\rm
HI},T)$ is a sum over all of the individual cross-sections of the HD lines, each suppressed by the frequency-dependent total optical depth (eq. \[eq:tau\_nu\]), divided by the optically thin rate. However, we have found that in practice, when suppression by nearby HI lines is negligible ($N_{\rm HI} {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$<$}}}10^{23}~{\rm cm^{-2}}$), one HD line is much stronger than all others (at ${\rm \approx
950 \AA}$, see Figure \[fig:linelist\]). Since this single line dominates the dissociation rate, the shield factor reduces to the simple product (eq. \[eq:shieldproduct\] above). In the regime of relatively strong HI shielding ($N_{\rm HI} \approx 10^{24}~{\rm cm^{-2}}$), a few HD Lyman lines together dominate the dissociation rate. However, we find that the product $\sigma_\lambda\times
f_{\rm diss,\lambda}$ for these lines (at $\sim$960, $\sim$980, and $\sim$990 Å) is similar. If we approximate these lines as having identical strengths, the total shielding factor again reduces to the simple product in equation \[eq:shieldproduct\] above. Because these line strengths are not precisely equal, the largest discrepancies between the product formula and the ‘true’ shielding behavior are seen at $N_{\rm HI} = 10^{24}~{\rm cm^{-2}}$. However, in general, we find that this simple product is accurate, to within a factor of $\sim$two, at the low temperatures ($T{\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$<$}}}200$ K) and the high column densities of interest.
Figure \[fig:shieldfits\] shows the results of the exact shielding factor calculations for a gas temperature of $T=200$K, and compares these to the analytical fits for a number of combinations of the three column densities. The largest deviations are seen in the bottom panel, for $N_{\rm HI} = 10^{24}~{\rm cm^{-2}}$, and at ${\rm H_2}$ and HD column densities of $N_{\rm H_2} = 10^{22}~{\rm cm^{-2}}$ and $10^{14}~{\rm cm^{-2}} {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$<$}}}N_{\rm HD} {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$<$}}}10^{15.5}~{\rm
cm^{-2}}$. In this regime, the accuracy of the fitting formulae is somewhat worse than a factor of two. However, in practice, this column density combination – with relatively low $N_{\rm HD}$ and exceedingly high values of both $N_{\rm H_2}$ and $N_{\rm HI}$ – does not occur in our calculations (see Figure \[fig:columns\]).
![The combined HD shielding factor, including self–shielding and shielding by ${\rm H_2}$ and HI. Several combinations of column densities are shown, as labeled, near the critical column densities for HI and ${\rm H_2}$ shielding. The solid curves show the exact numerical calculations, and the dotted curves show the values obtained from a fitting formula (equations \[eq:selfshield\], \[eq:shieldproduct\], and \[eq:shield2\] and Table 1).[]{data-label="fig:shieldfits"}](fig4a.eps "fig:"){height="2.8in" width="3.4in"} ![The combined HD shielding factor, including self–shielding and shielding by ${\rm H_2}$ and HI. Several combinations of column densities are shown, as labeled, near the critical column densities for HI and ${\rm H_2}$ shielding. The solid curves show the exact numerical calculations, and the dotted curves show the values obtained from a fitting formula (equations \[eq:selfshield\], \[eq:shieldproduct\], and \[eq:shield2\] and Table 1).[]{data-label="fig:shieldfits"}](fig4b.eps "fig:"){height="2.8in" width="3.4in"} ![The combined HD shielding factor, including self–shielding and shielding by ${\rm H_2}$ and HI. Several combinations of column densities are shown, as labeled, near the critical column densities for HI and ${\rm H_2}$ shielding. The solid curves show the exact numerical calculations, and the dotted curves show the values obtained from a fitting formula (equations \[eq:selfshield\], \[eq:shieldproduct\], and \[eq:shield2\] and Table 1).[]{data-label="fig:shieldfits"}](fig4c.eps "fig:"){height="2.8in" width="3.4in"}
### Shielding of $H_2$ by HI and HD {#subsubsec:H2shield}
The shielding of ${\rm H_2}$ by HD and HI is entirely analogous to that discussed in the preceding section, so we will limit the discussion here to a few noteworthy points. Most importantly, we find that HI shielding of ${\rm H_2}$ is nearly identical to HI shielding of HD; accordingly, we model both with the fitting formula for $f_{\rm shield,HI}
(N_{\rm HI})$, given by equation \[eq:shield2\] and Table 1. The explanation for this is similar to that given above; namely, the relative positions of the ${\rm H_2}$ and HI lines are such that only the wings of the HI lines shield ${\rm H_2}$, and only when the HI column density is sufficiently large ($N_{\rm HI}{\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^{23}~{\rm cm^{-2}}$). Because the ${\rm H_2}$ and HD lines are comparably spaced relative to the HI lines, the shielding effect of HI should indeed be similar for both.
When the HI column is below the critical level for strong shielding of ${\rm H_2}$, we find that a few Lyman lines together dominate the dissociation rate, and that the product $\sigma_\lambda\times f_{\rm diss,\lambda}$ for these lines (at $\sim$945, $\sim$980, and $\sim$990 Å) is similar. At larger neutral hydrogen columns ($N_{\rm HI}\approx 10^{25}~{\rm cm^{-2}}$), a single Lyman ${\rm H_2}$ line makes the dominant contribution – by a large margin over all others – to the dissociation rate. Thus, the total shielding factor can again be modeled simply by a product of the shielding formulae, given in equations \[eq:selfshield\] and \[eq:shield2\] and Table 1: $$f_{\rm shield,H_2} = f_{\rm shield}\left(N_{\rm H_2},T\right)
\times f_1\left(N_{\rm HI}\right)
\label{eq:shieldproductH2}$$ The results of the exact shielding factor and comparison to this analytical fit are shown in Figure \[fig:shieldfitsH2\] for a gas temperature of $T=200$K.
In principle, HD can also shield ${\rm H_2}$, but in practice this effect will likely always be negligible, as the HD column density is typically dwarfed by those of ${\rm H_2}$ and HI [^6]. Thus, our treatment does not include HD shielding of ${\rm H_2}$.
![The combined ${\rm H_2}$ shielding factor, including self–shielding and shielding by HI. Several combinations of column densities are shown, as labeled, near the critical column densities for HI shielding. The solid curves show the exact numerical calculations, and the dotted curves show the values obtained from a fitting formula (equations \[eq:selfshield\],\[eq:shield2\], \[eq:shieldproductH2\] and Table 1).[]{data-label="fig:shieldfitsH2"}](fig5.eps){height="2.8in" width="3.4in"}
Results {#sec:results}
=======
To assess whether HD-cooling can be suppressed by a persistent LW background, we ran our one-zone models at various specific intensities $J_{21}$. Unless stated otherwise, the spectrum of the radiation is modeled as a blackbody with a temperature of $10^5$ K, approximating the hard spectrum expected to characterize Pop III.1 stars (@TS00; @BKL01; @S02). For comparison, in § \[subsubsec:T4\] below, we investigate the effects of illumination by a cooler blackbody, $T \sim 10^4$K, intended to represent the softer spectrum of a more typical metal–enriched stellar population. These are referred to hereafter as types ‘T5’ and ‘T4’ respectively (SBH10).
We use a Newton-Raphson scheme to determine the strength of the LW radiation required to keep the minimum gas temperature at least a factor of $\sim 2$ above that reached in the absence of any LW radiation, $T_{{\rm min},J=0}$, on the timescales described below; this is referred to hereafter as the critical intensity, $J_{\rm
crit,HD}$ (in the usual units of $10^{-21}\,{\rm erg \; cm^{-2}\;
sr^{-1}\; s^{-1} \; Hz^{-1}}$).
HD-Cooling in Constant-Density Fossil HII Gas {#subsec:fossils}
---------------------------------------------
The first of the physical scenarios we consider is a “fossil” HII region, which could occur in a patch of the low–density IGM that has been photo-ionized and heated by a short–lived massive star [see, e.g. @OH03], or possibly in a denser shell of primordial gas, compressed by shocks from a supernova (SN).
We are interested in whether gas with such fossil ionization can cool efficiently in the presence of a LW background and return, via HD cooling, to a state close to its initial low-entropy state, prior to the ignition of the ionizing source (or prior to its shock heating).
In a first set of runs, we assume that the number density remains constant at the low value of $n = 10^{-2}\;{\rm cm^{-3}}$, characteristic of a slightly over-dense (by $\sim$ a factor of 10) region of the IGM at $z=10$. We find that in such a rarefied patch, the gas is not able to cool on a realistic timescale, because the HD–cooling time is longer than the present age of the universe even in the absence of any LW background. The thermal evolution of the gas in this case is shown by the right set of (purple) curves in the upper panel of Figure \[fig:fossil\]. The corresponding fractional abundances of electrons, ${\rm H_2}$, and HD are shown in the lower panel of the same figure. The figure extends to a total elapsed time that exceeds the Hubble time, and shows that there is, technically, a critical flux ($J_{\rm crit,HD}\approx 10^{-6}$) that would suppress the HD–cooling that would otherwise occur after a few $\times 10^{18}$ seconds. This, of course, is unphysical, and in practice, the question of HD–cooling being suppressed by UV radiation is moot for such low–density gas. Nevertheless, the figure illustrates the chemical/thermal behavior, and also provides a useful check on our code (see below).
In the second set of runs, the total particle number density was set to a higher value of $n = 10^{2}\;{\rm cm^{-3}}$. This is an unphysically high density for a characteristic “flash–ionized” fossil region in the low–density IGM, but may represent primordial gas compressed by SN shocks. In the no-flux case, we reproduce the main result of @JB06, namely that HD-cooling allows the gas to reach the temperature of the CMB in a time that is shorter than the Hubble time. This gas, however, is optically thin to radiation in the LW bands, and dissociation of HD (as well as of ${\rm H_2}$) is efficient. We find that for $J_{21} {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^{-2}$, the gas cannot cool to temperatures less than $\sim 200$ K. This is shown in the top panel of Figure \[fig:fossil\] by the left set of (blue) curves.
The above two cases ($n = 10^{-2}$ and $n = 10^{2}~{\rm cm^{-3}}$) serve to illustrate an important point (and a check on our code). All of the relevant timescales for the system, including the formation timescale for HD ($t_{\rm form}$), the HD cooling time ($t_{\rm cool}$), and the (HII+$e\rightarrow$HI) recombination time ($t_{\rm rec}$), scale as 1/[*n*]{}. The exception is the photo–dissociation timescale, which scales with the flux strength, $t_{\rm diss}\propto J_{21}^{-1}$, in the absence of shielding. Thus, the history of the system should only depend on $J_{21}/n$ when the time is rescaled accordingly.[^7] This simple scaling is evident by the two sets of (purple and blue) curves in the top panel of Figure \[fig:fossil\]. Most importantly, there is indeed a critical flux that prevents the gas from reaching the CMB temperature by HD cooling; in constant density gas with a high initial electron fraction, we find the value of this flux is $J_{\rm
crit,HD}=3.8 \times 10^{-3}(n/10^2~{\rm cm^{-3}})$.
Finally, an interesting question is whether HD–cooling is prevented, in the cases where $J_{21}$ exceeds the critical value, by direct photo–dissociation of HD, or by the inability of sufficient HD to form due to ${\rm H_2}$ photo–dissociation. To answer this question, we performed runs in which the ${\rm H_2}$–dissociation was artificially turned off. In these runs, we find that the gas is still able to cool to $T {\rm_{CMB}}$ for $J_{21} \approx 4 \times 10^{-2}$, illustrating that the LW flux prevents HD–cooling [*via ${\rm H_2}$ destruction*]{}, rather than via direct HD–dissociation. This point has been discussed by previous authors; e.g., @NU02 showed that a critical abundance of ${\rm H_2}$, $x_{\rm H_2} {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^{-3}$ is required for the gas to reach sufficiently low temperatures for HD to become the dominant coolant ($T {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$<$}}}150$K), and therefore ${\rm
H_2}$ dissociation can prevent HD–cooling [@YOH07]. The minimum temperature reached by fossil ionized primordial gas in the absence of HD, and the dependence of this minimum temperature on gas density and LW flux was also discussed in detail by @OH03.
HD Cooling in Collapsing Halos {#subsec:collapse}
------------------------------
### HD Cooling in Halos with $T_{\rm vir}>10^4$K {#subsubsec:collapse}
It has been shown that primordial gas in the late stages of runaway gravitational collapse can reach temperatures close to $T_{\rm CMB}$ via HD-cooling, provided that a large initial ionization fraction exists (e.g. @JB06; see also @Mach05).
This scenario can be realized in sufficiently massive halos, which are collisionally ionized upon shock-heating to their virial temperatures. The post–shock gas can cool faster than it recombines, leaving a large out-of-equilibrium electron fraction to catalyze both ${\rm H_2}$ and HD formation [e.g. @SK87; @Susa+98; @OH02]. It may also be the case that “pre-ionized” halos exist within fossil HII regions, which will undergo a phase of efficient HD–cooling upon collapse [@JB06]. This scenario, however, is less plausible: a halo large enough to remain bound once photo-heated (to $T{\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^4$K) may be difficult to completely ionize, as the large HI column densities will lead to non-negligible HI self-shielding [@Dijkstra+04]; flash–ionization by a single short–lived star (required to allow subsequent recombination and cooling) is even less likely.
![Thermal evolution of initially ionized gas, collapsing in a massive halo ($T_{\rm vir} {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^4$K) exposed to LW backgrounds in the range near the critical value, $J_{\rm crit,HD} = 3.6 \times 10^{-1}$ (solid curves). The incident spectrum is that of a blackbody with a temperature of $10^5$K, characteristic of massive Pop III.1 stars. The turnaround redshift is set to $z = 17$ and the temperature is initialized to $T \approx 10^4$K. The dashed curves show, for comparison, the temperature evolution in deuterium-free gas, exposed to the same LW fluxes. The dotted curves show the thermal evolution when HD-dissociation is artificially switched off, for the same values of $J_{21}$, as labeled. This illustrates that, except in a small range of flux intensities, the destruction of ${\rm H_2}$ by LW photons, rather than direct HD photo–dissociation, is the primary factor determining the minimum temperature reached in the gas. The temperature of the CMB is shown by the blue dotted curve.[]{data-label="fig:collapse"}](fig7.eps){height="3.45in" width="3.3in"}
Regardless of the nature of the initial ionization, Figure \[fig:collapse\] shows the thermal evolution of the collapsing gas exposed to LW backgrounds of various intensities. The initial number density is set to the characteristic baryon density in halos upon virialization, $$n \simeq 0.3\; {\rm cm^{-3} \left(\frac{1+z_{vir}}{21}\right)^3},$$ and the gas begins cooling from the temperature $T\approx 10^4$K (quickly established either by a period of photo–heating, or by shock–heating to near the virial temperature, accompanied by rapid HI cooling).
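For reference, this characteristic virialization density is straightforward to evaluate (the function name is ours):

```python
def n_vir(z_vir):
    """Characteristic baryon number density (cm^-3) in a halo virializing
    at redshift z_vir: n ~ 0.3 * ((1 + z_vir) / 21)**3."""
    return 0.3 * ((1.0 + z_vir) / 21.0) ** 3
```

For example, a halo virializing at $z_{\rm vir} = 20$ starts at $n \simeq 0.3\,{\rm cm^{-3}}$, a factor of $\sim 30$ denser than the constant-density fossil-region case with $n = 10^{-2}\,{\rm cm^{-3}}$ considered above.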
Of primary importance in this case – as opposed to the fossil gas discussed in the previous section – are the large column densities of HD and ${\rm H_2}$ that build up and shield both populations against the LW background. Furthermore, the collapse itself leads to more efficient formation of both molecules because the formation timescale, as mentioned above, scales as $t_{\rm form} \propto
1/n$. Consequently, the critical intensity should be larger than that found for the low-density fossil gas. This is indeed borne out by our results; nevertheless, as the comparison between the solid and the dashed curves (the latter representing deuterium–free gas) in Figure \[fig:collapse\] shows, the effect of HD–cooling is still almost entirely erased for the relatively low values of $J{\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}1$. The critical intensity in this case, as defined above, is found to be $J_{\rm crit} = 3.6\times 10^{-1}$.
[*This threshold value is most notable for being approximately five orders of magnitude lower than the critical flux required to completely suppress ${H_2}$–cooling in the same halos.*]{} As shown in SBH10, the latter critical flux in halos with $T_{\rm vir}{\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^4$K is $J_{\rm crit,H_2} {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^4$. This large critical flux corresponds to the value that results in an ${\rm
H_2}$–photo–dissociation rate that matches the ${\rm H_2}$ formation rate, at the critical density of $n\sim 10^4~{\rm cm^{-3}}$ of ${\rm
H_2}$ (see the earlier work by O01 for a detailed discussion of the physics determining $J_{\rm crit,H_2}$ in primordial gas without ionization/shock–heating). This $J_{\rm crit,H_2} \approx
10^4$ separates gas in which ${\rm H_2}$–cooling is [*fully*]{} suppressed (with the gas temperature remaining near $\sim 10^4$K) and halos in which ${\rm H_2}$ cooling significantly lowers the temperature. A point that was also found (but not emphasized) by SBH10 (and also by O01) is that even for $J_{21}$ well below $J_{\rm crit,H_2}$, the minimum temperature to which the gas can cool via ${\rm H_2}$ can be significantly elevated. This is also clearly visible in the deuterium–free runs in Figure \[fig:collapse\]: the minimum temperature is $\sim 150$K for $J_{21}=0$, but is elevated to $\sim 300$K already for $J_{21}=1$.
The behaviour of the gas, and the reason for the elevated temperature, can be described as follows (see a detailed discussion in the constant–density case in @OH02). Starting from $T\approx
10^4$K, the gas initially cools via ${\rm H_2}$ and recombines on time–scales much shorter than either the photo–dissociation or the free–fall timescale. However, when the temperature is lowered to a $J$–dependent critical value of a few$\times10^3$K, ${\rm
H_2}$–dissociation becomes important, limiting the ${\rm H_2}$ abundance, and reducing the cooling. The cooling time eventually becomes comparable to the free–fall time, resulting in the sharp turn away from the nearly vertical directions of the $n-T$ curves at the initial density in Figure \[fig:collapse\]. For higher fluxes, this subsequently results in an elevated gas temperature (at fixed density). Eventually, the compressional heating rate becomes equal to the ${\rm
H_2}$–cooling rate, setting the temperature minimum. It is worth noting that for sub–critical values of $J_{21} {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$<$}}}J_{\rm crit,HD}$, HD is able to cool the gas to temperatures below $150$K, but the LW background still has the subtle effect of raising the minimum temperature to which the gas can cool via ${\rm HD}$.
As in the constant density case, an interesting question is whether ultimately the HD–cooling is controlled by direct photo–dissociation of HD, or by ${\rm H_2}$–dissociation. To answer this question, we repeated the runs shown in Figure \[fig:collapse\] under the same conditions, but with HD dissociation artificially switched off (the results are shown by the dotted lines in Figure \[fig:collapse\]). In this case, we find again that (except in a narrow range of LW intensities near $J_{21} \sim 4\times 10^{-1}$), direct photo–dissociation of HD does not determine the minimum temperature reached in the gas. Rather, it is the diminished abundance of ${\rm H_2}$ in the presence of the LW background that regulates the abundance and thereby the cooling efficiency of HD.
![Thermal evolution of gas collapsing in a massive halo, exposed to different LW backgrounds, as in Figure \[fig:collapse\], except the incident spectrum is that of a softer blackbody with temperature $T=10^4$K, representing a more typical metal-enriched stellar population. This softer spectrum contains many more photons down to the photo–dissociation threshold of ${\rm H^-}$ at 0.76eV. The enhanced rate of ${\rm H^-}$ photo–dissociation reduces the critical flux that prevents HD–cooling by more than an order of magnitude compared to the harder spectrum, to $J_{\rm crit,HD}
= 10^{-2}$.[]{data-label="fig:collapseT4"}](fig8.eps){height="3.45in" width="3.3in"}
We have also investigated the thermal evolution of the gas when ${\rm H_2}$ is artificially prevented from dissociating (not shown), but the physical set-up is otherwise analogous to the runs (in which deuterium is included) shown in Figure \[fig:collapse\]. This “academic” exercise is useful in order to determine the critical flux that would prevent HD-cooling entirely by direct HD photo–dissociation. In the analogous case for ${\rm H_2}$, as mentioned above, $J_{\rm crit, H_2}$ is traditionally defined as the specific intensity capable of [*completely*]{} suppressing ${\rm H_2}$-cooling, thereby preventing the gas from falling below the temperatures reached by atomic line cooling, $T \sim
8 \times 10^{3}$K (O01, OSH08, SBH10). We find that for $J {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}6\times 10^4$, the cooling history of the gas is nearly identical to that of deuterium-free gas in the absence of a LW background. Thus, the intensity required to fully suppress HD–cooling by direct HD–dissociation is comparable to that for ${\rm H_2}$ (the latest studies (SBH10) found $J_{\rm crit,H_2} \sim 1.2 \times 10^4$, though we find a critical value for ${\rm H_2}$ that is a factor of $\sim$five greater; see below). In fact, this is not surprising, given that the photo–dissociation timescales ($t_{\rm diss} = [k_{\rm diss}
(N=0) \times f_{\rm shield}]^{-1}$) are very similar for the two molecules.
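The comparison in the last sentence can be made concrete with a short numerical sketch. The rate coefficients and shield factors below are purely illustrative placeholders, not values quoted in this paper; the point is only that similar optically thin rates and similar shielding imply similar dissociation timescales, and hence comparable critical fluxes.

```python
# Photo-dissociation timescale: t_diss = [k_diss(N=0) * f_shield]^{-1},
# where k_diss(N=0) is the optically thin rate and f_shield <= 1 is the
# shield factor. All numerical values here are hypothetical.

def t_diss(k_thin, f_shield):
    """Photo-dissociation timescale (s) for an optically thin rate
    k_thin (s^-1) attenuated by the dimensionless factor f_shield."""
    return 1.0 / (k_thin * f_shield)

# Hypothetical, similar optically thin rates and shield factors for
# the two molecules...
t_H2 = t_diss(1.4e-12, 0.1)
t_HD = t_diss(1.5e-12, 0.1)

# ...give timescales within ~10% of one another, which is why full
# suppression of the two coolants requires a comparable intensity.
ratio = t_H2 / t_HD
```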
### $T_{\rm vir}>10^4$K Halos Illuminated by ‘T4’ Radiation {#subsubsec:T4}
Up to this point, we have considered only incident radiation with the hard spectrum expected to characterize Pop III.1 stars (‘T5’). In the case of a collapsing halo, it is reasonable to ask about the effects on HD-cooling of irradiation by the more typical stellar spectrum (‘T4’).
Figure \[fig:collapseT4\] shows the temperature evolution in the gas irradiated by a T4 spectrum of various intensities. It is clear that HD is considerably more fragile in the presence of this softer spectrum, withstanding a LW flux no greater than a feeble $J_{\rm
crit,HD} = 10^{-2}$. This is not surprising in light of studies that have found the same effect for ${\rm H_2}$ (O01; OSH08; SBH10), namely, that ${\rm H_2}$–cooling is much more effectively suppressed by a T4-type spectrum. The reason is the diminished abundance of hydride (${\rm H^-}$), an intermediary in the formation of both ${\rm H_2}$ and HD. Hydride, whose photo–dissociation threshold is 0.76eV, is more efficiently photo-dissociated by the softer spectral type (O01, SBH10). This again is a manifestation of how an external radiation field can regulate HD-cooling [*via*]{} the destruction of an intermediate in its formation pathway.
### HD Cooling in $T_{\rm vir}<10^4$K Halos {#subsubsec:minihalos}
It has long been known that pristine gas in the first minihalos cannot form sufficient ${\rm H_2}$ to cool below a few hundred Kelvin [e.g. @HTL96]. Because free electrons act to catalyze the formation of HD, as well as ${\rm H_2}$, it is not surprising that HD abundances remain too low in such halos to play a significant role in cooling [e.g. @JB06 and references therein]. This is borne out by our results, shown in Figure \[fig:minihalo\], for a halo that begins collapsing from a temperature of $T \approx 20$K at the turnaround redshift, $z = 17$, and is illuminated by a ‘T5’ spectrum. This is the same configuration as in Figure \[fig:collapse\], except that the initial temperature is assumed to be low (i.e., lacking any strong shocks able to collisionally excite or ionize the gas).
We include a discussion of this scenario for the purpose of highlighting a few noteworthy points. First, we find a critical flux for full ${\rm H_2}$–dissociation in this case (as it is traditionally defined, see § \[subsubsec:collapse\] above) of $J_{\rm crit,H_2} = 6.1 \times 10^{4}$, which is a factor of $\sim$ five greater than that found in the recent study by SBH10. This difference owes to our use of the analytic fit for ${\rm H_2}$ self–shielding (equation \[eq:selfshield\]), which assumes all molecules are initially in the ground state, while SBH10 used the shield factor fit (equation 36) from DB96, which gives a very good approximation to the self–shielding when the ${\rm H_2}$ roto–vibrational levels reach LTE populations. This illustrates an important point that will be discussed in greater depth in §\[sec:discussion\]; namely, self–shielding is [*less*]{} effective when higher ro-vibrational states of the molecule are populated. It is also worth noting that we find $J_{\rm crit,H_2} =
6 \times 10^{3}$ – a factor of 2 [*lower*]{} than that found by SBH10 – if the self-shielding is modeled instead with the more accurate formula provided by DB96 (equation 37), rather than the power-law fit (equation 36) used by SBH10. In general, the “real” LW intensity required to kill ${\rm H_2}$-cooling entirely will depend on the detailed ro-vibrational distribution of ${\rm H_2}$ molecules. The values of $J_{\rm crit,H_2}$ found here using the ground-state [*vs*]{} LTE shielding treatments serve to bracket the range for the true critical threshold, with the former approximating the upper limit on $J_{\rm crit,H_2}$. The threshold flux in this case is of particular interest as it has important implications for the number of halos that remain ${\rm H_2}$–poor, and thus for the abundance of halos collapsing directly to massive seed black holes, because such halos probe the rare, exponential tail of the fluctuating UV background [@Dijkstra+08]. (The interested reader is encouraged to see SBH10 for a detailed discussion of these issues.)
Second, while there is a clear bifurcation in the cooling history of the gas around the $J_{\rm crit,H_2}$ discussed above, we note that it does not cool to the low temperatures required for HD-cooling to become important ($T \sim 150$K) even when the intensity is well below this threshold (indeed, even in the absence of any LW background). Thus, again, ${\rm H_2}$ abundance plays the primary role in regulating HD-cooling, though in this case the result is simpler: HD–cooling never becomes important because ${\rm H_2}$–cooling is never strong enough for the gas temperature to fall below $\approx 200$K.
Finally, in this case HI shielding serves to increase the threshold LW flux by $\sim 40\%$ above that which ${\rm H_2}$ could otherwise withstand. By contrast, $J_{\rm crit,HD}$ found in the two preceding sections is not sensitive to the effect of ${\rm H_2}$ shielding by HI, in spite of its strong dependence on the ${\rm H_2}$ abundance. This is because, in the models of $T_{\rm vir} {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^{4}$K halos, the HI column density does not reach the critical value above which it effectively shields ${\rm H_2}$ ($N_{\rm HI} {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^{23}~{\rm cm^{-2}}$) until very late in the collapse (i.e., once the particle density has reached $n {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^{5.5}~{\rm cm^{-3}}$). However, sufficiently high column densities of HI do build up earlier in the high-$J_{21}$ runs of $T_{\rm vir} < 10^4$K halos, resulting in the modest increase in $J_{\rm crit,H_2}$ as quoted above.
The results for the halo illuminated by a T4 spectrum are omitted here because they do not significantly differ from those described above for the T5 case, except that ${\rm H_2}$-cooling is disabled by a much lower LW flux, as shown already by O01.
![Thermal evolution of gas collapsing in a halo exposed to LW radiation (with a T5 spectrum) in Figure \[fig:collapse\], except that the initial gas temperature is assumed to be much lower, 21K. This is relevant to minihalos ($T_{\rm vir} {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$<$}}}10^4$K), or to the case when there are no strong shocks able to collisionally excite or ionize the gas.[]{data-label="fig:minihalo"}](fig9.eps){height="3.45in" width="3.3in"}
Discussion {#sec:discussion}
==========
The most significant result found in this paper is that a UV flux can prevent HD–cooling from lowering the gas temperature to near that of the CMB. In particular, the threshold value in collapsing halos, $J_{\rm crit,HD} \sim 4
\times 10^{-1}$, is approximately five orders of magnitude lower than the critical flux required to completely suppress ${\rm H_2}$–cooling in the same halos. As explained in § \[subsubsec:collapse\], this large difference arises because even for $J_{21}$ well below $J_{\rm crit,H_2}$, the minimum temperature to which the gas can cool via ${\rm H_2}$ is significantly elevated, so that HD formation and cooling is not activated.
[*What are the cosmological implications if a LW background prevents halos from cooling via HD line emission?*]{} The critical flux we find should be compared to the level of the LW background $J_{\rm bg}$ expected at the redshifts we consider. For reference, an estimate of the background as a function of redshift is provided by requiring the number of UV photons produced by stars to be sufficient to reionize the IGM. Such an estimate gives [e.g. @BL03] $$J_{\rm bg} \approx 4 \left(\frac{N_{\gamma}}{10} \right) \left(
\frac{f_{\rm esc}}{100\%} \right)^{-1} \left( \frac{1+{\rm z}}{11}
\right)^3.$$ Here the number of UV photons required to ionize hydrogen is commonly taken to be $N_{\gamma} \sim 10$, and the escape fraction $f_{\rm esc}$ is the fraction of ionizing photons ($13.6{\rm eV}$) escaping from galaxies at high redshifts, which is expected to be close to unity (@WAN04; see @FS10 for a recent discussion of the relevant $f_{\rm esc}$ and for references to earlier works). From this equation, we find that the mean UV background at the time of reionization exceeds the critical flux $J_{\rm crit,HD}$ by nearly an order of magnitude. While the radiation background will inevitably have spatial fluctuations, this implies that most halos collapsing in the early IGM, prior to reionization at $z \sim 10-20$, would be exposed to a super-critical flux and thus not able to cool below $T \sim 200$K. As a result, the emergence of low–mass PopIII.2 stars (or stars comparable in mass to those formed in the low-redshift universe) would be postponed until supernovae polluted the IGM with heavy elements, and metal-line cooling subsequently enabled gas clouds to reach temperatures near $T_{\rm CMB}$.
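The order-of-magnitude comparison in the preceding paragraph can be checked directly. The sketch below simply transcribes the estimate above, with the fiducial choices $N_{\gamma} = 10$ and $f_{\rm esc} = 1$, and compares it to the quoted threshold $J_{\rm crit,HD} \sim 4 \times 10^{-1}$ for collapsing halos.

```python
def j_bg(z, n_gamma=10.0, f_esc=1.0):
    """Mean LW background (in units of J21) implied by reionization,
    transcribed from the estimate quoted in the text."""
    return 4.0 * (n_gamma / 10.0) / f_esc * ((1.0 + z) / 11.0) ** 3

J_CRIT_HD = 4.0e-1  # threshold quoted for T_vir > 1e4 K collapsing halos

# At z = 10 the fiducial background is J_bg = 4, roughly an order of
# magnitude above the HD-cooling threshold.
excess = j_bg(10.0) / J_CRIT_HD
```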
There are several issues that could be important for the above conclusion, which we have glossed over in the discussion of our models for molecular shielding and one-zone spherical collapse. We next discuss some of these.
[*Uncertainty in the Column Density of the Collapsing Region.*]{} In our calculations, we have taken the diameter of the collapsing region to be of the order of the Jeans length. However, the effective size and column density will depend sensitively on the dynamical properties of the system, including bulk motions and internal velocity gradients in the gas, departures from spherical symmetry, etc.
In order to address this uncertainty quantitatively, we have performed two additional sets of runs for a halo with $T_{\rm vir} {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^4$K (analogous to those in § \[subsubsec:collapse\]). In the first, we increase the assumed size of the collapsing region by a factor of ten (i.e. to ten times the Jeans length $\lambda_{J}$). This increases the column densities by the same factor, and accordingly, $J_{\rm crit}$ is larger by a factor of $\sim 3$ than the original result, due to more efficient self-shielding of ${\rm H_2}$ and – to a lesser extent – HD. Next, we investigated the cooling properties of a smaller collapsing core, with the assumed size reduced by a factor of ten, to $0.1\lambda_{J}$. In this case, a new effect arises. Namely, for the values of $J_{21}$ at which the gas can (just) cool to around $T \sim
200$K, we find that direct HD dissociation is the dominant factor regulating the minimum temperature ultimately reached. In particular, switching HD dissociation off by hand in these cases decreases the gas temperature by a factor of $\sim 1.5$ at high number densities ($n
{\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^4 \; {\rm cm^{-3}}$) for $J = 10^{-1}$. When the flux is weaker than this, artificially disabling the HD-dissociation has little effect; for these low fluxes, however, the suppression of HD–cooling is modest to begin with. As expected, the critical flux is decreased in this case because the smaller column densities leave both ${\rm H_2}$ and HD more susceptible to dissociation. In particular, we find a critical value of $J_{\rm crit}\sim 10^{-1}$.
[*The Impact of 3-D Gas Dynamics on Self-Shielding.*]{} It is important to note that a full treatment of the three-dimensional dynamics of the system and the complexities inherent in radiative transfer is needed to solve the shielding problem exactly. Our calculation is based on a model of a uniform slab of gas with no internal velocity (or temperature) gradients. This is likely to be a poor approximation for a region undergoing runaway gravitational collapse, in which high gas velocities can produce significant Doppler shifts of the LW absorption bands of ${\rm
H_2}$ and HD. In general, we expect this to [*reduce*]{} the effective column densities, and the importance of shielding, compared to our calculations. This correction can be mitigated by the broadening of absorption lines at high column densities, which leads to line widths that are much larger than the average Doppler shift. In general, however, taking into account the possibility of Doppler shifts leads to the conclusion that our self-shielding results, and in turn the values quoted for $J_{\rm crit,HD}$, are [*upper limits*]{}. This strengthens our argument above, namely, that most collapsing halos will see a super-critical flux.
[*Resonant Scattering of Incident LW Photons.*]{} An additional effect ignored by our self-shielding calculation is that after absorbing a LW photon, a fraction of molecules will decay directly back to the original ground state, as opposed to cascading through a series of lower energy decays as we implicitly assume. These photons are thus not eliminated, making the background flux stronger than we calculate. However, @GB01 estimate that in the case of ${\rm H_2}$, such resonant scattering constitutes only a small fraction, 4-8%, of all LW absorption events (depending on the initial level populations; in particular, on the ortho/para ratio). Hence this is a minor effect, which again makes our conclusions about the suppression of HD–cooling conservative.
[*Molecular Level Populations: Implications for Photo–dissociation Rates and the Critical Flux.*]{} As described in §\[subsec:opthin\], our fiducial self-shielding calculations for HD and ${\rm H_2}$ assume that all molecules are initially in the ro-vibrational and electronic ground states. In reality, molecules will occupy higher ro-vibrational states due to collisional excitation, and this can significantly [*increase*]{} the rates of photo–dissociation. In particular, we find that self–shielding is less effective (i.e. the shield factor is larger by a factor of a $\sim$ few) if we repeat our calculations, assuming LTE population distributions in the rotational levels ($J \neq 0$) within the ground electronic and vibrational state. (Data for the HD and ${\rm H_2}$ energy levels were taken from @ARV82 and @D84 respectively.) Other studies have also noted that populations in higher vibrational levels ($v \neq
0$) can significantly increase the rates of photo–dissociation [e.g. @GJ07 and references therein]. Thus, if the effects of collisional excitation are taken into account, $J_{\rm crit,HD/H_2}$ could be significantly lower than found above. The ro-vibrational distribution will be somewhat better approximated by the ground state model up to the critical densities ($n {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^4 {\rm cm^{-3}}$ for ${\rm H_2}$ and $n {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^6 {\rm cm^{-3}}$ for HD). In our case, the fate of a collapsing cloud – whether it will ultimately cool to temperatures low enough for HD–cooling to become significant – is determined in the regime of somewhat lower gas densities, below those at which equilibrium ${\rm H_2}$ populations are established, so that $J_{\rm crit}$ values are likely closer to the upper end of the range.
[*Uncertainties in the Gas Phase Chemistry.*]{} The preceding discussion has focused on various uncertainties associated with photo–dissociation rates of ${\rm H_2}$ and HD; however, the accuracy of any estimates of $J_{\rm crit,HD}$ can only be as good as the accuracy of the underlying chemical rate coefficients. Considerable attention has been dedicated to uncertainties in both ${\rm H_2}$ and HD chemistry and cooling, (e.g. @SKHS04, @GSJ06, @G07, @GA08); accordingly, we restrict the discussion here to focus on two examples of thermal rate coefficients that have recently been updated, and how their revised values affect our results.
Two reaction rates that are crucial for determining the abundance of ${\rm H_2}$, particularly in gas with a large initial ionization fraction, have been estimated to be uncertain by up to an order of magnitude [e.g. @GA08]. These are the associative detachment channel for ${\rm H_2}$ formation, and the mutual neutralization of hydride and protons (reactions 10 and 13 respectively in the compilation by SBH10): $${\rm H + H^- \rightarrow H_2 + e^-}
\label{eq:AD}$$ $${\rm H^+ + H^- \rightarrow H + H}.
\label{eq:mutualneutralization}$$ Both have been revisited recently and new thermal rate coefficients have been provided for reaction (10) by @Savin+10 and for reaction (13) by @Stenrup+09. As noted above, the formation of HD occurs primarily via the reaction pathway shown in equations \[eq:reaction1\]–\[eq:reaction3\], so in general, the fractional abundance of HD is proportional to that of ${\rm H_2}$, justifying the emphasis here on uncertainties associated with the formation of ${\rm H_2}$. In order to quantify how these recently updated rate coefficients impact our results, we have performed additional runs for each of the physical scenarios discussed in § \[sec:results\]. The uncertainties these introduce into estimates of $J_{\rm crit,HD}$ and the minimum temperature, $T_{\rm min}$, reached when $J_{21} = 0$, are summarized below.
We examine three published rate coefficients for the mutual neutralization reaction; the largest of these at all temperatures is that provided by @GP98, hereafter $k_{\rm 13b}$. The rate used in our fiducial model ($k_{13}$), originally provided by @DL87, is smaller than the others by up to $\sim$ an order of magnitude for $T{\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^{3}$K, and by a factor of several at lower temperatures. [^8] The value of the newest thermal rate coefficient, given by @Stenrup+09, $k_{\rm 13c}$, lies between those of $k_{13}$ and $k_{\rm 13b}$ at all temperatures in the regime we study. Using $k_{\rm 13b}$, we find the critical flux, $J_{\rm crit,HD}$ is a factor of $\sim 2$ lower than was found in each of the physical scenarios we have studied (see § \[sec:results\]; note that this does not apply to the model of halos with $T_{\rm vir} < 10^4$K), and $T_{\rm min}$ is $\sim 30 \%$ greater in the spherical collapse models. This is easily explained: when a large ionization fraction exists, reaction (13) competes with reaction (10) for the common reactant, ${\rm H^-}$; thus, adopting a larger rate coefficient for reaction (13) leads to diminished abundance of, and less robust cooling by ${\rm H_2}$. Using the newest rate, $k_{\rm 13c}$, we find the critical flux is $\sim 30-40\%$ smaller than its original value for each physical scenario, and the minimum temperature is elevated $\sim 20\%$ above that found previously in the spherical collapse models. (Note that the minimum temperature reached in the fossil HII region models does not depend on which rate coefficient is used for reaction 13.)
The effect of implementing the new rate for associative detachment is less dramatic; note, however, that the uncertainty associated with this rate coefficient has been reduced thanks to the recent study by @Savin+10. Using the systematic uncertainty given by the authors, we implement the thermal rate at the $\pm~ 1\sigma$ levels, and find the following ranges for the critical flux. In the fossil HII region: $J_{\rm crit,HD} =
3.85 \pm 0.15 ~ \times~ 10^{-3} [n/10^2]$. In $T_{\rm vir} > 10^4$K halos: $J_{\rm crit,HD} = 3.7 \pm 0.2 ~\times~ 10^{-1}$ and $1.2 \pm 0.3 ~\times~ 10^{-2}$ when illuminated by T5 and T4 spectral types respectively. For a thorough discussion of how this new rate coefficient compares to previous calculations and its impact on ${\rm H_2}$ chemistry, the reader is referred to @Savin+10.
Two final notes on the chemical network are in order: first, we have not included in this study the formation of ${\rm H_2}$ via a three-body reaction, by which metal–free gas can become fully molecular at $n {\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}
\raise1pt\hbox{$>$}}}10^8~{\rm cm^{-3}}$, because our models do not include these high-density regimes. Second, as mentioned above, the primordial value of D/H varies in the literature by a factor of $\sim 2$, and this in itself may have consequences for the role of HD–cooling in metal–free gas.
[*Fragmentation and Characteristic Protostellar Masses.*]{} Finally, we emphasize that all our conclusions in this paper are based on the thermal history of a gas cloud, and how this is affected by a UV flux. In order to make realistic predictions for the fragmentation, and the ultimate sizes of stars forming in a collapsing halo, fully 3-D hydrodynamical simulations would be required. While robust HD–line cooling (or lack thereof) could have a notable impact on the characteristic stellar masses in the earliest dwarf galaxies, the process of fragmentation is yet to be fully understood, and thus physical ingredients such as the minimum gas temperature may not directly translate into the actual mass of the protostar that ultimately forms (see, e.g., @Clark+10 for recent results on fragmentation in metal–free gas, and in particular the importance of turbulence).
Summary {#sec:conclusions}
=======
We have demonstrated that HD–cooling in primordial gas can be suppressed by a relatively weak external LW background, with an intensity on the order of $J_{21} \sim 10^{-3}(n/10^{-2}~{\rm
cm^{-3}})$ in constant-density “fossil ionized” gas, and $J_{21}\sim
10^{-1}$ in shock– or photo–ionized gas collapsing into halos with virial temperatures greater than $\sim 10^4$K. These critical intensities are lower than the expected mean UV background at $z\sim 10-20$, suggesting that HD-cooling is likely unimportant in most proto-galaxies forming near and just prior to the epoch of reionization. We conclude that an “HD-mode” of star formation was not as prevalent as previously thought.
On a more technical note: we have also found that the negative feedback of the LW background is mediated via the abundance of molecular hydrogen, which is dissociated by the same radiation in its Lyman and Werner bands. Direct HD photo–dissociation is less important, although we find that in regimes of less effective self–shielding, it can regulate the minimum temperature of the gas. Finally, we have provided fitting formulae for the effects of HD and ${\rm H_2}$ self-shielding, shielding of both species by HI, and shielding of HD by ${\rm H_2}$, which we hope will be useful in future studies.
Acknowledgments
===============
We would like to thank Volker Bromm, Greg Bryan, Simon Glover and Daniel Savin for useful discussions. This work was supported by the Polányi Program of the Hungarian National Office for Research and Technology (NKTH).
[^1]: E-mail: jemma@astro.columbia.edu; zoltan@astro.columbia.edu
[^2]: Using three–dimensional simulations, @MB08 found that HD lowers the expected Pop. III masses less dramatically, but still has an important effect.
[^3]: Adopting the terminology suggested by @BYHM09.
[^4]: Numerical expressions for these cooling processes can be found in, e.g., @HRL96.
[^5]: The similarities in the line strengths of ${\rm H_2}$ and HD, quantified by the product of the oscillator strength and dissociation fraction, can be seen in Figure \[fig:linelist\] below.
[^6]: The fractional abundance of HD relative to that of ${\rm H_2}$ could exceed the cosmological D/H ratio by a large factor, owing to chemical fractionation at low temperatures [see, e.g., @GP98]. However, it never exceeds $\sim 10^{-2}$ in our models.
[^7]: The interested reader can find a much more detailed discussion of this point for the analogous case of ${\rm
H_2}$ cooling in @OH02.
[^8]: @GSJ06 have suggested that this rate is in fact erroneously small, perhaps due to typographical errors in the source. Nonetheless, it is widely used in studies of star formation in metal–free gas.
---
abstract: 'There are $2^n$ possible resolutions of a smooth pseudodiagram with $n$ precrossings. If we consider piecewise-linear (PL) pseudodiagrams and resolutions that themselves are PL, certain resolutions of the pseudodiagram may not exist in $\mathbb{R}^3$. We investigate this situation and its impact on the weighted resolution set of PL pseudodiagrams as well as introduce a concept specific to PL pseudodiagrams, the forcing number. Our main result classifies the PL shadows whose weighted resolution sets differ from the weighted resolution set that would exist in the smooth case.'
author:
- |
Molly Durava[^1]\
North Central College\
madurava$@$noctrl.edu\
\
Neil R. Nicholson\
North Central College\
nrnicholson$@$noctrl.edu\
\
Jackson Ramsey[^2]\
North Central College\
jsramsey$@$noctrl.edu
title: 'Piecewise-linear pseudodiagrams[^3]'
---
Introduction
============
There have been various investigations into properties of smooth pseudoknots and their resolutions [@3; @4; @5; @6; @7], but here we wish to focus our attention on those that are piecewise-linear (PL).
A *pseudodiagram* is a knot diagram that may be missing some classical crossing information, with those crossings being called *precrossings*. If a pseudodiagram has no classical crossings, then it is called a *shadow*. An assignment of crossing information to every precrossing in a pseudodiagram is called a *resolution* of the pseudodiagram. See Fig. \[fig1\].
In general, the resolutions of a piecewise-linear pseudodiagram need not themselves be PL diagrams. However, for the purposes of this paper, we will require that they are. This insistence is natural: a PL shadow is resolved to a PL knot.
A smooth pseudodiagram with $n$ precrossings has $2^n$ resolutions, all of which exist in $\mathbb{R}^3$. A PL pseudodiagram, however, may not.
A resolution of a PL pseudodiagram is called *realizable* if it exists in $\mathbb{R}^3$ and *nonrealizable* if it does not.
Figure \[5star\] is one example of a shadow and a nonrealizable resolution of it [@9]. Theorem \[badshadowtheorem\] classifies the other shadows that have nonrealizable resolutions.
The remainder of this paper is an investigation into weighted resolution sets and forcing numbers of PL shadows. Weighted resolutions sets are an extension of the definition first appearing in [@4] but require exploration due to nonrealizable resolutions. Next, we further explore the notion of realizability by introducing the forcing number for a diagram. We conclude with possible directions for future work.
Weighted Resolution Sets and Forcing Number {#WeReSets}
===========================================
The notion of a weighted resolution set for a pseudodiagram was introduced by Henrich et al [@4]. Because some resolutions of PL pseudodiagrams may be nonrealizable, we must adjust their definition.
The *weighted resolution set* (WeRe-set) of a PL pseudodiagram $D$ is the set of all ordered pairs $(K,p_K)$ and $(\emptyset, p_{\emptyset})$, where $K$ is a realizable resolution of $D$ and $p_K$ is the probability that $K$ is obtained from $D$ by randomly assigning crossing information to every precrossing in $D$ (with either assignment of crossing information to a precrossing being equally likely) and $p_{\emptyset}$ is the probability that the resolution is not realizable.
A quick sketch of the 32 resolutions of Fig. \[5star\](a) shows that the shadow has WeRe-set $\{(0_1,\frac{20}{32}),(\emptyset,\frac{12}{32})\}$. In the smooth case, the WeRe-set would be $\{(0_1,\frac{20}{32}),(3_1, \frac{10}{32}),(5_1, \frac{2}{32}) \}$ [@4]. Besides the difference of PL resolutions being nonrealizable, note that in the smooth case the knot $5_1$ occurs as a resolution. Because a nontrivial PL knot requires at least six edges [@8; @10], we know that the shadow of Fig. \[5star\] cannot be resolved to a PL diagram of $5_1$.
For reference, we calculate here the WeRe-set for each of the shadows appearing in Fig. \[wereshadows\]. Figure \[wereshadows\](a) has WeRe-set $\{(0_1,\frac{6}{8}),(\emptyset,\frac{2}{8})\}$, while Fig. \[wereshadows\](b) has WeRe-set $\{(0_1,\frac{20}{32}),(3_1,\frac{2}{32}),(\emptyset,\frac{10}{32}) \}$ and Fig. \[wereshadows\](c) has WeRe-set $\{(0_1,\frac{20}{32}),(3_1,\frac{10}{32}),(\emptyset,\frac{2}{32}) \}$. We note that all of these particular shadows have nonrealizable resolutions and this leads to a natural question: can we classify the shadows with such a property? That is, which shadows, when considered in the PL sense, have a WeRe-set that differs if we were to consider the shadow in the smooth sense? Lemma \[badshadowlemma\] provides four such cases.
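The arithmetic behind these sets is straightforward: each probability is a count of resolutions divided by $2^n$, with the nonrealizable resolutions pooled under $\emptyset$. The helper below is an illustrative sketch (not code from any of the cited works); the counts passed to it are those quoted above for Fig. \[5star\](a).

```python
from fractions import Fraction

def were_set(counts, n):
    """WeRe-set of a PL pseudodiagram with n precrossings.

    counts maps each knot type to the number of the 2**n resolutions
    realizing it; any remaining resolutions are nonrealizable and are
    reported under the key "empty" (standing in for the empty set).
    """
    total = 2 ** n
    were = {knot: Fraction(c, total) for knot, c in counts.items()}
    leftover = total - sum(counts.values())
    if leftover > 0:
        were["empty"] = Fraction(leftover, total)
    return were

# Fig. 5star(a): five precrossings, 20 resolutions giving the unknot
# 0_1, and the remaining 12 nonrealizable.
ws = were_set({"0_1": 20}, 5)   # {"0_1": 5/8, "empty": 3/8}
```

Note that the probabilities in a WeRe-set, including $p_{\emptyset}$, always sum to one, which is a quick sanity check on any hand-computed set.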
\[badshadowlemma\] If $S$ is a PL shadow that has a portion of it isotopic to one of the diagrams in Fig. \[badshadows\] (or a mirror image of such a diagram), then $S$ has nonrealizable resolutions, and hence, the WeRe-set for $S$ differs from the WeRe-set when $S$ is considered to be a shadow of a smooth knot.
Observe that the resolutions of the PL shadows of Fig. \[badshadows\] that appear in Fig. \[badshadowresolutions\], respectively, are not possible in $\mathbb{R}^3$.\
Are there other shadows with nonrealizable resolutions? Note that a resolution $R$ of a shadow $S$ is nonrealizable if an edge of $R$ is forced to “bend”; that is, there exists a plane that the edge crosses yet the edge has both of its endpoints on the same side of the plane. The following theorem, our main result, proves that Lem. \[badshadowlemma\] is a complete categorization of such shadows. We will be using the following notation. A PL shadow $S$ consists of $n$ distinct points\
$v_1$, $v_2$, ..., $v_n$, where $v_i = (x_i,y_i,0)$, in the plane and $n$ linear segments, called edges, $e_i = v_{i-1}v_{i}$ (considering the $v_i$ cyclically), so that any two edges that intersect must do so transversally. A resolution of $S$ is an assignment to each $v_i$ a point in $\mathbb{R}^3$, $\overline{v_i} = (x_i,y_i,z_i)$, so that no two resolved edges $\overline{e_i} = \overline{v_{i-1}} \hspace{2pt} \overline{v_{i}}$ intersect except at their endpoints.
\[badshadowtheorem\] Any shadow that has nonrealizable resolutions must have a portion of it isotopic to one of the figures in Fig. \[badshadows\].
If $S$ is a shadow with resolution $R$, using the above notation, then there are two types of planes to consider: those formed by two adjacent edges of $R$ and those not containing two adjacent edges of $R$. If $e_i$ and $e_{i+1}$ are two adjacent edges of $S$, then $\overline{e_i}$ and $\overline{e_{i+1}}$ will always create a plane in $\mathbb{R}^3$, whether or not the vertices of $R$ are translated. Thus, we must determine what resolutions are nonrealizable with regard to these planes. That is, starting with two adjacent edges of $S$, what other arrangements of edges of $S$ could lead to potentially nonrealizable resolutions? This analysis has been carried out in [@9], yielding the four cases of Fig. \[badshadows\].
Now suppose a plane $P$ does not contain adjacent edges of $R$. Let us assume $P$ does contain an edge $\overline{e_i} = \overline{v_{i-1}} \overline{v_i}$ of $R$ and a third point $p$, not on $\overline{e_{i-1}}$ or $\overline{e_{i+1}}$ of $R$ (lest $P$ contain two adjacent edges of $R$). $P$ can be projected to the $xy$-plane. If $e_j$ is an edge of $S$ intersecting this region, then we can guarantee that $\overline{v_{j-1}}$ and $\overline{v_j}$ lie on opposite sides of $P$, since $\overline{v_{j-1}}$ and $\overline{v_j}$ are not both endpoints of $\overline{e_i}$ or of the edge on which $p$ lies. The points $\overline{v_{j-1}}$ and $\overline{v_j}$ can be isotoped ($(x_j,y_j,z_j)$ of $R$ can be translated to $(x_j,y_j,z_j + \epsilon_j)$, for example, where $\epsilon_j \in \mathbb{R}$) while still preserving the knot type of $R$. See Fig. \[plane\].
The above argument holds for any plane not containing two adjacent edges of $R$, and thus, the result holds.
![Disregarding planes not formed by adjacent edges[]{data-label="plane"}](plane.eps "fig:"){width="30.00000%"} (-123,50)[$\overline{v_{i-1}}$]{} (-87,70)[$\overline{e_{i}}$]{} (-56,91)[$\overline{v_{i}}$]{} (-98,15)[$\overline{v_{j-1}}$]{} (-30,98)[$\overline{v_{j}}$]{} (-39,71)[$\overline{e_{j}}$]{} (-30,30)[$p$]{} (-75,55)[$P$]{}
We immediately see that all resolutions of one category of shadows are realizable.
The shadow of the PL $(n,2)$-torus knot, for $n$ odd, $n \geq 7$ (as pictured in Fig. \[torus\]) has $2^n$ realizable resolutions.
For such $n$, these shadows contain no portion of their diagrams isotopic to those in Fig. \[badshadows\].
If one starts with the shadow of Fig. \[5star\](a) and begins choosing resolutions for the precrossings, with the goal of creating a realizable resolution, then there may come a point when there is no free choice of resolution for a particular precrossing. It may happen that, in order to realize the resolution, the precrossing is forced to be assigned one particular type of crossing. This idea motivates the following two definitions.
Let $D$ be a PL pseudodiagram with $P$ the set of precrossings of $D$. Then, $S \subset P$ is said to *force* $D$ if there exists an assignment of crossing information to each $s \in S$ so that all precrossings of $P - S$ must be resolved one particular way in order to realize the resolution of $D$ in $\mathbb{R}^3$.
If $D$ is a PL pseudodiagram, then the *forcing number of D*, $f(D)$, is the size of the smallest set of precrossings of $D$ that forces $D$.
\[starlemma\] If $D$ is the shadow of Fig. \[5star\], then $f(D) = 2$.
It is clear that no pseudodiagram has forcing number $1$, so it suffices to find a set of two precrossings of $D$ that forces $D$. Choose the resolutions of the two crossings as pictured in Fig. \[5starforce\](a). By Lemma \[badshadowlemma\], two precrossings are forced to be resolved a certain way, as in Fig. \[5starforce\](b), for if either resolution were switched, regardless of how the remaining crossings are resolved, the resolution of the shadow would be nonrealizable (see Fig. \[badshadowresolutions\](b)). Once resolved, by a similar argument, the final precrossing is forced to be resolved as in Fig. \[5starforce\](c).
What if a shadow contains multiple portions isotopic to those in Fig. \[badshadows\]?
Let $S$ be a shadow with $n$ portions of it isotopic to those appearing in Fig. \[badshadows\]. If $m$ is the maximum number of crossings of $S$ that can be forced, then
$$m \leq n.$$
Note that a precrossing in a pseudodiagram $S$ can be forced only if the other choice of resolution for it results in a nonrealizable resolution. The only situation in which this could occur is if a portion of $S$ is isotopic to one of Fig. \[badshadowresolutions\] (or other nonrealizable resolutions of Fig. \[badshadows\]) with one of the classical crossings still remaining a precrossing. Then, there is only one way to resolve this precrossing that yields a realizable resolution. This is true for each region of $S$ isotopic to one of Fig. \[badshadows\], proving the result.
Future Questions {#future}
================
There are numerous questions that these concepts naturally lead to. In particular, a few of them are as follows. Are there other relationships between smooth and piecewise-linear pseudodiagrams? Do patterns emerge in WeRe-sets, much like those found in the smooth case [@4]? Are there deeper relationships between the forcing number and piecewise-linear virtual knots? In [@2], the topology of $n$-sided polygons embedded in $\mathbb{R}^3$, for small values of $n$, is explored. Understanding any connections between those spaces and forcing number may lead to a better understanding of the space’s topology for higher values of $n$. Lastly, the concept of forcing number may potentially lead to stronger bounds on the edge index [@1; @8] of PL knots, a fundamental question in PL knot theory.
L. Bennett, Edge index and arc index of knots and links, Thesis (Ph.D.), The University of Iowa (2008).
J. A. Calvo, Geometric knot spaces and polygonal isotopy, Knots in Hellas ’98, Vol. 2 (Delphi), *J. Knot Theory Ramifications* **10** (2001) 245–267.
R. Hanaki, Pseudo diagrams of knots, links, and spatial graphs, *Osaka J. Math.* **47** (2010) 863–883.
A. Henrich, R. Hoberg, S. Jablan, L. Johnson, E. Minten, and L. Radović, The theory of pseudoknots, *J. Knot Theory Ramifications*, to appear.
D. Liu, S. Mackey, N. Nicholson, T. Schroeder, and K. Thomas, Average bridge number of shadow resolutions, submitted.
A. Henrich and S. Jablan, On the coloring of pseudoknots, available at http://arxiv.org/pdf/1305.6596.pdf, July 31, 2013.
A. Henrich, N. MacNaughton, S. Narayan, O. Pechenik, and J. Townsend, Classical and virtual pseudodiagram theory and new bounds on unknotting numbers and genus, *J. Knot Theory Ramifications* **20** (2011) 625–650.
M. Meissen, Edge number results for piecewise-linear knots, *Knot Theory (Warsaw, 1995)*, Banach Center Publ. **42**, Polish Acad. Sci., Warsaw (1998), 235–242.
N. Nicholson, Piecewise-linear virtual knots, *J. Knot Theory Ramifications* **20** (2011) 1271–1284.
R. Randell, Invariants of piecewise-linear knots, *Knot Theory (Warsaw, 1995)*, Banach Center Publ. **42**, Polish Acad. Sci., Warsaw (1998), 307–319.
[^1]: North Central College undergraduate
[^2]: North Central College undergraduate Lederman Scholar
[^3]: Keywords: shadow, pseudoknot, weighted resolution set, piecewise linear
---
abstract: 'Magnetic field measurements in the upper chromosphere and above, where the gas-to-magnetic pressure ratio $\beta$ is lower than unity, are essential for understanding the thermal structure and dynamical activity of the solar atmosphere. Recent developments in the theory and numerical modeling of polarization in spectral lines have suggested that information on the magnetic field of the chromosphere-corona transition region could be obtained by measuring the linear polarization of the solar disk radiation at the core of the hydrogen Lyman-$\alpha$ line at 121.6 nm, which is produced by scattering processes and the Hanle effect. The Chromospheric Lyman-$\alpha$ Spectropolarimeter (CLASP) sounding rocket experiment aims to measure the intensity (Stokes $I$) and the linear polarization profiles ($Q/I$ and $U/I$) of the hydrogen Lyman-$\alpha$ line. In this paper we clarify the information that the Hanle effect can provide by applying a Stokes inversion technique based on a database search. The database contains all theoretical $Q/I$ and $U/I$ profiles calculated in a one-dimensional semi-empirical model of the solar atmosphere for all possible values of the strength, inclination, and azimuth of the magnetic field vector, though this atmospheric region is highly inhomogeneous and dynamic. We focus on understanding the sensitivity of the inversion results to the noise and spectral resolution of the synthetic observations as well as the ambiguities and limitations inherent to the Hanle effect when only the hydrogen Lyman-$\alpha$ line is used. We conclude that spectropolarimetric observations with CLASP can indeed be a suitable diagnostic tool for probing the magnetism of the transition region, especially when complemented with information on the magnetic field azimuth that can be obtained from other instruments.'
author:
- 'R. Ishikawa, A. Asensio Ramos, L. Belluzzi, R. Manso Sainz, J. [Š]{}t[ě]{}p[á]{}n, J. Trujillo Bueno, M. Goto, and S. Tsuneta'
title: 'On the inversion of the scattering polarization and the Hanle effect signals in the hydrogen Lyman-$\alpha$ line'
---
Introduction
============
The chromosphere and the transition region of the Sun lie between the cooler photosphere, where the ratio of gas to magnetic pressure $\beta>1$, and the $10^6$ K corona, where $\beta<1$. It is believed that in this interface region the magnetic forces start to dominate over the hydrodynamic forces, and that local energy dissipation and energy transport to the upper layers via various fundamental plasma processes are taking place. Recent observations [e.g., @Shibata2007; @Katsukawa2007; @DePontieu2007; @Okamoto2007; @Okamoto2011; @Vecchio2009] have revealed ubiquitous dynamical chromospheric activities such as jets, Alfvénic waves, and shocks, which are thought to play a key role in the heating of the chromosphere and corona and in the acceleration of the solar wind. However, we do not have any significant empirical knowledge on the strength and direction of the magnetic field in the upper solar chromosphere and transition region.
The information on the magnetic field of the solar atmosphere is encoded in the polarization that some physical mechanisms introduce in the spectral lines. The familiar Zeeman effect can introduce polarization in the spectral lines that originate in the upper solar chromosphere and the transition region. However, because such lines are broad and the magnetic field there is expected to be rather weak, the induced polarization amplitudes will be very small (except perhaps in sunspots), and the Zeeman effect has limited applicability. Fortunately, the Hanle effect [the magnetic-field-induced modification of the linear polarization caused by scattering processes in a spectral line, @Casini2008] in some of the allowed UV lines that originate in the upper chromosphere and transition region is expected to be a more suitable diagnostic tool [@Trujillo2011; @Trujillo2012; @Belluzzi2012].
The hydrogen Lyman-$\alpha$ line ($\lambda=121.567$ nm) is particularly suitable because (1) the line-core polarization originates at the base of the solar transition region, where ${\beta}{\ll}1$ [@Trujillo2011; @Belluzzi2012b; @Stepan2012], (2) collisional depolarization plays a rather insignificant role [e.g., @Stepan2011], and (3) via the Hanle effect the scattering polarization is sensitive to the magnetic fields expected for the upper chromosphere and transition region [@Trujillo2011].
The Chromospheric Lyman-Alpha Spectropolarimeter (CLASP) is a sounding rocket experiment developed by researchers from Japan, USA, and Europe [@Ishikawa2011; @Narukage2011; @Kano2012; @Kobayashi2012], which is expected to fly in 2015. The first, very important goal of this sounding rocket experiment is the measurement of the linear polarization signals produced by scattering processes in the Lyman-$\alpha$ line. The second goal is the detection of the Hanle effect action on the core of $Q/I$ and $U/I$ in order to constrain the magnetic field of the transition region from the observed Stokes profiles. CLASP will measure the linear polarization profiles of the Lyman-$\alpha$ line within a spectral window of at least $\pm0.05$ nm around the line center, where in addition to the line core polarization itself (where the Hanle effect operates), we expect the largest linear polarization signals produced by the joint action of partial frequency redistribution and $J-$state interference effects [@Belluzzi2012b]. Polarization sensitivities of 0.1% and 0.5% are required in the line core (i.e., $\pm0.02$ nm around the line center) and in the line wings (at $>\pm0.05$ nm), respectively. In order to achieve these polarization sensitivities, the $400\arcsec$ spectrograph slit will be fixed at the selected observing target during the CLASP observation time of $<5$ min. Furthermore, after the data recovery, we will add consecutive measurements and perform spatial averaging.
In this paper, we clarify the information we expect to determine with the CLASP experiment, providing a strategy suitable for highlighting the ambiguities of the Hanle effect and the complexity of the ensuing inference problem. To this end, we have used a plane-parallel (one-dimensional) semi-empirical model of the solar atmosphere, and we have created a database of theoretical Stokes $Q/I$ and $U/I$ profiles for all possible strength, inclination, and azimuth values of the magnetic field vector. Then, we investigate the possibility of recovering the magnetic field information using the characteristics of CLASP (noise level, spectral resolution, etc.) and the ambiguities and limitations inherent to the Hanle effect. We also discuss the most suitable observing targets and data analysis strategy for constraining the magnetic field information.
The ambiguities inherent to magnetic field diagnostics can be reduced by exploiting the joint action of the Hanle and Zeeman effects, where both linear and circular polarization signals are used to constrain the magnetic field vector [@Landi1993; @Landi1982; @Trujillo2002; @Lopez2003 see also @Asensio2008 [@Centeno2010; @Anusha2011]]. Unfortunately, while the Lyman-$\alpha$ line is most advantageous for exploring the magnetism of the transition region (where $\beta\ll1$), it is challenging to measure the contribution of the Zeeman effect to the circular polarization in UV lines. In the subsequent sections, we propose alternative ways to alleviate this issue.
We assume that the quiet solar atmosphere can be represented by a plane-parallel semi-empirical model atmosphere, even though the upper solar chromosphere is a highly inhomogeneous and dynamic physical system, much more complex than the idealization of the one-dimensional static semi-empirical model used here. Such inhomogeneity and dynamics cause larger amplitudes and spatial variations in the scattering polarization signals [@Stepan2012; @Stepan2014]. However, it is of interest to note that @Trujillo2011 showed that the amplitude and shape of the $Q/I$ profiles calculated in the quiet-Sun plane-parallel semi-empirical model of @Fontenla1993 are qualitatively similar to the temporally-averaged profiles obtained from the Stokes $I$ and $Q$ signals computed at each time step of the chromospheric hydro-dynamical model of @Carlsson1997. Moreover, spatial and temporal averaging of the scattering polarization calculated in three-dimensional (3D) atmospheric models tends to produce $Q/I$ Lyman-$\alpha$ signals more or less similar to those calculated in plane-parallel semi-empirical model atmospheres [@Stepan2014]. In the case of the CLASP experiment, which will require both spatial ($\sim10\arcsec$) and temporal averaging ($\sim$5 min) to attain the necessary signal-to-noise ratio, the model atmosphere for a first approximate interpretation of the CLASP data could be a plane-parallel semi-empirical model. We believe that the post-launch data analysis will pave the way for further improvements, for example via forward modeling calculations of the Lyman-$\alpha$ scattering polarization signals using increasingly realistic 3D models of the solar chromosphere.
Scattering polarization and the Hanle effect
============================================
The expected scattering polarization in the core of the hydrogen Lyman-$\alpha$ line physically originates from population imbalances and quantum coherence between the magnetic sublevels pertaining to the $2p$ ${^2{\rm P}}_{3/2}$ upper level, both of which are produced by the absorption of anisotropic radiation by the hydrogen atoms of the solar transition region.
Typically, in weakly magnetized stellar atmospheres, the absorption of anisotropic radiation produces atomic level alignment (i.e., population imbalances between magnetic sublevels having different $|M|$ values, $M$ being the magnetic quantum number). On the other hand, atomic level orientation (i.e., population imbalances between sublevels with $M > 0$ and $M < 0$) can be neglected in the modeling of the Lyman-$\alpha$ linear polarization. The anisotropy and symmetry properties of the radiation field that illuminates the atomic system can be conveniently quantified through the so-called radiation field tensor $J^K_Q$ [see Eq. (5.157) of @Landi2004]. If the radiation field has cylindrical symmetry along a given direction, and it has no circular polarization, then, if the quantization axis is taken along the symmetry axis, only the components $J^0_0$ and $J^2_0$ are non-zero. The former represents the mean intensity of the radiation field, while the latter quantifies its degree of anisotropy. The explicit expression of the $J^2_0$ component of the frequency-integrated radiation field tensor is given by
$$J^2_0 = \int {\rm d}x \oint \frac{{\rm d} \Omega}{4 \pi}
\frac{\phi_x}{2 \sqrt{2}} \left[ (3 \mu^2 - 1) I_{x \Omega} +
3(1 - \mu^2) Q_{x \Omega} \right] \; ,$$
where $\phi_x$ is the absorption profile, $x$ is the normalized frequency distance from line center, and $\mu = \cos\theta$ (with $\theta$ the angle between the radiation beam under consideration and the quantization axis). In the solar atmosphere, the contribution of the Stokes $Q_{x \Omega}$ parameter to $J^2_0$ is very small compared with that of the specific intensity $I_{x \Omega}$. Thus, in analogy with the spherical harmonic $Y^0_2$, the sign of $J^2_0$ indicates whether the local illumination of the atomic system is predominantly vertical ($J^2_0 > 0$) or predominantly horizontal ($J^2_0 < 0$). In the case of $J^2_0 > 0$ ($J^2_0 < 0$), photon absorption processes take place predominantly through $\Delta M = \pm 1$ ($\Delta M = 0$) transitions, which gives rise to population imbalances among the various magnetic sublevels.
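The sign convention above can be checked numerically. The following sketch (our illustration, not code from the paper) evaluates the angular part of $J^2_0$ for an unpolarized ($Q=0$), axially symmetric radiation field $I(\mu)$; the Gaussian beam profiles are arbitrary assumptions chosen only to represent predominantly vertical and predominantly horizontal illumination.

```python
import numpy as np

def j20(intensity, n_mu=4001):
    """J^2_0 for an unpolarized (Q = 0), axially symmetric field I(mu);
    d(Omega)/(4*pi) reduces to d(mu)/2 under axial symmetry."""
    mu = np.linspace(-1.0, 1.0, n_mu)
    w = (3.0 * mu**2 - 1.0) * intensity(mu)
    # trapezoidal quadrature over mu, with the 1/(2*sqrt(2)) prefactor
    return np.sum(0.5 * (w[1:] + w[:-1])) * (mu[1] - mu[0]) / (2.0 * 2.0 * np.sqrt(2.0))

# Beams concentrated near mu = +/-1 (predominantly vertical illumination)
vertical = j20(lambda mu: np.exp(-((np.abs(mu) - 1.0) / 0.2) ** 2))
# Beams concentrated near mu = 0 (predominantly horizontal illumination)
horizontal = j20(lambda mu: np.exp(-(mu / 0.2) ** 2))

print(vertical > 0.0, horizontal < 0.0)  # True True
```

For an isotropic field the weight $(3\mu^2-1)$ integrates to zero, so $J^2_0$ vanishes, consistent with it being a pure measure of anisotropy.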
In this work, we consider a plane-parallel atmosphere, whose parameters depend only on the height. Taking the quantization axis along the vertical, it can be shown that in the absence of magnetic fields, or in the presence of a vertical magnetic field, any coherence between pairs of magnetic sublevels of the $2p$ ${^2{\rm P}}_{3/2}$ upper level is zero, while it is non-zero if an inclined magnetic field is present.
Although choosing the quantization axis along the local vertical can be advantageous in some cases, in order to investigate the impact of the magnetic field on the atomic polarization (i.e., the Hanle effect), it is convenient to choose it along the magnetic field direction. In this case, describing atomic polarization through the multipole moments of the density matrix, $\rho^K_Q$ [e.g., Sect. 3.7 of @Landi2004], the effect of the magnetic field is described by the equation [see Sect. 10.3 of @Landi2004] $$\rho^K_Q(J_u) = \frac{1}{1 + {\rm i} Q \Gamma_u}
\left[ \rho^K_Q(J_u) \right]_{B=0} \; ,$$ where $\Gamma_u = 8.79 \times 10^6 \, B \, g_{J_u} / A_{u \ell}$ ($B$ is the field strength in gauss, $g_{J_u}$ is the Landé factor, and $A_{u \ell}$ is the Einstein coefficient for the spontaneous emission process in s$^{-1}$), and $[\rho^K_Q(J_u)]_{B=0}$ represents the value of the $\rho^K_Q(J_u)$ elements in the non-magnetic case. This equation shows that in the magnetic field reference frame the population imbalances represented by the $\rho^K_Q(J_u)$ elements with $Q = 0$ are unaffected by the magnetic field, while the elements with $Q \ne 0$, describing the atomic coherence, are reduced and dephased with respect to the non-magnetic case. The Hanle effect can thus be defined as the modification of the atomic-level polarization (in particular, of the coherence) and the ensuing observable effects on the emergent Stokes $Q$ and $U$ profiles, caused by the action of an inclined magnetic field. For the hydrogen Lyman-$\alpha$ line, the magnetic field strength for which $\Gamma_u = 1$ (i.e., the critical magnetic field for the onset of the Hanle effect) is $B = 53$ G.
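These relations can be evaluated directly. The sketch below is an illustration under assumed atomic data (Einstein coefficient $A_{u\ell}\approx6.265\times10^{8}$ s$^{-1}$ and Landé factor $g_{J_u}=4/3$ for the $2p$ $^2\mathrm{P}_{3/2}$ level, values not quoted in the text): it recovers the critical field of about 53 G and shows that the coherence-reduction factor $1/|1+\mathrm{i}Q\Gamma_u|$ changes strongly below about 50 G but only weakly above.

```python
A_ul = 6.265e8    # Einstein A coefficient of Ly-alpha in s^-1 (assumed value)
g_u = 4.0 / 3.0   # Lande factor of the 2p 2P_3/2 upper level (assumed value)

def gamma_u(B):
    """Hanle parameter Gamma_u = 8.79e6 * B * g_Ju / A_ul, with B in gauss."""
    return 8.79e6 * B * g_u / A_ul

# Critical field strength for the onset of the Hanle effect: Gamma_u = 1
B_crit = A_ul / (8.79e6 * g_u)
print(round(B_crit))  # -> 53 (gauss)

def reduction(B, Q=2):
    """Amplitude reduction of a Q != 0 coherence: |1 / (1 + i*Q*Gamma_u)|."""
    return 1.0 / abs(1.0 + 1j * Q * gamma_u(B))

# Large change below ~50 G, little change above (approach to Hanle saturation)
print(reduction(10), reduction(50), reduction(250))
```

The weak dependence of the reduction factor above ~50 G is the saturation behavior discussed in the Hanle-diagram section.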
Database
========
The Lyman-$\alpha$ line consists of two blended transitions between the $1s$ $^2{\rm S}_{1/2}$ lower level and the $2p$ $^2{\rm P}_{1/2}$ and $2p$ $^2{\rm P}_{3/2}$ upper levels. To estimate the linear polarization of the Lyman-$\alpha$ line, we follow the approach of @Trujillo2011, and we provide a quick overview here (refer to @Trujillo2011 for details). We consider the quiet-Sun semi-empirical model of @Fontenla1993, which is hereafter referred to as the FAL-C model. Thus, our model atmosphere is plane-parallel, and the physical quantities only depend on the coordinate $Z$. The hydrogen atomic model we have used includes the fine structure of the first two $n$-levels of hydrogen, where $n$ is the principal quantum number. The excitation state of each level is quantified by means of the multipolar components of the atomic density matrix, which are self-consistently obtained by solving the statistical equilibrium equations and the Stokes vector radiative transfer equation [see chapter 7 of @Landi2004]. Thus, we assume complete frequency redistribution (CRD), which is a suitable approximation for the estimation of the scattering polarization at the Lyman-$\alpha$ line center [@Belluzzi2012b]. Isotropic collisions with protons and electrons are also taken into account, but these collisions have a negligible depolarizing effect on the scattering polarization of the hydrogen Lyman-$\alpha$ line at the low plasma densities of the upper chromosphere and the transition region, as discussed by @Stepan2011.
By solving the above-mentioned non-local thermodynamic equilibrium (non-LTE) radiative transfer problem, we created a database of synthetic Stokes profiles ($I(\lambda)$, $Q(\lambda)$, and $U(\lambda)$) of the hydrogen Lyman-$\alpha$ line. We consider the presence of a deterministic magnetic field of arbitrary strength $B$, inclination $\theta_B$, and azimuth $\chi_B$, all of which are assumed to be constant with height (Fig.\[fig:geometry\]). In total, there are 137751 sets of $Q(\lambda)/I(\lambda)$ and $U(\lambda)/I(\lambda)$ profiles for different magnetic field strengths ($0\le B\le 250~\mathrm{G}$ in $5~\mathrm{G}$ increments), inclinations ($0^{\circ}\le \theta_B \le 180^{\circ}$ in $5^{\circ}$ increments), and azimuths ($0^{\circ}\le \chi_B \le 360^{\circ}$ in $5^{\circ}$ increments). An example of a synthetic profile is given in Figure \[fig:profile\] (solid lines). For all magnetic parameters, the Stokes $I(\lambda)$ profiles are virtually identical because only the polarization signals have a measurable sensitivity to the magnetic field. Here, we consider two scattering geometries: disk center ($\mu=1.0$) and close-to-the-limb ($\mu=0.3$).
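As a sanity check on the stated grid, the database size follows directly from the inclusive parameter ranges (51 field strengths, 37 inclinations, and 73 azimuths); the short sketch below is illustrative only, with our own variable names.

```python
import itertools

strengths = range(0, 251, 5)      # 0 <= B <= 250 G in 5 G steps -> 51 values
inclinations = range(0, 181, 5)   # 0 <= theta_B <= 180 deg in 5 deg steps -> 37 values
azimuths = range(0, 361, 5)       # 0 <= chi_B <= 360 deg in 5 deg steps -> 73 values

n_models = sum(1 for _ in itertools.product(strengths, inclinations, azimuths))
print(n_models)  # -> 137751, the number of (Q/I, U/I) profile sets in the database
```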
Here, we also simulate the CLASP observations. The wavelength resolution of the CLASP optics is $\sim0.013$ nm, and the Stokes spectra are recorded with a wavelength sampling of 0.005 nm pixel$^{-1}$. For a spatial area of less than $10\arcsec$, the polarization sensitivity is 0.1% with respect to the intensity at each wavelength pixel in the line core ($121.567\pm0.02$ nm). First, the synthetic profiles $I(\lambda),
Q(\lambda)$, and $U(\lambda)$ in the database are convolved with a 0.013 nm FWHM Gaussian and are then sampled with a wavelength step of 0.005 nm. Assuming that the polarization sensitivity will be achieved at a $3\sigma$ level, random noise with a standard deviation of $\sigma=0.033\%$ with respect to the intensity at each wavelength bin is added to the convolved $I$, $Q$, and $U$ profiles. Finally, we derive the simulated $Q^{obs}(\lambda)/I^{obs}(\lambda)$ and $U^{obs}(\lambda)/I^{obs}(\lambda)$ profiles (squares in Fig.\[fig:profile\]).
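The degradation steps described above (spectral smearing, pixel resampling, and noise addition) can be sketched as follows. This is a minimal illustration under stated assumptions, namely a uniform input wavelength grid and Gaussian white noise scaled by the local intensity; the function name and interface are ours, not CLASP pipeline code.

```python
import numpy as np

def simulate_clasp(wl, I, Q, U, fwhm=0.013, step=0.005, sigma=3.3e-4, seed=0):
    """Degrade synthetic Stokes profiles to CLASP-like observations:
    convolve with a Gaussian of given FWHM (nm), resample at `step` nm/pixel,
    and add noise with standard deviation sigma relative to the intensity."""
    rng = np.random.default_rng(seed)
    dw = wl[1] - wl[0]                                  # uniform grid assumed
    s = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))       # FWHM -> Gaussian sigma
    x = np.arange(-4.0 * s, 4.0 * s + dw, dw)
    kernel = np.exp(-0.5 * (x / s) ** 2)
    kernel /= kernel.sum()

    def degrade(y):
        smeared = np.convolve(y, kernel, mode="same")   # spectral smearing
        wl_obs = np.arange(wl[0], wl[-1], step)
        return wl_obs, np.interp(wl_obs, wl, smeared)   # pixel resampling

    wl_obs, I_obs = degrade(I)
    _, Q_obs = degrade(Q)
    _, U_obs = degrade(U)
    Q_obs = Q_obs + rng.normal(0.0, sigma * I_obs)      # noise ~ sigma * I
    U_obs = U_obs + rng.normal(0.0, sigma * I_obs)
    return wl_obs, Q_obs / I_obs, U_obs / I_obs         # simulated Q/I, U/I
```

With $\sigma=0.033\%$ this corresponds to the assumption that the 0.1% polarization sensitivity is achieved at the $3\sigma$ level, as stated above.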
Hanle diagrams {#hanle_diagram}
==============
The behavior of the $Q/I$ and $U/I$ signals with respect to the magnetic field parameters (strength, inclination, and azimuth of the magnetic field vector) can be suitably illustrated by a Hanle diagram [see also @Trujillo2012]. In Figure \[fig:diagram1\], we show Hanle diagrams with a horizontal magnetic field (i.e., the inclination is fixed at $\theta_B=90^{\circ}$) for different values of the azimuth angle when the field strength varies between 0 G (black circles) and 250 G (white circles). Figure \[fig:diagram1\] (a) corresponds to a close-to-limb geometry ($\mu=0.3$), while Figure \[fig:diagram1\] (b) refers to the forward scattering case of a disk center observation ($\mu=1.0$). The $Q/I$ and $U/I$ signals in this figure refer to the amplitudes of the original synthetic profiles $Q(\lambda_0)/I(\lambda_0)$ and $U(\lambda_0)/I(\lambda_0)$ in the database at the line center, where $\lambda_0=121.567$ nm. Figure \[fig:diagram2\] is similar to Figure \[fig:diagram1\], but the magnetic field is nearly vertical (i.e., the inclination is fixed at $\theta_B=20^{\circ}$).
For the close-to-limb geometry ($\mu=0.3$), the unmagnetized case ($B=0$ G) shown in black circles in Figs. \[fig:diagram1\] (a) and \[fig:diagram2\] (a) yields negative Stokes $Q/I$ values, which indicates that the direction of linear polarization caused by the anisotropic radiation field (i.e., the non-magnetic scattering polarization) is perpendicular to the solar limb. As shown by @Trujillo2011, the anisotropy of the radiation field, $J^2_0$, illuminating the hydrogen atoms in the Lyman-$\alpha$ line is negative (dominated by horizontal illumination) through the line formation region of the FAL-C model atmosphere. The resulting scattering polarization is perpendicular to the limb. At the disk center ($\mu=1.0$), where the line of sight (LOS) is parallel to the solar normal (i.e., the symmetry axis of the radiation field), we have $Q/I=0$ and $U/I=0$ for the non-magnetized case (black circles in Figs. \[fig:diagram1\] (b) and \[fig:diagram2\] (b)).
The crossing points seen in the $\mu=0.3$ Hanle diagrams indicate an ambiguity, which occurs when different magnetic field vectors give the same Stokes $Q/I$ and $U/I$ signals. For example, in Figure \[fig:diagram1\] (a), the crossing point at $Q/I\sim-0.3\%$ and $U/I\sim-0.15\%$ corresponds to cases with $\chi_B=330^{\circ}$ and $B=10$ G and with $\chi_B=60^{\circ}$ and $B=30$ G. This ambiguity cannot be resolved without using additional information to constrain one of the magnetic parameters. At the solar disk center ($\mu=1.0$), we have a $180^{\circ}$ ambiguity, where two magnetic field vectors whose azimuths differ by $180^{\circ}$ produce the same $Q/I$ and $U/I$ signals (see overlapping solid and dashed lines in Figs. \[fig:diagram1\] (b) and \[fig:diagram2\] (b)).
In such Hanle diagrams, the change in linear polarization from $0$ G to $50$ G is larger than that from $50$ G to $250$ G, with the exception of nearly vertical fields in the $\mu=1.0$ forward scattering geometry case (Figure \[fig:diagram2\] (b)). This trend is prominent for the horizontal field case shown in Figure \[fig:diagram1\], where $Q/I$ and $U/I$ significantly change from 0 to 50 G but show little change from 50 G to 250 G. This indicates that the Hanle effect in the hydrogen Lyman-$\alpha$ line is sensitive to field strengths $B{<}50$ G. A field strength of $\sim50$ G corresponds to the critical field strength for the onset of the Hanle effect in the Lyman-$\alpha$ line. Above this field strength, the Lyman-$\alpha$ line approaches the Hanle saturation limit, where the linear polarization is insensitive to the magnetic field strength.
The linear polarization signals produced by the Hanle effect are weaker than 0.1% for a nearly vertical field at the solar disk center ($\mu=1.0$) (Figure \[fig:diagram2\] (b)) because weak vertical fields do not give rise to a strong symmetry breaking. At the solar disk center, largely inclined and strong fields are more suitable observing targets.
Inversion
=========
General approach for solving the inversion problem {#chidef}
--------------------------------------------------
To clarify whether we can retrieve information on the magnetic field from the observed Stokes profiles, we perform a process that mimics the Stokes inversion. Here, we assume that the formation of the Lyman-$\alpha$ line is modeled with a semi-empirical FAL-C model atmosphere. For this purpose, we introduce the following function: $$\chi^{2}(B,\chi_B,\theta_B)\equiv\sum_{k=1}^{2}\sum_{l=1}^{n}
\frac{[S_{k}^{obs}(\lambda_{l})-S_{k}^{mod}(\lambda_{l},B,\chi_B,\theta_B)]^{2}}{\sigma^2},$$ where $S_{1}$ and $S_{2}$ indicate $Q/I$ and $U/I$, respectively. The simulated CLASP observation is $S_{k}^{obs}$ ($k=1,2$), i.e., the synthetic Stokes profile taken from the database, convolved with a 0.013 nm FWHM Gaussian, sampled with a wavelength step of 0.005 nm, with added noise (Section \[database\]). Here, $S_{k}^{mod}$ represents the noiseless model profile corresponding to the synthetic profile convolved with a 0.013 nm FWHM Gaussian and sampled with a wavelength step of 0.005 nm. In the database, the parameter increments in the synthetic profiles are 5 G increments of field strength and 5$^{\circ}$ increments of azimuth and inclination. In order to have better accuracy in the inversion, $S_{k}^{mod}$ are calculated with $1^{\circ}$ steps for azimuth and inclination and with $1$ G steps for field strength by linearly interpolating model profiles with adjacent magnetic parameters. We take into account the wavelength range of $121.567~\mathrm{nm}\pm0.04~\mathrm{nm}$ where the linear polarization signals are defined, resulting in $n=17$ wavelength points. Finally, $\sigma$ is the standard deviation of the random noise we assumed in the CLASP observation simulations. We employ $\sigma=0.033\%$ as the baseline, and in Section \[noise\], we will discuss the influence of noise on the inversion results.
We calculate the $\chi^{2}$ function using the given observed profile ($S_{k}^{obs}$) and all of the model profiles ($S_{k}^{mod}$). We find the magnetic parameters with a statistically acceptable $\chi^{2}$, defined by $\Delta\chi^2\equiv\chi^2-\chi^2_{\mathrm{min}}\le3.53$, where $\chi^2_{\mathrm{min}}$ is the minimum $\chi^2$ [@Numerical]. Because we have three free parameters ($B$, $\chi_B$, and $\theta_B$), the threshold $\Delta\chi^2\le3.53$ is given by the chi-square distribution function for three degrees of freedom with a confidence level of 68.3% ($1\sigma$). The parameter region defined by this criterion indicates that there is a 68.3% ($1\sigma$) chance for the true field strength, azimuth, and inclination parameters to fall within this region. We call this procedure “inversion” throughout this paper. Furthermore, we use the term “input” to refer to the magnetic parameters of the simulated observation $S_{k}^{obs}$.
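The acceptance threshold and the database search can be sketched as follows (illustrative code with our own naming). For three degrees of freedom the $\chi^2$ distribution function has a closed form, so the 3.53 value can be recovered without special libraries.

```python
import math
import numpy as np

def chi2_cdf_3dof(x):
    """Chi-square CDF for 3 degrees of freedom (closed form for odd dof)."""
    return math.erf(math.sqrt(x / 2.0)) - math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)

# Delta(chi^2) threshold for a 68.3% (1-sigma) region with 3 free parameters,
# found by bisection on chi2_cdf_3dof(x) = 0.683
lo, hi = 0.0, 20.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if chi2_cdf_3dof(mid) < 0.683 else (lo, mid)
print(round(0.5 * (lo + hi), 2))  # -> 3.53

def accepted_models(obs_QI, obs_UI, mod_QI, mod_UI, sigma=3.3e-4, dchi2=3.53):
    """obs_*: (n_lambda,) simulated observation; mod_*: (n_models, n_lambda)
    noiseless model profiles. Returns indices of all models whose chi^2 lies
    within Delta(chi^2) <= dchi2 of the minimum (the 1-sigma region)."""
    chisq = (((obs_QI - mod_QI) ** 2).sum(axis=1)
             + ((obs_UI - mod_UI) ** 2).sum(axis=1)) / sigma ** 2
    return np.flatnonzero(chisq - chisq.min() <= dchi2)
```

All models returned by `accepted_models` are treated as statistically acceptable inversion results, matching the $1\sigma$ confidence region described above.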
Figure \[fig:3dchi\] shows the dependence of $\chi^2$ on inclination, azimuth, and field strength for the input parameters of $B=50$ G, $\theta_B=90^{\circ}$, and $\chi_B=120^{\circ}$, which are shown by gray circles. The black region in Figure \[fig:3dchi\] is defined by $\Delta\chi^2\equiv\chi^2-\chi^2_{\mathrm{min}}\le3.53$. The model profiles located in this region fit the simulated observations reasonably well, and all magnetic parameters are statistically accepted as results of inversion with a confidence level of 68% ($1\sigma$). The model profiles corresponding to the input parameters are shown with dashed lines in Figure \[fig:invprof\]. The magnetic parameters (chosen as an example in the acceptable $\chi^2$ region) marked by the gray triangle in Figure \[fig:3dchi\] correspond to the model profiles shown with solid lines in Figure \[fig:invprof\]. Within the noise level, these profiles are identical.
Properties of the $\chi^2$ map {#chi2property}
------------------------------
### Saturation regime {#saturation}
Figure \[fig:2dchi\] represents the $\chi^2$ maps projected onto two parameter spaces. There are four 1$\sigma$ confidence level regions ($\Delta\chi^2\le3.53$) extended along the field strength above 50 G at $\theta_B=20^{\circ}$, $30^{\circ}$, $150^{\circ}$, $155^{\circ}$ in Figure \[fig:2dchi\] (b) and at $\chi_B=40^{\circ}$, $145^{\circ}$, $195^{\circ}$, $310^{\circ}$ in Figure \[fig:2dchi\] (c). The four dashed circles in Figure \[fig:2dchi\] (a) show the regions on the $\theta_B$-$\chi_B$ plane where $(\theta_B, \chi_B)=\{(20^{\circ}, 145^{\circ}),
(30^{\circ}, 40^{\circ}), (150^{\circ}, 195^{\circ}),
(155^{\circ}, 310^{\circ})\}$, indicating four possible solutions of inclination and azimuth above 50 G. The elongated regions, which extend only along the field-strength direction, indicate a large *uncertainty*: the field strength cannot be constrained there. In other words, we do not have the sensitivity to measure the magnetic field strength beyond $\sim50$ G. The multiple simply-connected spaces ($1\sigma$ confidence level regions) indicate the *ambiguity* of solutions, in which completely different magnetic parameters provide the same $Q/I$ and $U/I$ profiles.
The elongated regions correspond to the saturation regime, in which the linear polarization depends only weakly on the field strength. This is consistent with the Hanle diagrams (Section \[hanle\_diagram\]), where the change of linear polarization above 50 G is small compared with that below 50 G. In this saturation regime, the linear polarization signal depends only on the inclination and azimuth. Thus, we can determine the azimuth and inclination. However, the ambiguity allows four combinations of inclination and azimuth to exist, as shown in Figure \[fig:2dchi\]. This ambiguity corresponds to the Van Vleck ambiguity in the saturation regime, which is inherent to the Hanle effect. These four ambiguous solutions provide magnetic parameters with less inclined ($\theta_B\sim20-30^{\circ}$ or $\theta_B\sim150-160^{\circ}$) and relatively strong ($B>50$ G) magnetic field vectors.
### Non-saturation regime {#non-saturation}
Below 50 G, the linear polarization of $Q/I$ and $U/I$ depends on the field strength as well as on the inclination and azimuth (Section \[hanle\_diagram\]); this is the non-saturation regime. As shown in Figure \[fig:2dchi\] (a), in the non-saturation regime there are two isolated $1\sigma$ confidence level regions, with saturation regimes at both ends (dashed circles, four in total). These two isolated regions are elongated over a wide inclination and azimuth range on the $\theta_B$-$\chi_B$ plane. Different magnetic parameters give rise to the same linear polarization signals in these two regions. This can be confirmed by the Hanle diagram in Figure \[fig:diagram1\] (a). If we assume that the magnetic field is horizontal ($\theta_B=90^{\circ}$), as shown by the dashed line in Figure \[fig:2dchi\] (a), two solutions are possible: one for $\chi_B=120^{\circ}$ and $B=50$ G (shown by the gray circle) and the other for $\chi_B=225^{\circ}$ and $B=15$ G. Indeed, the Hanle diagram for $\chi_B=120^{\circ}$ and $B=50$ G intersects that for $\chi_B=225^{\circ}$ and $B=15$ G, confirming the presence of this ambiguity.
Below $\sim50$ G, Figure \[fig:2dchi\] (b) shows that one $1\sigma$ confidence level region, which extends from $\theta_B\sim20^{\circ}$ to $\theta_B\sim150^{\circ}$, has an apex at $B\sim15$ G and $\theta_B\sim90^{\circ}$. Figure \[fig:2dchi\] (b) further shows that another $1\sigma$ confidence level region, which extends over the $\theta_B=30^{\circ}$ to $140^{\circ}$ range, possesses two vertices at $\theta_B\sim60^{\circ}$ and $\theta_B\sim120^{\circ}$. The elongation over inclination and field strength shows a strong *correlation* between these two magnetic parameters; both a weak, inclined field and a stronger, less inclined field provide equally good fits to the observed spectra. On the $\chi_B$-$B$ plane, the two $1\sigma$ confidence level regions have a V-shape with apexes at $\chi_B\sim120^{\circ}$ and $B\sim40$ G and at $\chi_B\sim220^{\circ}$ and $B\sim15$ G. This also suggests a correlation between the azimuth and field strength, with a wide field strength range remaining consistent with the data. We note that the azimuth converges to $120^{\circ}$ or $220^{\circ}$ as the field strength becomes weaker. In general, there is a strong correlation among these three magnetic parameters, and it is difficult to uniquely determine a set of them.
### Connection between saturation and non-saturation regimes
We find a four-fold ambiguity in the saturation regime ($B>50$ G) and a two-fold ambiguity in the non-saturation regime ($B<50$ G). As clearly shown in Figure \[fig:3dchi\], each pair of saturated regions converges into one of the non-saturated regions (there are two such connections). This connectivity indicates that the Stokes $Q/I$ and $U/I$ profiles for a strong, less inclined magnetic field are similar to those for a weaker, more inclined field: the degree of the Hanle effect remains the same between the two configurations. In summary, multiple solutions are possible over a broad field strength range. In the saturation regime ($B>50$ G), we can only determine the azimuth and inclination, and even these admit multiple solutions. As we enter the non-saturation regime, we have strong correlations among the three magnetic parameters in addition to the above ambiguity. These situations make it difficult to uniquely determine the magnetic field vector.
Additional information for constraining magnetic parameters {#mag_constrain}
-----------------------------------------------------------
If there are multiple solutions, one way to uniquely determine the magnetic field vector is to constrain one of the parameters using additional observations. With the exception of the saturation regime, the $1\sigma$ confidence level regions extend over the three-dimensional parameter space, as shown in the 2D $\chi^{2}$ maps in Figure \[fig:2dchi\]. Their shape helps us to constrain the magnetic parameters once an additional piece of information is available. For example, once the azimuth is constrained by other observations, the inclination and field strength are uniquely determined, as inferred from Figure \[fig:2dchi\] (a) and (c). An accuracy of $\pm5^{\circ}$ in the azimuthal direction is sufficient for most cases. However, when $\chi_B=215^{\circ}$ or $\chi_B=120^{\circ}$, the correlation curve on the plane of inclination and azimuth allows a relatively large uncertainty in the inclination (see Figure \[fig:2dchi\] (a)).
The Lyman-$\alpha$ images of the upper chromosphere obtained with the Very high Angular resolution ULtraviolet Telescope (VAULT) sounding rocket [@Vourlidas2010] show the presence of long thin threads of $\sim10\arcsec$ in the quiet Sun. The Mg II k and Ca II images obtained with the Sunrise Filter Imager [SuFI; @Gandorfer2011] revealed fibril structures spreading from the plage regions in the chromosphere [@Riethmuller2013]. As shown by @Leenaarts2012 [@Leenaarts2013], the magnetic field connecting magnetic concentrations of opposite polarity traces the filamentary structures seen in intensity in their 3D atmospheric model. This indicates that the intensity filamentary structures can be used as proxies for the azimuth of the magnetic field. Therefore, high-spatial-resolution observations around the formation layer of the Lyman-$\alpha$ line would help us to determine the azimuthal direction. The CLASP Lyman-$\alpha$ slit-jaw images, the Interface Region Imaging Spectrograph (IRIS), the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO), and ground-based observations can be used for this purpose.
Inversion with different noise levels {#noise}
-------------------------------------
Here, we investigate the influence of the noise level on the inversion results. Figure \[fig:noise\] represents the dependence of the $\chi^{2}$ values on inclination, azimuth, and field strength for noise levels of $1\sigma=0.1\%$, $1\sigma=0.03\%$, and $1\sigma=0.01\%$. The input parameters for all cases are $B=50$ G, $\theta_B=90^{\circ}$, and $\chi_B=120^{\circ}$. Note that Figure \[fig:noise\] (b) is the same as Figure \[fig:3dchi\]. The $\chi^{2}$ distributions are similar for all noise levels. Above 50 G, four isolated $1\sigma$ confidence regions with $\Delta \chi^{2}\le3.53$ extend only in the direction of the field strength, representing the saturation regime. Around 50 G, these regions connect to the two isolated regions with largely inclined, weak fields. Even when we increase the signal-to-noise ratio, the ambiguity remains intact and we find similar correlations between the magnetic parameters. Thus, we require additional observables to constrain the solution even in low-noise situations. The difference between noise levels appears in the thickness of the $1\sigma$ confidence level region. These regions are thinner for lower noise levels, indicating smaller uncertainty in the determination of the magnetic parameters. For example, in the saturation regime, the thicknesses correspond to $\sim$40$^{\circ}$ for $1\sigma=0.1\%$, $\sim$10$^{\circ}$ for $1\sigma=0.03\%$, and $\sim$5$^{\circ}$ for $1\sigma=0.01\%$ (see Figure \[fig:noise\] (d), (e) and (f)).
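The scaling of the confidence-region thickness with noise can be illustrated with a toy one-parameter version of the grid-based $\chi^2$ inversion. The forward model below is a hypothetical stand-in for the actual Hanle $Q/I$ calculation (the exponential form and all numbers are illustrative assumptions, not the physics used here):

```python
import math

# Hypothetical forward model standing in for the Q/I amplitude (in %)
# as a function of field strength B -- a toy saturation-like curve only.
def model(B):
    return 0.5 * math.exp(-B / 50.0)

def one_sigma_width(sigma, B_true=20.0, grid=None):
    """Width of the region where Delta-chi^2 <= 1 (one fitted parameter)."""
    grid = grid or [0.1 * i for i in range(1, 1000)]   # B from 0.1 to 99.9 G
    obs = model(B_true)
    chi2 = {B: ((model(B) - obs) / sigma) ** 2 for B in grid}
    chi2_min = min(chi2.values())
    inside = [B for B, c in chi2.items() if c - chi2_min <= 1.0]
    return max(inside) - min(inside)

w_high = one_sigma_width(sigma=0.1)    # 1-sigma noise of 0.1%
w_low = one_sigma_width(sigma=0.01)    # 1-sigma noise of 0.01%
```

Reducing the noise level shrinks the interval of field strengths satisfying $\Delta\chi^2\le1$, mirroring the thinning of the $1\sigma$ regions in Figure \[fig:noise\].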
Inversion for different observing regions and input parameters
--------------------------------------------------------------
To see whether the properties of the $\chi^{2}$ maps (i.e., the inversion results) identified in Section \[chi2property\] depend on the choice of the input parameters, we perform inversions for different sets of input parameters and for different observing locations on the solar disk. We study input parameters with a weak field ($B=10$ G), a marginal field ($B=50$ G), and a strong field ($B=250$ G), in both horizontal ($\theta_B=90^{\circ}$) and almost vertical ($\theta_B=20^{\circ}$) configurations. Note that all azimuths are fixed at $\chi_B=120^{\circ}$. With this set of input parameters, we consider both the close-to-limb case ($\mu=0.3$) and the disk center case ($\mu=1.0$), and execute 12 inversions in total.
Figure \[fig:3dchi\_horizontal\] shows $\chi^2$ maps for the case of a horizontal magnetic field. With the exception of the disk center case ($\mu=1.0$) with $B=50$ G (Figure \[fig:3dchi\_horizontal\] (h) and (k)) and $B=250$ G (Figure \[fig:3dchi\_horizontal\] (i) and (l)), the $\chi^2$ distributions are similar to those in Section \[chi2property\], and any field strength, from weak to strong, is consistent with the data. For the disk center, with horizontal magnetic fields of 50 and 250 G, the $1\sigma$ confidence level region appears only in the saturation regime. Thus, in this case, there is no risk of obtaining a spurious weak-field solution. This is a distinct advantage; however, it is impossible to determine the field strength in this case. The number of ambiguous regions differs depending on the input parameters.
Figure \[fig:3dchi\_vertical\] shows $\chi^2$ maps for the case of almost vertical magnetic fields ($\theta_B=20^{\circ}$). Again, we find the same properties of the $\chi^2$ distributions, indicating that any field strength is possible as a solution. For the disk center case with $B=10$ G and $B=50$ G (Figure \[fig:3dchi\_vertical\] (g) and (h)), the $1\sigma$ confidence regions with $\Delta\chi^{2}\le3.53$ spread out over the plane with vertical fields ($\theta_B=0^{\circ}$ or $\theta_B=180^{\circ}$) and over the plane with zero magnetic field (see Figure \[fig:3dchi\_vertical\] (j) and (k)), indicating that the inversion does not work. This result is reasonable because the magnitude of the linear polarization is quite small and below the noise level when the magnetic fields are almost vertical, as shown in Figure \[fig:diagram2\] (b). If we employ a smaller noise level, properties of the $\chi^{2}$ distribution similar to those in Section \[chi2property\] will appear. For CLASP observations at disk center, an observing target with a largely inclined and/or strong magnetic field is appropriate.
Discussion
===========
Required additional information
-------------------------------
We have performed Stokes inversion simulations to clarify the information that can be inferred via the Hanle effect in the hydrogen Lyman-$\alpha$ line, assuming that the chromosphere and transition region of the quiet Sun can be represented by the FAL-C semi-empirical model. We conclude that UV spectro-polarimetry with the CLASP experiment is a suitable diagnostic tool for the magnetic field in the upper atmosphere, if combined with complementary information from other relevant observations. The ambiguity and uncertainty inherent to the Hanle effect when only the scattering polarization in one spectral line is available should not be taken as a drawback; as we have shown, we simply need additional observations to uniquely determine the field strength, azimuth, and inclination. Clearly, we cannot measure the very small contribution of the Zeeman effect to the Stokes $V$ of the Lyman-$\alpha$ line, but there are other options for resolving this issue. Ideally, we would like to perform simultaneous spectro-polarimetric observations in other spectral lines of the upper chromosphere, which have different critical field strengths for the onset of the Hanle effect [@Trujillo2012; @Belluzzi2012]. In this paper, however, we propose a simpler but useful method for determining the azimuthal direction of the magnetic field: using the fibrils seen in high-resolution intensity images from IRIS, AIA, and ground-based observations.
Observing target
----------------
The Lyman-$\alpha$ line starts to approach the Hanle saturation regime above $\sim$50 G, where the linear polarization changes only with the inclination and azimuth of the magnetic field, not with its strength. Furthermore, nearly vertical fields do not produce any significant Hanle effect (i.e., the magnetic modification of the linear polarization), and at the solar disk center the linear polarization created by the Hanle effect of slightly inclined fields is too small to be detected. Thus, inclined, relatively weak ($B<50$ G) magnetic fields should be observed. Based on the properties of the Hanle effect studied in this paper, we can now discuss the possible observing region and observing target for the CLASP experiment.
Our primary goal with the CLASP experiment is to detect for the first time the linear polarization caused by the atomic level polarization produced by the absorption and scattering of anisotropic radiation in the upper solar atmosphere. To this end, it is desirable to choose a quiet region close to the limb (e.g., around $\mu{\approx}0.3$) because such locations are the most suitable ones for detecting the line-core polarization in the hydrogen Lyman-$\alpha$ line [@Trujillo2011; @Belluzzi2012b; @Stepan2014]. Our second goal is to detect the Hanle effect, in order to constrain the magnetic field vector of the chromosphere-corona transition region.
One of the most widely used spectral lines for magnetic field measurements in the upper atmosphere is the He [i]{} 1083 nm triplet [e.g., @Asensio2008]. By exploiting spectro-polarimetric data obtained in this triplet, the magnetic properties of prominences, filaments, spicules, and active regions have been investigated by several authors [e.g., @Trujillo2002; @Lagg2004; @Marenda2006; @Centeno2010; @Xu2010]. However, it is not easy to measure the intensity and polarization of the He [i]{} 1083 nm triplet in quiet regions of the solar disk [e.g., @Asensio2008], and there are few studies of the quiet-Sun magnetic fields of the upper solar atmosphere. Thus, our primary targets are the network and internetwork regions of the quiet Sun. The network fields are expected to form magnetic canopy structures in the upper chromosphere and transition region, and to be largely inclined and relatively weak. @Wiegelmann2010 investigated the fine structure of the magnetic fields in the quiet Sun using photospheric magnetic field measurements from the SUNRISE imaging magnetograph experiment (IMaX), and found that most magnetic loops rooted in the quiet-Sun photosphere reach into the chromosphere or higher. In addition to the canopy fields, such magnetic loops in quiet-Sun regions would also be interesting observing targets.
Atmospheric model
-----------------
Finally, we discuss another issue that should be addressed in future investigations: the influence of the atmospheric model on the inference of the magnetic field via the interpretation of the scattering polarization and the Hanle effect in Lyman-$\alpha$. @Belluzzi2012b calculated the scattering polarization profiles of the hydrogen Lyman-$\alpha$ line taking into account partial frequency redistribution (PRD) and $J$-state interference effects, using the plane-parallel atmospheric models C, F, and P of @Fontenla1993, which can be considered illustrative of quiet, network, and plage regions. They showed that the shape and amplitude of the Lyman-$\alpha$ linear polarization profiles are sensitive to the thermal structure of the model atmosphere in the line wings, and to a lesser extent also in the line core (where the Hanle effect operates). Thus, in order to assess the importance of the choice of atmospheric model, we must quantify how much uncertainty arises in the inference of the magnetic field vector when a different atmospheric model is adopted.
It is important to emphasize that the upper solar chromosphere and transition region are highly inhomogeneous and dynamic plasmas. Such inhomogeneity and dynamics cause larger $Q/I$ amplitudes and non-zero $U/I$ signals, along with spatial and temporal variations of these signals [@Stepan2012; @Stepan2014]. Thus, we must also consider other strategies for interpreting the CLASP observations, such as detailed forward modeling of the observed scattering polarization signals using increasingly realistic 3D models of the solar chromosphere, taking into account the limited spatial and temporal resolution of the CLASP observations.
In order to monitor the local non-uniformity of the Lyman-$\alpha$ radiation field, the intensity images from the CLASP slit-jaw and IRIS observations will be useful. Furthermore, the intensity and the linear polarization profiles in the line wings, which are insensitive to the magnetic field but very sensitive to the temperature structure, may also help us to constrain the temperature structure of the solar atmosphere. All these steps will facilitate the interpretation of the line-core polarization signals of Lyman-$\alpha$ that CLASP aims at observing. In this way, we expect that the CLASP experiment will lead to the first significant advancement in the investigation of the magnetism of the upper solar chromosphere and the transition region via the Hanle effect in the UV spectral region.
This work was done during R.I.’s visit to the IAC, which was supported by the Grant-in-Aid for the “Institutional Program for Young Researcher Overseas Visits” from the Japan Society for the Promotion of Science (JSPS). R.I. was also supported by JSPS KAKENHI Grant Number 25887051. J.Š. acknowledges support from the Grant Agency of the Czech Republic through grant P209/12/P741 and project RVO:67985815. Financial support from the Spanish Ministry of Economy and Competitiveness through projects AYA2010–18029 (Solar Magnetism and Astrophysical Spectropolarimetry) and Consolider-Ingenio CSD2009-00038 (Molecular Astrophysics: The Herschel and Alma Era) is gratefully acknowledged. AAR also acknowledges financial support through the Ramón y Cajal fellowship. The CLASP experiment has been financially supported by a Grant-in-Aid for Scientific Research (S).
Anusha, L. S., Nagendra, K. N., Bianda, M., et al. 2011, , 737, 95
Asensio Ramos, A., Trujillo Bueno, J., & Landi Degl’Innocenti, E. 2008, , 683, 542
Belluzzi, L., & Trujillo Bueno, J. 2012, , 750, L11
Belluzzi, L., Trujillo Bueno, J., & [Š]{}t[ě]{}p[á]{}n, J. 2012, , 755, L2
Carlsson, M., & Stein, R. F. 1997, , 481, 500
Casini, R., & Landi Degl’Innocenti, E. 2008, in Plasma Polarization Spectroscopy, ed. T. Fujimoto & A. Iwamae (Atomic, Optical, and Plasma Physics), 44
Centeno, R., Trujillo Bueno, J., & Asensio Ramos, A. 2010, , 708, 1579
De Pontieu, B., McIntosh, S. W., Carlsson, M., et al. 2007, Science, 318, 1574
Fontenla, J. M., Avrett, E. H., & Loeser, R. 1993, , 406, 319
Gandorfer, A., Grauf, B., Barthol, P., et al. 2011, , 268, 35
Ishikawa, R., Bando, T., Fujimura, D., et al. 2011, Solar Polarization 6, 437, 287
Kano, R., Bando, T., Narukage, N., et al. 2012, , 8443
Katsukawa, Y., Berger, T. E., Ichimoto, K., et al. 2007, Science, 318, 1594
Kobayashi, K., Kano, R., Trujillo Bueno, J., et al. 2012, in Hinode 5, ASP Conf. Series Vol. 456, ed. L. Golub, I. De Moortel, & T. Shimizu (San Francisco: Astronomical Society of the Pacific), 233
Lagg, A., Woch, J., Krupp, N., & Solanki, S. K. 2004, , 414, 1109
Landi Degl’Innocenti, E. 1982, , 79, 291
Landi Degl’Innocenti, E., & Bommier, V. 1993, , 411, L49
Landi Degl’Innocenti, E., & Landolfi, M. 2004, Astrophysics and Space Science Library, 307
Leenaarts, J., Carlsson, M., & Rouppe van der Voort, L. 2012, , 749, 136
Leenaarts, J., Pereira, T. M. D., Carlsson, M., Uitenbroek, H., & De Pontieu, B. 2013, , 772, 90
L[ó]{}pez Ariste, A., & Casini, R. 2003, , 582, L51
Merenda, L., Trujillo Bueno, J., Landi Degl’Innocenti, E., & Collados, M. 2006, , 642, 554
Narukage, N., Tsuneta, S., Bando, T., et al. 2011, , 8148
Okamoto, T. J., & De Pontieu, B. 2011, , 736, L24
Okamoto, T. J., Tsuneta, S., Berger, T. E., et al. 2007, Science, 318, 1577
[Press]{}, W. H., [Teukolsky]{}, S. A., [Vetterling]{}, W. T., & [Flannery]{}, B. P. 2007, [Numerical recipes: the art of scientific computing]{} (Cambridge: Cambridge University Press), 812–815
Riethm[ü]{}ller, T. L., Solanki, S. K., Hirzberger, J., et al. 2013, , 776, L13
[Shibata]{}, K., [Nakamura]{}, T., [Matsumoto]{}, T., et al. 2007, Science, 318, 1591
[Š]{}t[ě]{}p[á]{}n, J., & Trujillo Bueno, J. 2011, , 732, 80
[Š]{}t[ě]{}p[á]{}n, J., Trujillo Bueno, J., Carlsson, M., & Leenaarts, J. 2012, , 758, L43
[Š]{}t[ě]{}p[á]{}n, J., Trujillo Bueno, J., Leenaarts, J., & Carlsson, M. 2014, submitted
Trujillo Bueno, J., Landi Degl’Innocenti, E., Collados, M., Merenda, L., & Manso Sainz, R. 2002, , 415, 403
Trujillo Bueno, J., [Š]{}t[ě]{}p[á]{}n, J., & Casini, R. 2011, , 738, L11
Trujillo Bueno, J., [Š]{}t[ě]{}p[á]{}n, J., & Belluzzi, L. 2012, , 746, L9
Vecchio, A., Cauzzi, G., & Reardon, K. P. 2009, , 494, 269
Vourlidas, A., Sanchez Andrade-Nu[ñ]{}o, B., Landi, E., et al. 2010, , 261, 53
Wiegelmann, T., Solanki, S. K., Borrero, J. M., et al. 2010, , 723, L185
Xu, Z., Lagg, A., & Solanki, S. K. 2010, , 520, A77
---
abstract: 'Markov chain Monte Carlo (MCMC) methods provide consistent approximations of integrals as the number of iterations goes to infinity. MCMC estimators are generally biased after any fixed number of iterations, which complicates both parallel computation and the construction of confidence intervals. We propose to remove this bias by using couplings of Markov chains together with a telescopic sum argument of Glynn & Rhee (2014). The resulting unbiased estimators can be computed in parallel, with confidence intervals following directly from the Central Limit Theorem for i.i.d. variables. We discuss practical couplings for popular algorithms such as Metropolis-Hastings, Gibbs samplers, and Hamiltonian Monte Carlo. We establish the theoretical validity of the proposed estimators and study their efficiency relative to the underlying MCMC algorithms. Finally, we illustrate the performance and limitations of the method on toy examples, a variable selection problem, and an approximation of the cut distribution arising in Bayesian inference for models made of multiple modules.'
author:
- 'Pierre E. Jacob[^1], John O’Leary[^2], Yves F. Atchadé[^3]'
bibliography:
- 'Biblio.bib'
title: Unbiased Markov chain Monte Carlo with couplings
---
Context
=======
Markov chain Monte Carlo (MCMC) methods constitute a popular class of algorithms to approximate high-dimensional integrals such as those arising in statistics and many other fields [@liu2008monte; @robert2004monte; @brooks2011handbook; @green2015bayesian]. These iterative methods provide estimators that are consistent in the limit of the number of iterations but potentially biased for any fixed number of iterations, as the Markov chains are rarely started at stationarity. This “burn-in” bias limits the potential gains from running independent chains in parallel [@rosenthal2000parallel]. Consequently, efforts have focused on exploiting parallel processors within each iteration [@tjelmeland2004using; @brockwell2006parallel; @lee2010utility; @jacob2011using; @calderhead2014general; @goudie2017massively; @yang2017parallelizable], or on the design of parallel chains targeting different distributions [@altekar2004parallel; @wang2015parallelizing; @srivastava2015wasp]. Nevertheless, MCMC estimators are ultimately justified by asymptotics in the number of iterations. This imposes a severe limitation on the scalability of MCMC methods on modern computing hardware, with increasingly many processors and stagnating clock speeds.
We propose a general construction of unbiased estimators of integrals with respect to a target probability distribution using MCMC kernels. Thanks to the lack of bias, estimators can be generated independently in parallel and averaged over, thus achieving the standard Monte Carlo convergence rate as the number of parallel replicates goes to infinity. Confidence intervals can be constructed via the standard Central Limit Theorem (CLT) for i.i.d. variables, asymptotically valid in the number of parallel replicates, in contrast with confidence intervals for the standard MCMC approach. Indeed these are justified asymptotically in the number of iterations [e.g. @flegal2008markov; @gongflegal; @atchade2016markov; @vats2018strong], although they might also provide useful guidance in the non-asymptotic regime.
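Concretely, given $R$ i.i.d. unbiased estimates, the CLT-based interval is the usual sample-mean interval. A minimal sketch (the Gaussian draws below only stand in for independent copies of an unbiased estimator):

```python
import math, random

def confidence_interval(estimates, z=1.96):
    """Standard CLT interval: sample mean +/- z * sample sd / sqrt(R)."""
    R = len(estimates)
    mean = sum(estimates) / R
    var = sum((e - mean) ** 2 for e in estimates) / (R - 1)
    half = z * math.sqrt(var / R)
    return mean - half, mean + half

# stand-in replicates centered at a known value 3.0; in practice these
# would be independently generated unbiased estimators
rng = random.Random(0)
lo, hi = confidence_interval([rng.gauss(3.0, 1.0) for _ in range(10000)])
```

The interval width shrinks at the standard Monte Carlo rate $O(R^{-1/2})$ as more replicates are generated in parallel.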
Our contribution follows the path-breaking work of @glynn2014exact, which demonstrates the unbiased estimation of integrals with respect to an invariant distribution using couplings. Their construction is illustrated on Markov chains represented by iterated random functions, and leverages the contraction properties of such functions. @glynn2014exact also consider Harris recurrent chains for which an explicit minorization condition holds. Previously, @McLeish:2011 employed similar debiasing techniques to obtain “nearly unbiased” estimators from a single MCMC chain. More recently, @jacob2017smoothing remove the bias from particle Gibbs samplers [@andrieu:doucet:holenstein:2010] targeting the smoothing distribution in state-space models, by coupling chains such that they meet exactly in finite time without analytical knowledge of the underlying Markov kernels. The present article brings this type of Rhee–Glynn estimator to generic MCMC algorithms, along with new unbiased estimators with reduced variance. The proposed construction involves couplings of MCMC chains, which we provide for various algorithms, including Metropolis–Hastings, Gibbs, and Hamiltonian Monte Carlo samplers.
Couplings of MCMC algorithms have been used to study their convergence properties, from both theoretical and practical points of view [e.g. @reutter1995general; @johnson1996studying; @rosenthal1997faithful; @johnson1998coupling; @neal1999circularly; @roberts2004general; @johnson2004; @johndrow2017coupling]. Couplings of Markov chains also underpin perfect samplers [@propp:wilson:1996; @murdoch1998exact; @casella:lavine:robert:2001; @flegal2012exact; @leedoucetperfectsimulation; @huber2016perfect]. A notable difference of the proposed approach is that only two chains have to be coupled for the proposed estimator to be unbiased, without further assumptions on the state space or on the target distribution. Thus the approach applies more broadly than perfect samplers [see @glynn2016exact], while yielding unbiased estimators rather than exact samples. Couplings of pairs of Markov chains also formed the basis of the approach of @neal1999circularly, with a similar motivation for parallel computation.
In Section \[sec:Two-coupled-chains\], we introduce the estimators and a coupling of random walk Metropolis–Hastings chains as an illustration. In Section \[sec:theory\], we establish properties of these estimators under certain assumptions. In Section \[sec:Practical-couplings\], we propose couplings of popular MCMC algorithms, using maximal couplings and common random number strategies. In Section \[sec:illustrations\], we demonstrate the applicability of our approach with examples including a bimodal distribution and a classic Gibbs sampler for nuclear pump failure data. We then consider more challenging tasks including variable selection in high dimension and the approximation of the cut distribution that arises in inference for models made of modules [@liu2009; @plummer2014cuts; @jacob2017modularization]. We summarize and discuss our findings in Section \[sec:discussion\]. Scripts in `R` [@RCRAN] are available online[^4], and supplementary materials with extra numerical illustrations are available on the first author’s webpage.
Unbiased estimation from coupled chains\[sec:Two-coupled-chains\]
=================================================================
Basic “Rhee-Glynn” estimator \[subsec:Basic-construction\]
----------------------------------------------------------
Given a target probability distribution $\pi$ on a Polish space $\mathcal{X}$ and a measurable real-valued test function $h$ integrable with respect to $\pi$, we want to estimate the expectation $\mathbb{E}_\pi[h(X)]=\int h(x)\pi(dx)$. Let $P$ denote a Markov transition kernel on $\mathcal{X}$ that leaves $\pi$ invariant, and let $\pi_0$ be some initial probability distribution on $\mathcal{X}$. Our estimators are based on a coupled pair of Markov chains $(X_{t})_{t\geq0}$ and $(Y_{t})_{t\geq0}$, which marginally start from $\pi_0$ and evolve according to $P$. More specifically, let $\bar P$ be a transition kernel on the joint space $\mathcal{X}\times\mathcal{X}$ such that $\bar{P}((x,y),A\times \mathcal{X}) = P(x,A)$ and $\bar{P}((x,y),\mathcal{X}\times A) = P(y,A)$ for any $x,y\in\mathcal{X}$ and measurable set $A$. We then construct the coupled Markov chain $(X_t,Y_t)_{t\geq 0}$ as follows. We draw $(X_0,Y_0)$ such that $X_0\sim \pi_0$, and $Y_0\sim \pi_0$. Given $(X_0,Y_0)$, we draw $X_1\sim P(X_0, \cdot)$. For any $t\geq 1$, given $X_0, (X_1,Y_0),\ldots,(X_t,Y_{t-1})$, we draw $(X_{t+1},Y_{t}) \sim \bar{P}((X_{t},Y_{t-1}),\cdot)$. We consider the following assumptions.
\[assumption:marginaldistributions\] As $t\to\infty$, $\mathbb{E}[h(X_{t})]\to\mathbb{E}_\pi[h(X)]$. Furthermore, there exists an $\eta>0$ and $D<\infty$ such that $\mathbb{E}[|h(X_{t})|^{2+\eta}]\leq D$ for all $t\geq0$.
\[assumption:meetingtime\] The chains are such that the meeting time $\tau:=\inf\{t\geq1:\ X_{t}=Y_{t-1}\}$ satisfies $\mathbb{P}(\tau>t)\leq C\;\delta^{t}$ for all $t\geq 0$, for some constants $C<\infty$ and $\delta\in(0,1)$.
\[assumption:sticktogether\]The chains stay together after meeting, i.e. $X_{t}=Y_{t-1}$ for all $t\geq\tau$.
By construction, each of the two marginal chains $(X_t)_{t\geq 0}$ and $(Y_t)_{t\geq 0}$ has initial distribution $\pi_0$ and transition kernel $P$. Assumption \[assumption:marginaldistributions\] requires these chains to result in a uniformly bounded $(2+\eta)$-moment of $h$; more discussion on moments of Markov chains can be found in @tweedie1983existence. Since $X_0$ and $Y_0$ may be drawn from any coupling of $\pi_0$ with itself, it is possible to set $X_0 = Y_0$. However, $X_1$ is then generated from $P(X_0,\cdot)$, so that $X_1 \neq Y_0$ in general. Thus one cannot force the meeting time to be small by setting $X_0 = Y_0$. Assumption \[assumption:meetingtime\] puts a condition on the coupling operated by $\bar{P}$, and would not be satisfied for an independent coupling. Coupled kernels that satisfy Assumption \[assumption:meetingtime\] can be designed using e.g. common random numbers and maximal couplings. We present a simple case in Section \[subsec:exampleMH\] and further examples in Section \[sec:Practical-couplings\]. We stress that the state space is not assumed to be discrete, and that the constants $D$ and $\eta$ of Assumption \[assumption:marginaldistributions\] and $C$ and $\delta$ of Assumption \[assumption:meetingtime\] do not need to be known to implement the proposed approach. Assumption \[assumption:sticktogether\] typically holds by design; coupled chains satisfying this assumption are termed “faithful” in @rosenthal1997faithful.
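As an illustration of the maximal couplings mentioned above, the standard rejection-based construction samples a pair $(X,Y)$ with the prescribed marginals and the maximal meeting probability $\mathbb{P}(X=Y)=1-\|p-q\|_{\text{TV}}$. The sketch below applies it to two univariate Gaussians (an illustrative choice of distributions, not a setting taken from this section):

```python
import math, random

def norm_logpdf(x, m, s):
    return -0.5 * ((x - m) / s) ** 2 - math.log(s * math.sqrt(2.0 * math.pi))

def maximal_coupling(rng, mp, mq, s):
    """(X, Y) with X ~ N(mp, s^2), Y ~ N(mq, s^2) and maximal P(X = Y)."""
    x = rng.gauss(mp, s)
    # W ~ Uniform(0, p(X)); if W <= q(X), return the same point twice
    if math.log(rng.random()) + norm_logpdf(x, mp, s) <= norm_logpdf(x, mq, s):
        return x, x
    while True:  # otherwise draw Y from the residual part of q
        y = rng.gauss(mq, s)
        if math.log(rng.random()) + norm_logpdf(y, mq, s) > norm_logpdf(y, mp, s):
            return x, y

rng = random.Random(1)
pairs = [maximal_coupling(rng, 0.0, 1.0, 1.0) for _ in range(20000)]
meet = sum(x == y for x, y in pairs) / len(pairs)
```

For $\mathcal{N}(0,1)$ and $\mathcal{N}(1,1)$ the meeting probability is $2\Phi(-1/2)\approx0.617$, which the empirical frequency `meet` approaches.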
Under these assumptions we introduce the following motivation for an unbiased estimator of $\mathbb{E}_\pi[h(X)]$, following @glynn2014exact. We begin by writing $\mathbb{E}_\pi[h(X)]$ as $\lim_{t\to\infty}\mathbb{E}[h(X_{t})]$. Then for any fixed $k\geq 0$, $$\begin{aligned}
\mathbb{E}_\pi[h(X)] & =\mathbb{E}[h(X_{k})]+\sum_{t=k+1}^{\infty}(\mathbb{E}[h(X_{t})]-\mathbb{E}[h(X_{t-1})]) & \text{expanding the limit as a telescoping sum,}\\
& =\mathbb{E}[h(X_{k})]+\sum_{t=k+1}^{\infty}(\mathbb{E}[h(X_{t})]-\mathbb{E}[h(Y_{t-1})]) & \text{since the chains have the same marginals,}\\
 & =\mathbb{E}[h(X_{k})+\sum_{t=k+1}^{\infty}(h(X_{t})-h(Y_{t-1}))] & \text{swapping the expectations and limit,}\\
& =\mathbb{E}[h(X_{k})+\sum_{t=k+1}^{\tau-1}(h(X_{t})-h(Y_{t-1}))] & \text{since the terms corresponding to \ensuremath{t\geq\tau} are null.}\end{aligned}$$ We note that the sum in the last equation is zero if $k+1>\tau-1$. The heuristic argument above suggests that the estimator ${H_{k}(X,Y)=h(X_{k})+\sum_{t=k+1}^{\tau-1}(h(X_{t})-h(Y_{t-1}))}$ should have expectation $\mathbb{E}_\pi[h(X)]$.
This estimator requires $\tau$ calls to $\bar{P}$ and $\max(1,k+1-\tau)$ calls to $P$; thus under Assumption \[assumption:meetingtime\] its cost has a finite expectation. In Section \[sec:theory\] we establish the validity of the estimator under the three conditions above; this formally justifies the swap of expectation and limit. The estimator can be viewed as a debiased version of $h(X_k)$, where the term $\sum_{t=k+1}^{\tau-1}(h(X_{t})-h(Y_{t-1}))$ acts as bias correction. Thanks to this unbiasedness property, we can sample $R\in \mathbb{N}$ independent copies of $H_k(X,Y)$ in parallel and average the results to estimate $\mathbb{E}_\pi[h(X)]$. Unbiasedness is guaranteed for any choice of $k\geq 0$, but both cost and variance of $H_k(X,Y)$ are sensitive to $k$.
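As an illustration, the estimator $H_k(X,Y)$ can be sketched on a toy two-state chain with a hand-written maximal coupling; the chain, its stay probability $0.9$, and the initial distribution $\pi_0=\delta_0$ are illustrative choices, not from the text. The stationary distribution is uniform on $\{0,1\}$, so with $h(x)=x$ the estimator should average to $0.5$ even though the chains start from the biased point mass at $0$.

```python
import random

STAY = 0.9  # illustrative stay probability; stationary distribution is uniform on {0, 1}

def step(x, rng):
    # One draw from P(x, .): stay at x with probability STAY, else flip.
    return x if rng.random() < STAY else 1 - x

def coupled_step(x, y, rng):
    # Maximal coupling of P(x, .) and P(y, .) for this two-state kernel.
    if x == y:
        nxt = step(x, rng)
        return nxt, nxt            # faithful: chains stay together after meeting
    u, q = rng.random(), 1 - STAY  # overlap of the two transition rows is 2 * q
    if u < q:
        return 0, 0                # both chains move to state 0
    if u < 2 * q:
        return 1, 1                # both chains move to state 1
    return x, y                    # residuals: each chain keeps its own state

def h_k_estimator(k, rng, h=lambda x: x):
    # H_k(X, Y) with the lag-one structure: X_1 ~ P(X_0, .), then coupled steps.
    xs, ys = [0], [0]              # X_0 = Y_0 = 0, i.e. pi_0 = delta_0
    xs.append(step(xs[0], rng))    # X_1
    t = 1
    while xs[t] != ys[t - 1]:      # tau = inf{t >= 1 : X_t = Y_{t-1}}
        xn, yn = coupled_step(xs[t], ys[t - 1], rng)
        xs.append(xn)
        ys.append(yn)
        t += 1
    tau = t
    while len(xs) <= k:            # if tau <= k, extend the marginal chain to X_k
        xs.append(step(xs[-1], rng))
    return h(xs[k]) + sum(h(xs[s]) - h(ys[s - 1]) for s in range(k + 1, tau))
```

Averaging independent copies of `h_k_estimator(k, rng)` estimates $\mathbb{E}_\pi[h(X)]=0.5$ for any $k\geq 0$, while the plug-in estimate $h(X_0)=0$ is biased.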
Before presenting examples and enhancements to the estimator above, we discuss the relationship between our approach and existing work. There is a rich literature applying forward couplings to study Markov chains convergence [@johnson1996studying; @johnson1998coupling; @thorisson2000coupling; @lindvall2002lectures; @rosenthal2002quantitative; @johnson2004; @doucetal04], and to obtain new algorithms such as perfect samplers [@huber2016perfect] and the methods of @neal1999circularly and @neal2001improving. Our approach is closely related to @glynn2014exact, who employ pairs of Markov chains to obtain unbiased estimators. The present work combines similar arguments with couplings of MCMC algorithms and proposes further improvements to remove bias at a reduced loss of efficiency.
Indeed @glynn2014exact did not apply their methodology to the MCMC setting. They consider chains associated with contractive iterated random functions [see also @diaconis1999iterated], and Harris recurrent chains with an explicit minorization condition. A minorization condition refers to a small set $\mathcal{C}$, $\lambda>0$, an integer $m\geq 1$, and a probability measure $\nu$ such that, for all $x\in \mathcal{C}$ and measurable set $A$, $P^m(x,A) \geq \lambda \nu(A)$. It is explicit if the set, constant and probability measure are known by the user. Finding explicit small sets that are practically useful is a challenging technical task, even for MCMC experts. If available, explicit minorization conditions could also be employed to identify regeneration times, leading to unbiased estimators amenable to parallel computation in the framework of @mykland1995regeneration and @brockwell:kadane:05. By contrast @johnson1996studying [@johnson1998coupling; @neal1999circularly] more explicitly address the question of coupling MCMC algorithms such that pairs of chains meet exactly, without analytical knowledge on the target distribution. The present article focuses on the use of these couplings in the framework of @glynn2014exact.
Coupled Metropolis–Hastings example\[subsec:exampleMH\]
-------------------------------------------------------
Before further examination of our estimator and its properties, we present a coupling of Metropolis–Hastings (MH) chains that will typically satisfy Assumptions \[assumption:marginaldistributions\]-\[assumption:sticktogether\] in realistic settings; this coupling was proposed in @johnson1998coupling as part of a method to diagnose convergence. We postpone discussion of couplings for other MCMC algorithms to Section \[sec:Practical-couplings\]. We recall that each iteration $t$ of the MH algorithm [@hastings:1970] begins by drawing a proposal $X^{\star}$ from a Markov kernel $q(X_{t},\cdot)$, where $X_t$ is the current state. The next state is set to $X_{t+1}=X^{\star}$ if ${U \leq \pi(X^{\star})q(X^{\star},X_{t})}/{\pi(X_{t})q(X_{t},X^{\star})}$, where $U$ denotes a uniform random variable on $[0,1]$, and $X_{t+1}=X_{t}$ otherwise.
We define a pair of chains so that each proceeds marginally according to the MH algorithm and jointly so that the chains will meet exactly after a random number of steps. We suppose that the chains are in states $X_t$ and $Y_{t-1}$, and consider how to generate $X_{t+1}$ and $Y_t$ so that $\{X_{t+1}=Y_{t}\}$ might occur.
If $X_t\neq Y_{t-1}$, the event $\{X_{t+1}=Y_{t}\}$ cannot occur if both chains reject their respective proposals, $X^\star$ and $Y^\star$. Meeting will occur if these proposals are identical and if both are accepted. Marginally, the proposals follow ${X^{\star}|X_{t}\sim q(X_{t},\cdot)}$ and $Y^{\star}|Y_{t-1}\sim q(Y_{t-1},\cdot)$. If $q(x,x^\star)$ can be evaluated for all $x,x^\star$, then one can sample from the maximal coupling between the two proposal distributions, which is the coupling of $q(X_{t},\cdot)$ and $q(Y_{t-1},\cdot)$ maximizing the probability of the event $\{X^\star = Y^\star\}$. How to sample from maximal couplings of continuous distributions is well-known [@thorisson2000coupling] and described in Section \[subsec:maximalcoupling\] for completeness. One can accept or reject the two proposals using a common uniform random variable $U$. The chains will stay together after they meet: at each step after meeting, the proposals will be identical with probability one, and jointly accepted or rejected with a common uniform variable. This coupling does not require explicit minorization conditions, nor contractive properties of a random function representation of the chain.
Time-averaged estimator \[subsec:Improvements\]
-----------------------------------------------
To motivate our next estimator, we note that we can compute $H_{k}(X,Y)$ for several values of $k$ from the same realization of the coupled chains, and that the average of these is unbiased as well. For any fixed integer $m$ with $m\geq k$, we can run coupled chains for $\max(m,\tau)$ iterations, compute the estimator $H_{\ell}(X,Y)$ for each $\ell\in\{k,\ldots,m\}$, and take the average ${H_{k:m}(X,Y)=(m-k+1)^{-1}\sum_{\ell=k}^{m}H_{\ell}(X,Y)}$, as we summarize in Algorithm \[alg:Unbiased-estimator-avg\]. We refer to $H_{k:m}(X,Y)$ as the *time-averaged estimator*; the estimator $H_k(X,Y)$ is retrieved when $m=k$. Alternatively we could average the estimators $H_{\ell}(X,Y)$ using weights $w_\ell\in \mathbb{R}$ for $\ell\in\{k,\ldots,m\}$, to obtain $\sum_{\ell = k}^m w_\ell H_\ell(X,Y)$. This will be unbiased if $\sum_{\ell = k}^m w_\ell = 1$. The computation of weights to minimize the variance of $\sum_{\ell = k}^m w_\ell H_\ell(X,Y)$ for a given test function $h$ is an open question.
1. Draw $X_{0}$,$Y_{0}$ from an initial distribution $\pi_{0}$ and draw $X_{1}\sim P(X_{0},\cdot)$.
2. Set $t = 1$. While $t < \max(m,\tau)$, where $\tau=\inf\{t\geq1:\;X_{t}=Y_{t-1}\}$,
- draw $(X_{t+1},Y_{t})\sim\bar{P}((X_{t},Y_{t-1}),\cdot)$,
- set $t \leftarrow t + 1$.
3. For each $\ell\in\left\{ k,...,m\right\} $, compute $H_{\ell}(X,Y) = h(X_{\ell})+\sum_{t=\ell+1}^{\tau-1}(h(X_{t})-h(Y_{t-1}))$.
4. Return $H_{k:m}(X,Y) = (m-k+1)^{-1}\sum_{\ell=k}^{m}H_{\ell}(X,Y)$; or compute $H_{k:m}(X,Y)$ with \[eq:averageestimator\].
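The four steps above can be sketched as follows, again on an illustrative two-state chain (stay probability $0.9$, stationary distribution uniform on $\{0,1\}$, $\pi_0=\delta_0$) with a hand-written maximal coupling; none of these choices are prescribed by the text.

```python
import random

STAY = 0.9  # illustrative two-state kernel: stay with probability STAY, else flip

def step(x, rng):
    # One draw from P(x, .).
    return x if rng.random() < STAY else 1 - x

def coupled_step(x, y, rng):
    # Maximal coupling of the two transition rows; faithful once x == y.
    if x == y:
        nxt = step(x, rng)
        return nxt, nxt
    u, q = rng.random(), 1 - STAY
    if u < q:
        return 0, 0
    if u < 2 * q:
        return 1, 1
    return x, y

def time_averaged_estimator(k, m, rng, h=lambda x: x):
    # Steps 1-4 of the algorithm above, on the toy chain.
    xs, ys = [0], [0]                  # step 1: X_0 = Y_0 ~ pi_0 = delta_0 ...
    xs.append(step(xs[0], rng))        # ... and X_1 ~ P(X_0, .)
    t, tau = 1, None
    while True:                        # step 2: iterate until t = max(m, tau)
        if tau is None and xs[t] == ys[t - 1]:
            tau = t
        if tau is not None and t >= max(m, tau):
            break
        xn, yn = coupled_step(xs[t], ys[t - 1], rng)
        xs.append(xn)
        ys.append(yn)
        t += 1
    total = 0.0
    for ell in range(k, m + 1):        # step 3: H_ell for ell = k, ..., m
        h_ell = h(xs[ell])
        for s in range(ell + 1, tau):
            h_ell += h(xs[s]) - h(ys[s - 1])
        total += h_ell
    return total / (m - k + 1)         # step 4: H_{k:m}(X, Y)
```

Averaging independent copies over many runs should recover $\mathbb{E}_\pi[h(X)]=0.5$ for $h(x)=x$, for any $0\leq k\leq m$.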
Rearranging terms in $(m-k+1)^{-1}\sum_{\ell=k}^{m}H_{\ell}(X,Y)$, we can write the time-averaged estimator as $$\begin{aligned}
\label{eq:averageestimator}
H_{k:m}(X,Y) & =\frac{1}{m-k+1}\sum_{\ell=k}^{m}h(X_{\ell}) +
\sum_{\ell=k}^{\tau-1}\min\left(1, \frac{\ell-k+1}{m-k+1}\right)(h(X_{\ell + 1}) - h(Y_{\ell})).\end{aligned}$$ The term $(m-k+1)^{-1}\sum_{\ell=k}^{m}h(X_{\ell})$ corresponds to a standard MCMC average with $m$ total iterations and a burn-in period of $k-1$ iterations. We can interpret the other term as a bias correction. If $\tau\leq k+1$, then the correction term equals zero. This provides some intuition about the choice of $k$ and $m$: large $k$ values lead to the bias correction being equal to zero with large probability, and large values of $m$ result in $H_{k:m}(X,Y)$ being similar to an estimator obtained from a long MCMC run. Thus we expect the variance of $H_{k:m}(X,Y)$ to be similar to that of MCMC for appropriate choices of $k$ and $m$.
The estimator $H_{k:m}(X,Y)$ requires $\tau$ calls to $\bar{P}$ and $\max(1,m+1-\tau)$ calls to $P$, which is comparable to $m$ calls to $P$ when $m$ is large. Thus both the variance and the cost of $H_{k:m}(X,Y)$ will approach those of MCMC estimators for large values of $k$ and $m$. This motivates the use of the estimator $H_{k:m}(X,Y)$ with $m>k$, since the time-averaged estimator allows us to limit the loss of efficiency associated with the removal of the burn-in bias. We discuss the choice of $k$ and $m$ in further detail in Section \[sec:theory\] and in the experiments.
Practical considerations \[sec:histograms\]
-------------------------------------------
Once we have run the first two steps of Algorithm \[alg:Unbiased-estimator-avg\], we can store $X_{k}$ and $(X_{t},Y_{t-1})$ for $k+1\leq t\leq m$ for later use: the test function $h$ does not have to be specified at run-time.
One typically resorts to thinning the output of an MCMC sampler if the test function of interest is unknown at run-time, if the memory cost of storing long chains is prohibitive, or if the cost of evaluating the test function of interest is significant compared to the cost of each MCMC iteration [e.g. @owen2017statistically]. This is also possible in the proposed framework: one can consider a variation of Algorithm \[alg:Unbiased-estimator-avg\] where each call to the Markov kernels $P$ and $\bar{P}$ would be replaced by multiple calls.
Algorithm \[alg:Unbiased-estimator-avg\] terminates after $\tau$ calls to $\bar{P}$ and $\max(1,m+1-\tau)$ calls to $P$. For the proposed couplings, calls to $\bar{P}$ are approximately twice as expensive as calls to $P$. Therefore, the cost of $H_{k:m}(X,Y)$ is comparable to $2\tau + \max(1,m+1-\tau)$ iterations of the underlying MCMC algorithm. This cost is random and will generally depend on the specific coupling underlying the estimator.
As in regular Monte Carlo estimation, the use of a fixed computation budget yielding a random number of complete estimator calculations requires care. The naive approach – to take the average of completed estimators and discard ongoing calculations – can produce biased results [@Glynn1990]. Still, unbiased estimation is possible, as in Corollary 7 of the aforementioned article.
In addition to estimating integrals, it is often of interest to visualize the target distribution. We use our estimator to construct histograms for the marginal distributions of $\pi$ by targeting $\mathbb{E}_\pi[\mathds{1}(X(i)\in A)]$ for various intervals $A$, where $X(i)$ denotes the $i$-th component of $X$. We can also obtain confidence intervals for these histogram probabilities by computing the variance of the estimators of $\mathbb{E}_{\pi}[\mathds{1}(X(i) \in A)]$. Such histograms are presented in Section \[sec:illustrations\] with $95$% confidence intervals as grey vertical boxes and point estimates as black vertical bars. Note that the proposed estimators can take values outside the range of the test function $h$, so that the proposed histograms may include negative values as probability estimates; see @jacob2015nonnegative on the possibility of non-negative unbiased estimators.
Signed measure estimator \[sec:signedmeasure\]
----------------------------------------------
We can formulate the proposed estimation procedure in terms of a signed measure $\hat{\pi}$ defined by $$\begin{aligned}
\label{eq:piestimator}
\hat{\pi}(\cdot) & = \frac{1}{m-k+1}\sum_{\ell=k}^{m} \delta_{X_{\ell}}(\cdot) +
\sum_{\ell=k}^{\tau - 1}\min\left(1, \frac{\ell-k+1}{m-k+1}\right)( \delta_{X_{\ell + 1}}(\cdot) - \delta_{Y_{\ell}}(\cdot)),\end{aligned}$$ obtained by replacing test function evaluations by delta masses in , as in Section 4 of @glynn2014exact. The measure $\hat{\pi}$ is of the form $\hat{\pi}(\cdot) = \sum_{\ell = 1}^N \omega_\ell \delta_{Z_\ell}(\cdot)$ with $\sum_{\ell=1}^N \omega_\ell = 1$ and where the atoms $(Z_\ell)$ are values among $(X_t)$ and $(Y_t)$. Some of the weights $(\omega_\ell)$ might be negative, making $\hat{\pi}$ a signed empirical measure. The unbiasedness property states $\mathbb{E}[\sum_{\ell = 1}^N \omega_\ell h(Z_\ell)]= \mathbb{E}_\pi[h(X)]$.
One can consider the convergence behavior of $\hat{\pi}^R(\cdot) = R^{-1} \sum_{r=1}^R \hat{\pi}^{(r)}(\cdot)$ towards $\pi$, where $(\hat{\pi}^{(r)})$ are independent replications of $\hat{\pi}$. @glynn2014exact obtain a Glivenko–Cantelli result for a similar measure related to their estimator. In the current setting, assume for simplicity that $\pi$ is univariate or else consider only one of its marginals. We redefine the weights and the atoms to write $\hat{\pi}^R(\cdot) =
\sum_{\ell = 1}^{N_R} \omega_\ell \delta_{Z_\ell}(\cdot)$. Introduce the function $s\mapsto
\hat{F}^R(s) = \sum_{\ell = 1}^{N_R} \omega_\ell \mathds{1}(Z_{\ell} \leq
s)$ on $\mathbb{R}$. Proposition \[prop:glivenko\] states that $\hat{F}^R$ converges to $F$ uniformly with probability one, where $F$ is the cumulative distribution function of $\pi$.
The function $s\mapsto \hat{F}^R(s)$ is not monotonically increasing because of negative weights among $(\omega_\ell)$. Therefore, for any $q\in(0,1)$ there might be more than one index $\ell$ such that $\sum_{i=1}^{\ell-1} \omega_i \leq q$ and $\sum_{i=1}^{\ell} \omega_i > q$; the quantile estimate might be defined as $Z_\ell$ for any such $\ell$. The convergence of $\hat{F}^R$ to $F$ indicates that all such estimates are expected to converge to the $q$-th quantile of $\pi$. Therefore the signed measure representation leads to a way of estimating quantiles of the target distribution in a consistent way as $R\to \infty$. The construction of confidence intervals for these quantiles, perhaps by bootstrapping the $R$ independent copies, stands as an interesting area for future research.
Another route to estimate quantiles of $\pi$ would be to project $\hat{\pi}^R$, or some of its marginals, onto the space of probability measures. For instance, one could search for the vector $(\bar{\omega}_\ell)$ in the $N_R$-simplex $\{\bar{\omega}_\ell, \ell = 1,\ldots,N_R: \bar{\omega}_\ell \geq 0, \sum_{\ell = 1}^{N_R} \bar{\omega}_\ell = 1 \}$ such that $\bar{\pi}^R(\cdot) = \sum_{\ell = 1}^{N_R} \bar{\omega}_\ell \delta_{Z_{\ell}}(\cdot)$ is closest to $\hat{\pi}^R(\cdot)$, in some sense. That sense could be a generalization of the Wasserstein metric to signed measures [@mainini2012description]. Another option would be to estimate $F$ using isotonic regression [@chatterjee2015risk], considering $\hat{F}^R(s)$ for various values $s$ as noisy measurements of $F(s)$; this amounts to another projection of $\hat{\pi}^R(\cdot)$ onto probability measures. One could hope that as $\hat{\pi}^R$ approaches $\pi$, the projection $\bar{\pi}^R$ would also converge to $\pi$, preserving consistency in $R\to \infty$. In that case, $(\bar{\omega}_\ell, Z_\ell)_{\ell=1}^{N_R}$ are weighted samples which can be used to approximate quantiles or plot histograms approximating $\pi$. Another appeal of $\bar{\pi}^R$ is that weighted averages $\sum_{i=1}^{N_R} \bar{\omega}_\ell h(Z_{\ell})$ are guaranteed to take values in the convex hull of the range of $h$.
Theoretical properties and guidance \[sec:theory\]
==================================================
We state our main result for the estimator $H_k(X,Y)$, which extends directly to $H_{k:m}(X,Y)$.
\[prop:unbiased\]Under Assumptions \[assumption:marginaldistributions\]-\[assumption:sticktogether\], for all $k\geq 0$, the estimator $H_{k}(X,Y)$ has expectation $\mathbb{E}_\pi[h(X)]$, a finite variance, and a finite expected computing time.
From the proof of Proposition \[prop:unbiased\], it is clear that Assumption \[assumption:meetingtime\] could be weakened: geometric tails of the meeting time are sufficient but not necessary. The main consequence of Proposition \[prop:unbiased\] is that an average of $R$ independent copies of $H_{k:m}(X,Y)$ converges to $\mathbb{E}_\pi[h(X)]$ as $R\to \infty$, and that a central limit theorem holds.
Concerning the signed measure estimator of \[eq:piestimator\], following @glynn2014exact we provide Proposition \[prop:glivenko\], which applies to univariate target distributions or to marginals of the target.
\[prop:glivenko\] Under Assumptions \[assumption:meetingtime\]-\[assumption:sticktogether\], for all $m\geq k \geq 0$, and assuming that $(X_t)_{t\geq 0}$ converges to $\pi$ in total variation, introduce the function $s\mapsto \hat{F}^R(s) = \sum_{\ell = 1}^{N_R} \omega_\ell \mathds{1}(Z_{\ell} \leq s)$, where $(\omega_\ell, Z_{\ell})_{\ell=1}^{N_R}$ are weighted atoms obtained from $R$ independent copies of $\hat{\pi}$ in . Denote by $F$ the cumulative distribution function of $\pi$. Then $\sup_{s\in\mathbb{R}} |\hat{F}^R(s) - F(s)| \xrightarrow[R\to\infty]{} 0 $ almost surely.
In Section \[subsec:variance\], we discuss the variance and efficiency of $H_{k:m}(X,Y)$, and the effect of $k$ and $m$. In Section \[subsec:Bounding-coupling-probabilities\], we investigate the verification of Assumption \[assumption:meetingtime\] using drift conditions.
Variance and efficiency \[subsec:variance\]
-------------------------------------------
Estimators $H^{(r)}_{k:m}(X,Y)$, for $r=1,\ldots,R$, can be generated in parallel and averaged. More estimators can be produced in a given computing budget if each estimator is cheaper to produce. The trade-off can be understood in the framework of @glynn1992asymptotic, also used in @Rhee:Glynn:2012 and @glynn2014exact, by defining the asymptotic inefficiency as the product of the variance and expected cost of the estimator. Indeed, the product of expected cost and variance is equal to the asymptotic variance of $R^{-1}\sum_{r=1}^R H^{(r)}_{k:m}(X,Y)$ as the computational budget, as opposed to the number of estimators $R$, goes to infinity [@glynn1992asymptotic]. Of primary interest is the comparison of this asymptotic inefficiency with the asymptotic variance of standard MCMC estimators.
We start by writing the time-averaged estimator of as $$\begin{aligned}
H_{k:m}(X,Y) &= \text{MCMC}_{k:m} + \text{BC}_{k:m},\end{aligned}$$ where $\text{MCMC}_{k:m}$ is the MCMC average $(m-k+1)^{-1} \sum_{\ell = k}^m h(X_\ell)$ and $\text{BC}_{k:m}$ is the bias correction term. The variance of $H_{k:m}(X,Y)$ can be written $$\begin{aligned}
\mathbb{V}[H_{k:m}(X,Y)] &= \mathbb{E}\left[(\text{MCMC}_{k:m} - \mathbb{E}_\pi[h(X)])^2 \right]
+ 2 \mathbb{E}\left[(\text{MCMC}_{k:m} - \mathbb{E}_\pi[h(X)]) \text{BC}_{k:m} \right]
+ \mathbb{E}\left[\text{BC}_{k:m}^2 \right].\end{aligned}$$ Defining the mean squared error of the MCMC estimator as $\text{MSE}_{k:m}=\mathbb{E}\left[(\text{MCMC}_{k:m} - \mathbb{E}_\pi[h(X)])^2 \right]$, the Cauchy–Schwarz inequality yields $$\label{eq:varianceHkmbound}
\mathbb{V}[H_{k:m}(X,Y)] \leq \text{MSE}_{k:m}
+ 2 \sqrt{\text{MSE}_{k:m}} \sqrt{\mathbb{E}\left[\text{BC}_{k:m}^2 \right]}
+ \mathbb{E}\left[\text{BC}_{k:m}^2 \right].$$ To bound $\mathbb{E}[\text{BC}_{k:m}^2]$ we introduce a geometric drift condition on the Markov kernel $P$.
\[assumption:drift\] The Markov kernel $P$ is $\pi$-invariant, $\varphi$-irreducible and aperiodic, and there exists a measurable function $V:\;\mathcal{X}\to [1,\infty)$, $\lambda\in (0,1)$, $b<\infty$ and a small set $\mathcal{C}$ such that for all $x\in \mathcal{X}$, $$\int P(x,dy) V(y) \leq \lambda V(x) + b\mathds{1}(x \in \mathcal{C}).$$
We refer the reader to [@meyn:tweedie:1993] for the definitions and general concepts of Markov chains, in particular Chapter 5 for aperiodicity, $\varphi$-irreducibility and small sets, and Chapter 15 for geometric drift conditions; see also @roberts2004general. Geometric drift conditions are known to hold for various MCMC algorithms [e.g. @roberts1996exponential; @jarneretroberts07; @atchade05; @khare2013; @choi2013polya; @pal2014]. Assumption \[assumption:drift\] often plays a central role in establishing geometric ergodicity [e.g. Theorem 9 in @roberts2004general]. We show next that this assumption enables an informative bound on $\mathbb{E}[\text{BC}_{k:m}^2]$.
\[prop1\] Suppose that Assumptions \[assumption:meetingtime\]-\[assumption:sticktogether\] and \[assumption:drift\] hold, with a function $V$ for which $\int V(x) \pi_0(dx)$ is finite. If the function $h$ is such that $\sup_{x\in\mathcal{X}} |h(x)|/V(x)^\beta<\infty$ for some $\beta\in [0,1/2)$, then for all $m \geq k\geq 0$, we have $$\mathbb{E}[\text{BC}_{k:m}^2] \leq \frac{C_{\delta,\beta} \delta_\beta^k}{(m-k+1)^2},$$ for some constants $C_{\delta,\beta} < +\infty$, and $\delta_\beta = \delta ^{1-2\beta} \in (0,1)$, with $\delta\in(0,1)$ as in Assumption \[assumption:meetingtime\].
Using Proposition \[prop1\], \[eq:varianceHkmbound\] becomes $$\label{eq:varianceHkmbound2}
\mathbb{V}[H_{k:m}(X,Y)] \leq \text{MSE}_{k:m}
+ 2 \sqrt{\text{MSE}_{k:m}} \frac{\sqrt{C_{\delta,\beta} \delta_\beta^{k}}}{m-k+1}
+\frac{C_{\delta,\beta} \delta_\beta^k}{(m-k+1)^2}.$$ The variance of $H_{k:m}(X,Y)$ is thus bounded by the mean squared error of an MCMC estimator, and additive terms that vanish geometrically when $k$ increases, and polynomially as $m-k$ increases.
In order to facilitate the comparison between the efficiency of $H_{k:m}(X,Y)$ and that of MCMC estimators, we introduce simplifying assumptions. First, the right-most terms of \[eq:varianceHkmbound2\] decrease geometrically with $k$, at a rate driven by $\delta_\beta = \delta^{1-2 \beta}$, where $\delta$ is as in Assumption \[assumption:meetingtime\]. This motivates a choice of $k$ depending on the distribution of the meeting time $\tau$. In practice, we can sample independent realizations of the meeting time and choose $k$ such that $\mathbb{P}(\tau > k)$ is small; i.e. we choose $k$ as a large quantile of the meeting times.
Secondly, as $k$ increases and for $m\geq k$, we expect $(m-k+1) \text{MSE}_{k:m}$ to converge to $\mathbb{V}[(m-k+1)^{-1/2}\sum_{t=k}^m h(X_t)]$, where $X_k$ would be distributed according to $\pi$. Denote this variance by $V_{k,m}$. The limit of $V_{k,m}$ as $m\to \infty$ is the asymptotic variance of the MCMC estimator, denoted by $V_\infty$. We make the simplifying assumption that $k$ is large enough for $(m-k+1) \text{MSE}_{k:m}$ to be approximately $V_{k,m}$. Furthermore, we approximate the cost of $H_{k:m}(X,Y)$ by the cost of $m$ calls to $P$. Dropping the third term on the right-hand side of \[eq:varianceHkmbound2\], which is of smaller magnitude than the second term, we obtain the approximate inequality $$\mathbb{E}[2\tau + \max(1, m+1-\tau)] \mathbb{V}[H_{k:m}(X,Y)] \lessapprox \frac{m}{m-k+1} V_{k,m}
+ 2 \frac{m \sqrt{V_{k,m}C_{\delta,\beta}\delta_\beta^{k}}}{(m-k+1)^{3/2}}.$$ In order for the left-hand side to be comparable to the asymptotic variance of MCMC, we can choose $m$ such that $m / (m-k+1) \approx 1$, e.g. by defining $m$ as a large multiple of $k$. The second term on the right-hand side is negligible compared to the first as either $k$ or $m$ increases. This informal series of approximations suggests that we can retrieve an asymptotic efficiency comparable to the underlying MCMC estimators with appropriate choices of $k$ and $m$. In other words, the bias of MCMC can be removed at the cost of an increased variance, which can in turn be reduced by choosing large enough values of $k$ and $m$. Large values of $k$ and $m$ are to be traded against the desired level of parallelism: one might prefer to keep $m$ small, yielding a suboptimal efficiency for $H_{k:m}(X,Y)$, but enabling more independent copies to be generated in a given computing time.
Thus we propose to choose $k$ such that $\mathbb{P}(\tau > k)$ is small, and $m$ as a large multiple of $k$, for the asymptotic inefficiency to be comparable to that of the underlying MCMC algorithm; more precise recommendations would depend on the target, on the budget constraint and on the degree of parallelism of available hardware.
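This recommendation can be sketched as follows, once more on an illustrative two-state chain (stay probability $0.9$) with a hand-written maximal coupling and independent draws of $X_0,Y_0$ from a uniform $\pi_0$; all constants, including the $90$% quantile and the multiple $10$, are illustrative choices.

```python
import random

STAY = 0.9  # illustrative two-state kernel

def step(x, rng):
    # One draw from P(x, .): stay with probability STAY, else flip.
    return x if rng.random() < STAY else 1 - x

def coupled_step(x, y, rng):
    # Maximal coupling of the two transition rows; faithful once x == y.
    if x == y:
        nxt = step(x, rng)
        return nxt, nxt
    u, q = rng.random(), 1 - STAY
    if u < q:
        return 0, 0
    if u < 2 * q:
        return 1, 1
    return x, y

def meeting_time(rng):
    # tau = inf{t >= 1 : X_t = Y_{t-1}}, with X_0, Y_0 ~ pi_0 drawn independently.
    x, y = rng.randint(0, 1), rng.randint(0, 1)
    x = step(x, rng)                   # X_1 ~ P(X_0, .)
    t = 1
    while x != y:
        x, y = coupled_step(x, y, rng)
        t += 1
    return t

rng = random.Random(0)
taus = sorted(meeting_time(rng) for _ in range(1000))
k = taus[int(0.9 * len(taus))]         # k: a large (here 90%) quantile of the meeting times
m = 10 * k                             # m: a large multiple of k
```

These preliminary runs are cheap relative to the main computation, and the resulting $(k, m)$ can then be used for all $R$ parallel copies of $H_{k:m}(X,Y)$.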
Verifying Assumption \[assumption:meetingtime\] \[subsec:Bounding-coupling-probabilities\]
------------------------------------------------------------------------------------------
We discuss how Assumption \[assumption:drift\] can be used to verify Assumption \[assumption:meetingtime\]. Informally, Assumption \[assumption:drift\] guarantees that the bivariate chain $\{(X_t,Y_{t-1}),\;t\geq 1\}$ visits $\Cset\times\Cset$ infinitely often, where $\Cset$ is a small set. Therefore, if there is a positive probability of the event $\{X_{t+1} =Y_t\}$ for every $t$ such that $(X_t,Y_{t-1})\in\Cset\times\Cset$, then we expect Assumption \[assumption:meetingtime\] to hold. The next result formalizes that intuition. The proof is based on a modification of an argument by [@doucetal04]. For convenience, we introduce $\D=
\{(x,y)\in\mathcal{X}\times \mathcal{X}:\; x=y\}$. Hence Assumption \[assumption:sticktogether\] reads $\bar P((x,x),\D) =1$, for all $x\in
\mathcal{X}$.
\[prop:vgeometric\] Suppose that $P$ satisfies Assumption \[assumption:drift\] with a small set $\mathcal{C}$ of the form $\mathcal{C} = \{x: V(x)\leq L\}$ where $\lambda + \frac{b}{1+L} < 1$. Suppose also that there exists $\epsilon\in (0,1)$ such that $$\label{mino}
\inf_{x,y\in\mathcal{C}} \bar P((x,y),\D) \geq \epsilon.$$ Then there exists a finite constant $C'$, and $\kappa \in (0,1)$, such that for all $n\geq 1$, $$\mathbb{P}(\tau>n)\leq C' \pi_0(V) \kappa^n,$$ where $\pi_0(V) = \int V(x) \pi_0(dx)$. Hence Assumption \[assumption:meetingtime\] holds as long as $\pi_0(V)<\infty$.
Couplings of MCMC algorithms \[sec:Practical-couplings\]
========================================================
To compute our unbiased estimators, we must couple Markov chains in a way that satisfies Assumptions \[assumption:meetingtime\]-\[assumption:sticktogether\]. We discuss such couplings for Metropolis–Hastings, Gibbs, and Hamiltonian Monte Carlo. These couplings are widely applicable since they do not require extensive analytical knowledge of the target distribution; however they are not optimal in general, and we expect case-specific constructions to yield more efficient estimators. We begin in Section \[subsec:maximalcoupling\] by reviewing maximal couplings.
Sampling from a maximal coupling\[subsec:maximalcoupling\]
----------------------------------------------------------
The maximal coupling between two distributions $p$ and $q$ on a space $\mathcal{X}$ is the distribution of a pair of random variables $(X,Y)$ that maximizes $\mathbb{P}(X=Y)$, subject to the marginal constraints $X\sim p$ and $Y\sim q$. A procedure to sample from a maximal coupling is described in Algorithm \[alg:maximalcoupling\]. Here $\mathcal{U}([a,b])$ refers to the uniform distribution on the interval $[a,b]$ for $a<b$. We write $p$ and $q$ for both these distributions and their probability density functions with respect to a common dominating measure. Algorithm \[alg:maximalcoupling\] is well-known and described e.g. in Section 4.5 of Chapter 1 of @thorisson2000coupling; in @johnson1998coupling it is termed $\gamma$-coupling.
We justify Algorithm \[alg:maximalcoupling\] and compute its cost. Denote by $(X,Y)$ the output of the algorithm. First, $X$ follows $p$ from step 1. To prove that $Y$ follows $q$, we introduce a measurable set $A$ and check that $\mathbb{P}(Y\in A)=\int_{A}q(y)dy$. We write $\mathbb{P}(Y\in A)=\mathbb{P}(Y\in A,\text{step 1})+\mathbb{P}(Y\in A,\text{step 2})$, where the events $\{\text{step 1}\}$ and $\{\text{step 2}\}$ refer to the algorithm terminating at step 1 or 2. We compute $$\begin{aligned}
\mathbb{P}\left(Y\in A,\text{step 1}\right) & =\int_{A}\int_{0}^{+\infty}\mathds{1}\left(w\leq q(x)\right)\frac{\mathds{1}\left(0\leq w\leq p(x)\right)}{p(x)}p(x)dwdx
=\int_{A}\min(p(x),q(x))dx,\end{aligned}$$ from which we deduce that $\mathbb{P}\left(\text{step 1}\right)=\int_{\mathcal{X}}\min(p(x),q(x))dx$. For $\mathbb{P}\left(Y\in A,\text{step 2}\right)$ to be equal to $\int_{A}(q(x)-\min(p(x),q(x)))dx$, we need $$\begin{aligned}
\int_{A}(q(x)-\min(p(x),q(x)))dx
&=\mathbb{P}\left(Y\in A|\text{step 2}\right)\left(1-\int_{\mathcal{X}}\min(p(x),q(x))dx\right),\end{aligned}$$ and we conclude that the distribution of $Y$ given $\{\text{step 2}\}$ should have a density equal to $\tilde{q}(x)=(q(x)-\min(p(x),q(x)))/(1-\int\min(p(x^{\prime}),q(x^{\prime}))dx^{\prime})$ for all $x$. Step 2 is a standard rejection sampler using $q$ as a proposal distribution to target $\tilde{q}$, which concludes the proof that $Y\sim q$. We now confirm that Algorithm \[alg:maximalcoupling\] indeed maximizes the probability of $\{X = Y\}$. Under the algorithm, $$\mathbb{P}(X=Y)=\int_{\mathcal{X}}\min(p(x),q(x))dx=\frac{1}{2}\int_{\mathcal{X}}(p(x)+q(x)-|p(x)-q(x)|)dx=1-d_{\text{TV}}(p,q),$$ where $d_{\text{TV}}(p,q)=\nicefrac{1}{2}\int_{\mathcal{X}}|p(x)-q(x)|dx$ is the total variation distance. By the coupling inequality [@lindvall2002lectures], this proves that the algorithm implements a maximal coupling.
To address the cost of Algorithm \[alg:maximalcoupling\], observe that the probability of acceptance in step 2 is given by $$\begin{aligned}
\mathbb{P}(W^{\star}\geq p(Y^{\star})) &= 1-\int_{\mathcal{X}}\min\left(p(y),q(y)\right)dy.\end{aligned}$$ Step $1$ costs one draw from $p$, one evaluation of $p$ and one of $q$. Each attempt in the rejection sampler of step 2 costs one draw from $q$, one evaluation of $p$ and one of $q$. We refer to the cost of one draw and two evaluations as “one unit”, for simplicity. Then, there is a geometric number of attempts in step 2, with mean $(1-\int_{\mathcal{X}}\min\left(p(y),q(y)\right)dy)^{-1}$, and step 2 occurs with probability $1-\int_{\mathcal{X}}\min\left(p(y),q(y)\right)dy$. Therefore the expected cost is two units, for all distributions $p$ and $q$. To summarize, the expected cost of the algorithm does not depend on the total variation distance between $p$ and $q$, and the probability of $\{X = Y\}$ is precisely one minus that distance.
1. Sample $X\sim p$ and $W|X\sim\mathcal{U}([0,p(X)])$. If $W\leq q(X)$, output $(X,X)$.
2. Otherwise, sample $Y^{\star}\sim q$ and $W^{\star}|Y^{\star}\sim\mathcal{U}([0,q(Y^{\star})])$ until $W^{\star}>p(Y^{\star})$, and output $(X,Y^{\star})$.
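The two steps above can be sketched for a pair of unit-variance Normal distributions; the choice $p=\mathcal{N}(\mu_1,1)$, $q=\mathcal{N}(\mu_2,1)$ is an illustrative assumption, not from the text.

```python
import math
import random

def normal_pdf(x, mu):
    # Unit-variance Normal density; illustrative choice of p and q below.
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def sample_maximal_coupling(mu1, mu2, rng):
    # Step 1: X ~ p = N(mu1, 1) and W | X ~ U([0, p(X)]); if W <= q(X), output (X, X).
    x = rng.gauss(mu1, 1.0)
    if rng.uniform(0.0, normal_pdf(x, mu1)) <= normal_pdf(x, mu2):
        return x, x
    # Step 2: propose Y* ~ q = N(mu2, 1) until W* > p(Y*), then output (X, Y*).
    while True:
        y = rng.gauss(mu2, 1.0)
        if rng.uniform(0.0, normal_pdf(y, mu2)) > normal_pdf(y, mu1):
            return x, y
```

For $p=\mathcal{N}(0,1)$ and $q=\mathcal{N}(1,1)$ we have $d_{\text{TV}}(p,q)=2\Phi(1/2)-1\approx 0.383$, so the output should satisfy $\{X=Y\}$ with frequency close to $0.617$, while the marginals remain exactly $p$ and $q$.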
Alternative couplings described in @johnson1996studying [@neal1999circularly] include the following strategy, for univariate $p$ and $q$. Let $F_p^-$ and $F_q^-$ be the quantile functions associated with $p$ and $q$, and let $U$ denote a uniform random variable on $[0,1]$. Then $X=F_p^-(U)$ and $Y=F_q^-(U)$ computed with the same realization of $U$ constitute an optimal transport coupling of $p$ and $q$, also called an “increasing rearrangement” [@villani2008optimal Chapter 1]. Such couplings minimize the expected distance between $X$ and $Y$, which could be useful in the present context in combination with maximal couplings; this is left as an avenue of future research. Note that in multivariate settings, optimal transport of Normal distributions can be implemented following e.g. @knott1984optimal; however, sampling from the optimal transport between arbitrary distributions is a challenging task.
Metropolis–Hastings \[subsec:Metropolis=002013Hastings\]
--------------------------------------------------------
A coupling of MH chains due to @johnson1998coupling was described in Section \[subsec:exampleMH\]; the coupled kernel $\bar{P}((X_t,Y_{t-1}),\cdot)$ is summarized in the following procedure.
1. Sample $(X^\star,Y^\star)| (X_t,Y_{t-1})$ from a maximal coupling of $q(X_t,\cdot)$ and $q(Y_{t-1},\cdot)$.
2. Sample $U\sim \mathcal{U}([0,1])$.
3. If $U\leq \min(1,\pi(X^\star)q(X^\star,X_t)/\pi(X_t)q(X_t,X^\star))$, then $X_{t+1}=X^\star$, otherwise $X_{t+1}=X_t$.
4. If $U\leq \min(1,\pi(Y^\star)q(Y^\star,Y_{t-1})/\pi(Y_{t-1})q(Y_{t-1},Y^\star))$, then $Y_{t}=Y^\star$, otherwise $Y_{t}=Y_{t-1}$.
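The four steps above can be sketched for a standard Normal target and a symmetric random-walk proposal $q(x,\cdot)=\mathcal{N}(x,\sigma^2)$; the target, the proposal family, and $\sigma$ are illustrative assumptions. With a symmetric proposal, the acceptance ratio reduces to $\pi(X^\star)/\pi(X_t)$.

```python
import math
import random

def log_target(x):
    # log pi(x) up to a constant, for a standard Normal target (illustrative).
    return -0.5 * x * x

def proposal_pdf(x_star, x, sigma):
    # Density of q(x, .) = N(x, sigma^2) evaluated at x_star.
    return math.exp(-0.5 * ((x_star - x) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def maximal_proposal_coupling(x, y, sigma, rng):
    # Step 1: maximal coupling of q(x, .) and q(y, .), as in Section [subsec:maximalcoupling].
    xs = rng.gauss(x, sigma)
    if rng.uniform(0.0, proposal_pdf(xs, x, sigma)) <= proposal_pdf(xs, y, sigma):
        return xs, xs
    while True:
        ys = rng.gauss(y, sigma)
        if rng.uniform(0.0, proposal_pdf(ys, y, sigma)) > proposal_pdf(ys, x, sigma):
            return xs, ys

def coupled_mh_kernel(x, y, sigma, rng):
    # Steps 2-4: one draw from bar-P((X_t, Y_{t-1}), .), with a common uniform
    # used for both accept/reject decisions.
    xs, ys = maximal_proposal_coupling(x, y, sigma, rng)
    log_u = math.log(1.0 - rng.random())  # log of a uniform in (0, 1]
    x_next = xs if log_u <= log_target(xs) - log_target(x) else x
    y_next = ys if log_u <= log_target(ys) - log_target(y) else y
    return x_next, y_next
```

After meeting, the proposals coincide with probability one and the common uniform yields identical decisions, so the chains remain faithful.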
Here we address the verification of Assumptions \[assumption:marginaldistributions\]-\[assumption:sticktogether\]. Assumption \[assumption:marginaldistributions\] can be verified for MH chains under conditions on the target and the proposal [@nummelin:mcmc:2002; @roberts2004general]. In some settings the explicit drift function given in Theorem 3.2 of @roberts1996geometric may be used to verify Assumption \[assumption:meetingtime\] as in Section \[subsec:Bounding-coupling-probabilities\]. In certain settings, the probability of coupling at the next step given that the chains are in $X_t$ and $Y_{t-1}$ can be controlled as follows. First, the probability of proposing the same value $X^\star$ depends on the total variation distance between $q(X_{t},\cdot)$ and $q(Y_{t-1},\cdot)$, which is typically strictly positive if $X_t$ and $Y_{t-1}$ are in bounded subsets of $\mathcal{X}$. Furthermore, the probability of accepting $X^\star$ is often lower-bounded away from zero on bounded subsets of $\mathcal{X}$, for instance when $\pi(x)>0$ for all $x\in \mathcal{X}$.
In high dimension, the probability of proposing the same value $X^\star$ is low unless $X_t$ is close to $Y_{t-1}$. It might therefore be preferable to use a series of updates on low-dimensional components of the chain states as in a Metropolis-within-Gibbs strategy (see Section \[subsec:Gibbs\]), or to combine maximal couplings with optimal transport couplings mentioned in the previous section. Scalability with respect to dimension is investigated in Section \[subsec:Scaling-dimension\].
The optimal choice of proposal distribution for a single MH chain might not be optimal in the proposed coupling construction. For instance, in the case of Normal random walk proposals with variance $\Sigma$, larger variances lead to smaller total variation distances between $q(X_t,\cdot)$ and $q(Y_{t-1},\cdot)$ and thus larger coupling probabilities for the proposals. However, meeting events only occur if proposals are accepted, which is unlikely if $\Sigma$ is too large. This trade-off could lead to optimal choices of $\Sigma$ that are different from the optimal choices known for the marginal chains [@roberts1997weak], which deserves further investigation.
Among extensions of the Metropolis–Hastings algorithm, Metropolis adjusted Langevin algorithms [e.g. @roberts1996exponential] are such that the proposal distribution given $X_t$ is a Normal with mean $X_t + h\nabla \log \pi(X_t)/2$ and variance $h \Sigma$, for some tuning parameter $h>0$ and covariance matrix $\Sigma$. These Normal proposal distributions could be maximally coupled as well. Another important extension consists in adapting the proposal distribution during the run of the chains [@andrieu2008tutorial; @atchade:fort:2011]; it is unclear whether such strategies could be used in the proposed framework.
Hamiltonian Monte Carlo \[subsec:Hamiltonian\]
----------------------------------------------
Hamiltonian or Hybrid Monte Carlo [HMC, @Duane:1987; @Neal:1993; @neal2011mcmc; @durmus2017convergence; @betancourtetal:2017] is a popular MCMC algorithm using gradients of the target density, in which each iteration $t$ is defined as follows. The state $X_t$ is treated as the initial position $q(0)$ of a particle under a potential energy function given by $-\log \pi$. The initial momentum $p(0)$ of the particle is drawn at random, typically from a Normal distribution [see @livingstone2017kinetic]. One can numerically approximate the solution of Hamiltonian dynamics defined by: $$\frac{d}{dt}q(t) = p(t),\quad \frac{d}{dt}p(t) = \nabla \log \pi(q(t)),$$ over a time interval $[0,T]$, where $T$ denotes the trajectory length. For instance, one might use a leap-frog integrator [@hairer:2005] with $L$ steps and a step-size of $\epsilon$ so that $T = \epsilon L$. Finally, a Metropolis–Hastings step sets $X_{t+1}$ either to $q(T)$ or $X_t$.
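The iteration just described can be sketched as follows, with the leap-frog integrator made explicit; the one-dimensional standard Normal target in the usage note, and the step-size and trajectory-length values, are illustrative assumptions.

```python
import math, random

def leapfrog(q, p, grad_logpi, eps, L):
    """Leap-frog integration of the Hamiltonian dynamics above:
    L steps of size eps, so the trajectory length is T = eps * L."""
    q, p = list(q), list(p)
    g = grad_logpi(q)
    for _ in range(L):
        p = [pj + 0.5 * eps * gj for pj, gj in zip(p, g)]  # half step, momentum
        q = [qj + eps * pj for qj, pj in zip(q, p)]        # full step, position
        g = grad_logpi(q)
        p = [pj + 0.5 * eps * gj for pj, gj in zip(p, g)]  # half step, momentum
    return q, p

def hmc_step(x, logpi, grad_logpi, eps, L, rng):
    """One HMC iteration: draw a Normal momentum, integrate the dynamics,
    then accept or reject with a Metropolis--Hastings step on the
    total energy H = -log pi(q) + |p|^2 / 2."""
    p0 = [rng.gauss(0.0, 1.0) for _ in x]
    q, p = leapfrog(x, p0, grad_logpi, eps, L)
    h0 = -logpi(x) + 0.5 * sum(v * v for v in p0)
    h1 = -logpi(q) + 0.5 * sum(v * v for v in p)
    return q if math.log(rng.random()) <= h0 - h1 else x
```

For a standard Normal target one may take `logpi = lambda z: -0.5 * sum(v * v for v in z)` and `grad_logpi = lambda z: [-v for v in z]`.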
The use of common random numbers for the initial velocities and the uniform variables of the acceptance steps leads to pairs of chains converging to one another, under conditions on the target distribution such as strict log-concavity. This is used in @mangoubi2017rapid to quantify the mixing properties of HMC. In @heng2017unbiased, such coupled HMC steps are combined with coupled random walk MH steps that produce exact meeting times, and the verification of Assumptions \[assumption:marginaldistributions\]-\[assumption:sticktogether\] is discussed.
Gibbs sampling \[subsec:Gibbs\]
-------------------------------
Gibbs sampling consists in updating components of a Markov chain by alternately sampling from conditional distributions of the target [Chapter 10 of @robert2004monte]. In Bayesian statistics, these conditional distributions sometimes belong to a standard family such as Normal, Gamma, or Inverse Gamma. If all conditional distributions are standard, then the Markov kernel of the Gibbs sampler is itself tractable, and a maximal coupling can be implemented following Section \[subsec:maximalcoupling\]. However, in many cases at least one of the conditional updates is intractable and requires a Metropolis step. We therefore focus on maximal couplings of each conditional update, using either full conditional distributions or Metropolis updates. Controlling the probability of meeting at the next step over a set, as required for the application of Proposition \[prop:vgeometric\], can be done on a case-by-case basis. Drift conditions for Gibbs samplers also tend to rely on case-by-case arguments [see e.g. @rosenthal1996analysis].
In generic state space models, the conditional particle filter is an MCMC algorithm targeting the distribution of the latent process given the observations. It is a Gibbs sampler on an extended state space [@andrieu:doucet:holenstein:2010]. Couplings of such Gibbs samplers are the focus of @jacob2017smoothing, where a combination of common random numbers and maximal couplings leads to pairs of chains that satisfy Assumptions \[assumption:marginaldistributions\]-\[assumption:sticktogether\].
Scaling with the dimension \[subsec:Scaling-dimension\]
-------------------------------------------------------
We compare the scaling behavior of MH, Gibbs, and HMC couplings as the dimension $d$ of the target distribution increases.
Consider a $d$-dimensional Normal target distribution $\mathcal{N}(0, V)$, where $V$ is a $d\times d$ covariance matrix with entry $(i,j)$ equal to $0.5^{|i-j|}$. In an MH algorithm on the joint target, we consider Normal random walk proposals, with the covariance matrix $\Sigma$ of the proposals set to $V/d$ (“scaling 1” in Figure \[fig:scalingdimension\]). The division by $d$ follows the recommendations of @roberts1997weak. We consider another strategy where $\Sigma$ is set to $V$ (“scaling 2”). Our second algorithm is an MH-within-Gibbs approach where each univariate component is updated with a Metropolis step using Normal proposals with unit variance. We consider performing $1$, $2$, or $5$ such steps for each component under a systematic scan of the components. Each iteration refers to a complete scan of all components. Finally, we consider a mixture of MH and HMC kernels. At each iteration we perform an HMC step with $90\%$ probability, and otherwise we perform an MH step with a Normal proposal distribution with variance $10^{-5}$ times the identity matrix. The HMC kernel uses a leap-frog integrator with $20$ sub-steps, and we try different trajectory lengths. For all strategies, we initialize the chains from the target distribution.
For a range of values of the dimension $d$, we run the coupled chains until they meet. We present the average meeting time over $R=100$ independent repetitions in Figure \[fig:scalingdimension\] to visualize the relationship between $\mathbb{E}[\tau]$ and dimension for various MCMC algorithms. Figure \[fig:scaling:rwmh\] illustrates that the coupling of MH on the joint space fails quickly as the dimension $d$ increases. Note the logarithmic scale on the $y$-axis. We obtain worse performance with $\Sigma = V/d$ than with $\Sigma = V$, even though the marginal chains mix more quickly with $\Sigma = V/d$. The MH-within-Gibbs approach scales more favorably with dimension, as indicated in Figure \[fig:scaling:gibbs\]. Performing multiple MH steps per component further decreases the average meeting time. Finally, we present results for HMC in Figure \[fig:scaling:hmc\] for three different trajectory lengths. In this setting, the scaling of HMC with respect to the dimension is qualitatively similar to that of the Gibbs sampler and is sensitive to the choice of trajectory length.
These experiments suggest that the proposed methodology can be implemented in realistic dimensions. In particular, strategies that leverage the dependence structure of the target or gradient information can result in short meeting times even in high dimension.
Illustrations \[sec:illustrations\]
===================================
Bimodal target\[subsec:Random-Walk-Metropolis-bimodal\]
-------------------------------------------------------
We use a bimodal target distribution and a random walk Metropolis–Hastings algorithm to illustrate various aspects of the proposed method and highlight challenging situations.
We consider a mixture of univariate Normals with density $\pi(x)=0.5\cdot \mathcal{N}(x;-4,1)+0.5\cdot \mathcal{N}(x;+4,1)$, which we sample from using random walk Metropolis–Hastings with Normal proposal distributions of variance $\sigma_q^2 = 9$. This enables regular jumps between the modes of $\pi$. We set the initial distribution $\pi_0$ to $\mathcal{N}(10,10^2)$, so that chains are more likely to start near the mode at $+4$ than the mode at $-4$. Over $1,000$ independent runs, we find that the meeting time $\tau$ has an average of $20$ and a $99\%$ quantile of $105$.
We consider the task of estimating $\int \mathds{1}(x > 3)\pi(dx) \approx 0.421$. First, we consider the choice of $k$ and $m$. Over $1,000$ independent experiments, we approximate the expected cost $\mathbb{E}[2\tau + \max(1,m-\tau+1)]$, the variance $\mathbb{V}[H_{k:m}(X,Y)]$, and compute the inefficiency as the product of the two. We then divide the inefficiency by the asymptotic variance of the MCMC estimator, denoted by $V_\infty$, which we obtain from $10^6$ iterations and a burn-in period of $10^4$ using the R package CODA [@plummer2006coda].
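For concreteness, the estimator $H_{k:m}(X,Y)$ studied here can be computed from stored trajectories as below; this is a sketch assuming the lag-one convention $(X_t, Y_{t-1})$ used throughout, with `X` holding $X_0,\ldots,X_m$, `Y` holding $Y_0,Y_1,\ldots$, and `tau` the meeting time such that $X_t = Y_{t-1}$ for $t \geq \tau$.

```python
def H_km(h, X, Y, tau, k, m):
    """Time-averaged unbiased estimator: the usual MCMC average of
    h(X_k), ..., h(X_m), plus a bias-correction term built from the
    coupled chain that vanishes when the chains meet by iteration k."""
    mcmc_avg = sum(h(X[t]) for t in range(k, m + 1)) / (m - k + 1)
    correction = sum(min(1.0, (t - k) / (m - k + 1)) * (h(X[t]) - h(Y[t - 1]))
                     for t in range(k + 1, tau))
    return mcmc_avg + correction
```

When the chains meet before iteration $k$, the correction sum is empty and $H_{k:m}$ reduces to the standard ergodic average over $X_k,\ldots,X_m$.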
We present the results of this test in Table \[table:mixture:easy\]. First, we see that the inefficiency is sensitive to the choice of $k$ and $m$: simply setting $k$ and $m$ to one would be highly inefficient. Secondly, we see that when $k$ and $m$ are large enough we can retrieve an inefficiency comparable to that of the underlying MCMC algorithm. A relative inefficiency close to $1$ indicates that the estimator variance is similar to that of MCMC. The ideal choice of $k$ and $m$ will depend on tradeoffs between inefficiency, the desired level of parallelism, and the number of processors available. Whether it is preferable to run coupled chains with $k=200$, $m = 2,000$ for an inefficiency of $1.3$ or $k=200$, $m=4,000$ for an inefficiency of $1.2$ is likely to depend on the context. We present a histogram of the target distribution, obtained using $k=200$, $m=4,000$, in Figure \[fig:mixture:histogram:easy\].
Next, we consider a more challenging case by setting $\sigma_q^2 = 1^2$, and we again use $\pi_0=\mathcal{N}(10,10^2)$. These values make it difficult for the chains to jump between the modes of $\pi$. Over $R=1,000$ runs we find an average meeting time of $769$, with a $99\%$ quantile of $9,186$. When the chains start in different modes, the meeting times are often dramatically larger than when the chains start near the same mode. One can still recover reasonable estimates of the target distribution, but $k$ and $m$ have to be set to larger values. With $k=20,000$ and $m=30,000$, we obtain the $95\%$ confidence interval $[0.397, 0.430]$ for $\int \mathds{1}(x > 3)\pi(dx) \approx 0.421$. We show a histogram of $\pi$ in Figure \[fig:mixture:histogram:intermediate\].
Finally, we consider a third case in which $\sigma_q^2$ is again set to one, but $\pi_0$ is set to $\mathcal{N}(10,1)$. This initialization makes it unlikely for a chain to start near the mode at $-4$. The pair of chains typically converge to the right-most mode and meet in a small number of iterations. Over $R=1,000$ replications, we find an average meeting time of $9$ and a $99\%$ quantile of $35$. A $95\%$ confidence interval on $\int \mathds{1}(x > 3)\pi(dx)$ obtained from the estimators with $k=50$, $m=500$ is $[0.799, 0.816]$, far from the true value of $0.421$. The associated histogram of $\pi$ is shown in Figure \[fig:mixture:histogram:hard\].
Sampling $9,000$ additional estimators yields a $95\%$ confidence interval $[-0.353, 1.595]$, again using $k=50$, $m=500$. Among these extra $9,000$ values, a few correspond to cases where one chain jumped to the left-most mode before meeting the other. This resulted in large meeting times, and thus in a much larger empirical variance. Upon noticing a large empirical variance one can then decide to use larger values of $k$ and $m$; the challenging situation is when the empirical variance is small even though the number of replicates is seemingly large. We conclude that although our estimators are unbiased and are consistent in the limit as $R \to \infty$, poor performance of the underlying Markov chains can still produce misleading results for any finite $R$.
Gibbs sampler for nuclear pump failure data\[subsec:Gibbs-pump-failures\]
-------------------------------------------------------------------------
Next we consider a classic Gibbs sampler for a model of pump failure counts, used e.g. in @murdoch1998exact to illustrate the implementation of perfect samplers for continuous distributions. We refer to the latter article for the case-specific calculations associated with the implementation of perfect samplers. Here we compare the proposed method with the regeneration approach of @mykland1995regeneration, which was illustrated on the same example and which was motivated by the same practical concerns: choosing the number of iterations to discard as burn-in, constructing confidence intervals, and using parallel processors.
The data consist of operating times $(t_{n})_{n=1}^{K}$ and failure counts $(s_{n})_{n=1}^{K}$ for $K=10$ pumps at the Farley-1 nuclear power station, as first described in @gaver1987robust. The model specifies $s_{n}\sim
\text{Poisson}(\lambda_{n}t_{n})$ and $\lambda_{n}\sim
\text{Gamma}(\alpha,\beta)$, where $\alpha=1.802$, $\beta \sim \text{Gamma}\left(\gamma,\delta\right)$, $\gamma=0.01$, and $\delta=1$. The Gibbs sampler for this model consists of the following update steps: $$\begin{aligned}
\lambda_n\ | \text{ rest} &\sim \text{Gamma}(\alpha+s_n, \beta + t_n) \quad \text{for }n = 1, \ldots, K, \\
\beta\ | \text{ rest} &\sim \text{Gamma}(\gamma + 10\alpha, \delta + \sum_{n=1}^K \lambda_n).\end{aligned}$$ Here the Gamma $(\alpha,\beta)$ distribution refers to the distribution with density $x\mapsto \Gamma(\alpha)^{-1} \beta^\alpha
x^{\alpha-1} \exp(-\beta x)$. We initialize all parameter values to 1. To form our estimator we apply maximal couplings at each conditional update of the Gibbs sampler, as described in Section \[subsec:Gibbs\].
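A minimal sketch of these coupled conditional updates follows. Since both chains share the data, the Gamma conditionals differ only in their rate parameters, so each update is a maximal coupling of two Gammas with a common shape. The pairing below is lag-free for simplicity, and the data in the usage example are placeholders, not the actual Farley-1 counts; note that Python's `gammavariate` is parameterized by scale, hence the `1/rate`.

```python
import math, random

def gamma_logpdf(x, shape, rate):
    # log density of Gamma(shape, rate) as defined in the text
    return (shape * math.log(rate) + (shape - 1) * math.log(x)
            - rate * x - math.lgamma(shape))

def max_coupling_gamma(shape, rate1, rate2, rng):
    """Maximal coupling of Gamma(shape, rate1) and Gamma(shape, rate2)."""
    x = rng.gammavariate(shape, 1.0 / rate1)
    if math.log(rng.random()) + gamma_logpdf(x, shape, rate1) <= gamma_logpdf(x, shape, rate2):
        return x, x
    while True:
        y = rng.gammavariate(shape, 1.0 / rate2)
        if math.log(rng.random()) + gamma_logpdf(y, shape, rate2) > gamma_logpdf(y, shape, rate1):
            return x, y

def coupled_sweep(lam, lam2, beta, beta2, s, t, alpha, gamma0, delta, rng):
    """One coupled systematic scan of the Gibbs sampler above, maximally
    coupling each conditional update of the two chains."""
    K = len(s)
    for n in range(K):
        lam[n], lam2[n] = max_coupling_gamma(alpha + s[n], beta + t[n], beta2 + t[n], rng)
    beta, beta2 = max_coupling_gamma(gamma0 + K * alpha,
                                     delta + sum(lam), delta + sum(lam2), rng)
    return lam, lam2, beta, beta2
```

Once the two chains agree, every conditional update couples with probability one, so the chains stay together after meeting.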
We begin by drawing $1,000$ meeting times to obtain the histogram in Figure \[fig:pump:meetingtimes\]. Following the guidelines of Section \[subsec:variance\], we set $k=7$, corresponding to the $99\%$ quantile of $\tau$, and $m = 10k = 70$. We then generate $10,000$ independent estimates for the test function $h(\lambda_1,\ldots,\lambda_K, \beta) = \beta$. Figure \[fig:pump:tuningk:efficiency\] shows the efficiency of $H_k(X,Y)$, defined as $(\mathbb{E}[\max(k,\tau)] \cdot \mathbb{V}[H_k(X,Y)])^{-1}$, for a range of $k$ values. The choice of $k=7$ appears somewhat conservative relative to the efficiency-maximizing value of $k=4$. Figure \[fig:pump:tuningm:efficiency\] shows the efficiency of $H_{4:m}(X,Y)$ as a function of $m$. The horizontal dashed line represents the efficiency associated with $k=7$ and $m=70$, and illustrates that the efficiency obtained by following the heuristics is close to the maximum that we observe.
It is natural to compare our estimator with the regenerative approach of @mykland1995regeneration, which also provides a way of parallelizing MCMC and of constructing confidence intervals. In that paper the authors show how to use detailed knowledge of a Markov chain to construct regeneration times – random times between which the chain forms independent and identically distributed “tours”. The authors define a consistent estimator for arbitrary test functions, whose asymptotic variance takes a particularly simple form. The estimator is obtained by aggregating over these independent tours. The authors give a set of preferred tuning parameters, which we adopt for our test below.
Applying the regeneration approach to 1,000 Gibbs sampler runs of 5,000 iterations each, we observe on average 1,996 complete tours with an average length of 2.50 iterations. These values agree with the count of 1,967 tours of average length 2.56 reported in @mykland1995regeneration. We also observe a posterior mean estimate for $\beta$ of 2.47 with a variance of $1.89 \times 10^{-4}$ over the 1,000 independent runs, which implies an efficiency value of $(5,000 \cdot
1.89 \times 10^{-4})^{-1} = 1.06$. This exceeds the efficiency of $0.94$ achieved by our estimator with our heuristic choice of $k=7$ and $m=70$. On the other hand, the regeneration approach requires more extensive analytical work with the underlying Markov chain; we refer to @mykland1995regeneration for a detailed description. For reference, the underlying Gibbs sampler achieves an efficiency of $1.08$, based on a long run with $5\times 10^5$ iterations and a burn-in of $10^3$.
Variable selection \[subsec:variableselection\]
-----------------------------------------------
We consider a variable selection setting following @yang2016computational to illustrate the proposed method on a high-dimensional discrete state space.
For integers $p$ and $n$ (potentially with $p>n$) let $Y\in\mathbb{R}^n$ represent a response variable depending on covariates $X_1,\ldots,X_p\in\mathbb{R}^n$. We consider the task of inferring a binary vector $\gamma \in \{0,1\}^p$ representing which covariates to select as predictors of $Y$, with the convention that $X_i$ is selected if $\gamma_i = 1$. For any $\gamma$, we write $|\gamma| = \sum_{i=1}^p \gamma_i$ for the number of selected covariates and $X_\gamma$ for the $n \times |\gamma|$ matrix of covariates chosen by $\gamma$. Inference on $\gamma$ relies on a linear regression model relating $Y$ to $X_\gamma$, $$Y = X_\gamma \beta_\gamma + w,\quad\text{where}\quad w \sim \mathcal{N}(0,\sigma^2 I_n).$$
We assume a prior on $\gamma$ of $\pi(\gamma) \propto p^{-\kappa |\gamma|}
\mathds{1}(|\gamma| \leq s_0)$. This distribution puts mass only on vectors $\gamma$ with fewer than $s_0$ ones, imposing a degree of sparsity. Conditional on $\gamma$, we assume a Normal prior for the regression coefficient vector $\beta_\gamma\in \mathbb{R}^{|\gamma|}$ with zero mean and variance $g \sigma^{2} (X_\gamma' X_\gamma)^{-1}$. We give the precision $\sigma^{-2}$ an improper prior $\pi(\sigma^{-2}) \propto 1/\sigma^{-2}$. This leads to the marginal likelihood $$\begin{aligned}
\pi(Y | X, \gamma) & \propto \frac{(1+g)^{-|\gamma|/2}}{(1 + g(1 - R^2_\gamma))^{n/2}}, \quad
\text{where} \quad R_\gamma^2 = \frac{Y' X_\gamma (X'_\gamma X_\gamma)^{-1} X'_\gamma Y}{Y' Y}.\end{aligned}$$
To approximate the distribution $\pi(\gamma|X,Y)$, @yang2016computational rely on an MCMC algorithm whose kernel $P$ is a mixture of two Metropolis kernels. The first component $P_1(\gamma,\cdot)$ selects a coordinate $i\in \{1,\ldots,p\}$ uniformly at random and flips $\gamma_i$ to $1 - \gamma_i$. The resulting vector $\gamma^\star$ is then accepted with probability $1 \wedge \pi(\gamma^\star|X,Y) / \pi(\gamma|X,Y)$. Sampling a vector $\gamma'$ from the second kernel $P_2(\gamma, \cdot)$ proceeds as follows. If $|\gamma|$ equals zero or $p$, then $\gamma'$ is set to $\gamma$. Otherwise, coordinates $i_0$, $i_1$ are drawn uniformly among $\{j:\gamma_j = 0\}$ and $\{j:\gamma_j = 1\}$, respectively. The proposal $\gamma^\star$ is such that $\gamma^\star_{i_0} = \gamma_{i_1}$ and $\gamma^\star_{i_1} = \gamma_{i_0}$, and $\gamma^\star_j = \gamma_j$ for the other components. Then $\gamma'$ is set to $\gamma^\star$ with probability $1
\wedge \pi(\gamma^\star|X,Y) / \pi(\gamma|X,Y)$, and to $\gamma$ otherwise. Finally the MCMC kernel $P(\gamma,\cdot)$ targets $\pi(\gamma|X,Y)$ by sampling from $P_1(\gamma,\cdot)$ or from $P_2(\gamma,\cdot)$ with equal probability. Note that each MCMC iteration can only benefit from parallel processors to a limited extent, since $|\gamma|$ is always less than $s_0$, itself chosen to be a small value in most settings.
To sample a pair of states $(\gamma',\tilde{\gamma}')$ given $(\gamma,\tilde{\gamma})$, we consider the following coupled version of the MCMC algorithm described above. First, we use a common uniform random variable to decide whether to sample from a coupling of $P_1$ to itself, $\bar{P}_1$, or a coupling of $P_2$ to itself, $\bar{P}_2$. The coupled kernel $\bar{P}_1((\gamma,\tilde{\gamma}),\cdot)$ proposes flipping the same coordinate for both vectors $\gamma$ and $\tilde{\gamma}$ and then uses a common uniform random variable in the acceptance step. For the coupled kernel $\bar{P}_2((\gamma,\tilde{\gamma}),\cdot)$, we need to select two pairs of indices, $(i_0,\tilde{i}_0)$ and $(i_1,\tilde{i}_1)$. We obtain the first pair by sampling from a maximal coupling of the discrete uniform distributions on $\{j:\gamma_j = 0\}$ and $\{j: \tilde{\gamma}_j = 0\}$. This yields indices $(i_0,\tilde{i}_0)$ with the greatest possible probability that $i_0=\tilde{i}_0$. We use the same approach to sample a pair $(i_1,\tilde{i}_1)$ to maximize the probability that $i_1=\tilde{i}_1$. Finally we use a common uniform variable to accept or reject the proposals. If either vector $\gamma$ or $\tilde{\gamma}$ has no zeros or no ones, then it is kept unchanged.
We recall that one can sample from a maximal coupling of two discrete probability distributions $q = (q_1,\ldots,q_N)$ and $\tilde{q} =
(\tilde{q}_1,\ldots, \tilde{q}_N)$ as follows. First, let $c = (c_1, \dots,
c_N)$ be the distribution with probabilities $c_n = (q_n \wedge \tilde{q}_n) /
\alpha$ for $\alpha = \sum_{n=1}^N q_{n}\wedge \tilde{q}_{n}$ and define residual distributions $q'$ and $\tilde{q}'$ with probabilities $q'_n = (q_n -
\alpha c_n) / (1-\alpha)$ and $\tilde{q}'_n = (\tilde{q}_n - \alpha
c_n)/(1-\alpha)$. Then with probability $\alpha$, draw $i \sim c$ and output $(i,i)$. Otherwise draw $i \sim q'$ and $\tilde{i} \sim \tilde{q}'$ and output $(i,\tilde{i})$. The resulting pair follows a maximal coupling of $q$ and $\tilde{q}$, and the procedure involves $\mathcal{O}(N)$ operations for $N$ the size of the state space.
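The procedure just described translates directly into code. The sketch below relies on the fact that `random.choices` accepts unnormalized weights, so the normalizations by $\alpha$ and $1-\alpha$ can be left implicit.

```python
import random

def max_coupling_discrete(q, qt, rng):
    """Sample (i, j) from a maximal coupling of two discrete distributions
    q and qt on {0, ..., N-1}, so that P(i == j) = 1 - TV(q, qt)."""
    overlap = [min(a, b) for a, b in zip(q, qt)]   # alpha * c
    alpha = sum(overlap)
    if rng.random() <= alpha:
        i = rng.choices(range(len(q)), weights=overlap)[0]
        return i, i
    # residual distributions q' and qt', up to normalization by 1 - alpha
    resid_q = [a - c for a, c in zip(q, overlap)]
    resid_qt = [b - c for b, c in zip(qt, overlap)]
    i = rng.choices(range(len(q)), weights=resid_q)[0]
    j = rng.choices(range(len(qt)), weights=resid_qt)[0]
    return i, j
```

By construction the marginal of `i` is exactly `q` and that of `j` is exactly `qt`, while the two indices coincide with the largest possible probability.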
We now consider an experiment similar to those of @yang2016computational. We generate covariates $X_{ij}$ as independent standard Normal variables, define $$\beta^\star = \text{SNR}\sqrt{\sigma_0^2
\frac{\log(p)}{n}}(2,-3,2,2,-3,3,-2,3,-2,3,0,\ldots,0)' \in \mathbb{R}^p,$$ with $\sigma_0^2 = 1$ and $\text{SNR} = 1$, and generate $Y$ given $X$ and $\beta^\star$ from the model, with $n = 500$ and $\sigma^2 = 1$. We set $s_0 = 100$, $g = p^3$, and $\kappa =
0.1$. A sample from the initial distribution $\pi_0$ is drawn by defining a vector of $p$ zeros, sampling $s_0$ coordinates uniformly at random in $\{1,\ldots,p\}$, without replacement, and setting the corresponding entries to $1$ with probability $0.5$.
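The initial distribution $\pi_0$ just described can be sampled as follows; this is a direct sketch of the text, with no additional assumptions beyond the function name.

```python
import random

def sample_initial_gamma(p, s0, rng):
    """Draw from pi_0: start from p zeros, pick s0 distinct coordinates
    uniformly at random, and set each picked entry to 1 with probability 1/2."""
    gamma = [0] * p
    for i in rng.sample(range(p), s0):
        if rng.random() < 0.5:
            gamma[i] = 1
    return gamma
```

Note that the resulting vector satisfies $|\gamma| \leq s_0$ by construction, so $\pi_0$ is supported where the prior places positive mass.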
For values of $p$ between $100$ and $1000$, we run coupled chains until they meet, $R=1,000$ times independently. The meeting times divided by $p$ are shown in Figure \[fig:varselection:meetingtime\], along with dashed lines connecting deciles. From the graph we see that these scaled meeting times are approximately constant, suggesting that meeting times increase linearly with $p$. This is in line with the findings of @yang2016computational, where mixing times are shown to increase linearly with $p$.
In the case $p=1{,}000$, we choose $k=50{,}000$ and $m=100{,}000$ and generate $R = 1{,}000$ independent unbiased estimators of the inclusion probabilities $\mathbb{P}(\gamma_i = 1 | X,Y)$ for all $i\in \{1,\ldots,p\}$. We compare the estimates with an MCMC estimate obtained with $10^6$ iterations and a burn-in of $50{,}000$. The MCMC estimates for the first $20$ variables are represented by a dotted line in Figure \[fig:varselection:estimates\], and $95\%$ confidence intervals obtained from the unbiased estimators are shown as error bars. The probabilities of inclusion of the other variables are all close to zero. We observe an agreement between the proposed estimators and the MCMC estimates, considered as ground truth.
Assessing the convergence of MCMC can be difficult in high-dimensional discrete state spaces. We might not know whether the chain sticks to a small number of states because they are more probable than all other states or because the chain is stuck in a local mode. One can plot the target probabilities along the iterations of the chain and repeat over multiple chains to check that they all converge to the same range of values [see Figure 1 in @yang2016computational]. These considerations take on a different significance in the proposed framework, where confidence intervals are constructed using the Central Limit Theorem for independent variables. We note that the results of this section are sensitive to the prior parameter $\kappa$ and to the degree of correlation between the columns of $X$; we refer to @yang2016computational for a further discussion, and to @johnson2004 for a similar Bayesian variable selection setting where couplings of chains are used to assess convergence.
Cut distribution \[subsec:cut-distribution\]
--------------------------------------------
Our proposed estimator can be used to approximate the cut distribution, which poses a significant challenge for existing MCMC methods [@plummer2014cuts; @jacob2017modularization].
Consider two models, one with parameters $\theta_{1}$ and data $Y_{1}$ and another with parameters $\theta_{2}$ and data $Y_{2}$, where the likelihood of $Y_2$ may depend on both $\theta_{1}$ and $\theta_{2}$. For instance the first model could be a regression with data $Y_1$ and coefficients $\theta_1$, and the second model could be another regression whose covariates are the residuals, coefficients, or fitted values of the first regression [@pagan1984econometric; @murphy2002estimation]. In principle one could introduce an encompassing model and conduct joint inference on $\theta_{1}$ and $\theta_{2}$ via the posterior distribution. However, misspecification of either model would then lead to misspecification of the joint model and misleading quantification of uncertainties, as noted in several studies [e.g. @liu2009; @plummer2014cuts; @lunn2009combining; @mccandless2010cutting; @zigler2016central; @blangiardo2011Bayesian].
The cut distribution [@spiegelhalter2003winbugs; @plummer2014cuts] allows the propagation of the uncertainty about $\theta_1$ to inference on $\theta_2$ while preventing misspecification in the second model from affecting estimation in the first. The cut distribution is defined as $$\pi_{\text{cut}}\left(\theta_{1},\theta_{2}\right)=\pi_{1}(\theta_{1})\pi_{2}(\theta_{2}|\theta_{1}),$$ where $\pi_{1}(\theta_{1})$ refers to the distribution of $\theta_1$ given $Y_1$ in the first model alone, and $\pi_{2}(\theta_{2}|\theta_{1})$ refers to the distribution of $\theta_{2}$ given $Y_{2}$ and $\theta_{1}$ in the second model. Often, the density $\pi_{2}(\theta_{2}|\theta_{1})$ can only be evaluated up to a constant in $\theta_2$, which may vary as a function of $\theta_1$. This makes the cut distribution intractable for standard MCMC algorithms [@plummer2014cuts]. However the cut distribution has been used in practice, as in the references above, and can be justified with decision-theoretic arguments [@jacob2017modularization].
A naive approach consists of first running an MCMC algorithm targeting $\pi_{1}(\theta_{1})$, obtaining a sample $(\theta_{1}^{n})_{n=1}^{N_{1}}$, perhaps after discarding a burn-in period and thinning the chain. Then, for each $\theta_{1}^{n}$, one can run an MCMC algorithm targeting $\pi_{2}(\theta_{2}|\theta_{1}^{n})$, yielding $N_{2}$ samples. One might again discard some burn-in and thin the chains, or just keep the final state of each chain. The resulting joint samples approximate the cut distribution. However, the validity of this approach relies on a double limit in $N_{1}$ and $N_{2}$. Diagnosing convergence may also be difficult given the number of chains in the second stage, each of which targets a different distribution $\pi_2(\theta_2|\theta_1^n)$.
If one could sample $\theta_1 \sim \pi_1$ and $\theta_2 |\theta_1 \sim \pi_2(\cdot |\theta_1)$, then the pair $(\theta_1,\theta_2)$ would follow the cut distribution. The same two-stage rationale can be applied in the proposed framework. Consider a test function $(\theta_{1},\theta_{2})\mapsto h(\theta_{1},\theta_{2})$. Writing $\mathbb{E}_\text{cut}$ for expectations with respect to $\pi_{\text{cut}}$, the law of iterated expectations yields $$\begin{aligned}
\mathbb{E}_\text{cut}[h(\theta_1,\theta_2)] & =\int\left(\int h(\theta_{1},\theta_{2})\pi_{2}(d\theta_{2}|\theta_{1})\right)\pi_{1}(d\theta_{1})
=\int\bar{h}(\theta_{1})\pi_{1}(d\theta_{1}).\end{aligned}$$ Here $\bar{h}(\theta_{1})=\int
h(\theta_{1},\theta_{2})\pi_{2}(d\theta_{2}|\theta_{1})$. In the proposed framework, one can construct an unbiased estimator of $\bar{h}(\theta_1)$ for any $\theta_1$, and plug these estimators into an unbiased estimator of $\int \bar{h}(\theta_1)\pi_1(d\theta_1)$. This is even clearer using the signed measure representation of Section \[sec:signedmeasure\]: one can obtain a signed measure $\hat{\pi}_1(\cdot) = \sum_{\ell=1}^N \omega_\ell \delta_{\theta_{1,\ell}}(\cdot)$ approximating $\pi_1$, and then obtain an unbiased estimator of $\bar{h}(\theta_{1,\ell})$ for each $\ell$, denoted by $\bar{H}_\ell$. Then the weighted average $\sum_{\ell=1}^N \omega_\ell \bar{H}_\ell$ is an unbiased estimator of $\mathbb{E}_{\text{cut}}[h(\theta_1,\theta_2)]$ by the law of iterated expectations. Such estimators can be generated independently in parallel, and their average provides a consistent approximation of an expectation with respect to the cut distribution.
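The plug-in construction can be sketched generically. The two callables below are hypothetical interfaces: one returns a signed-measure approximation $\{(\omega_\ell, \theta_{1,\ell})\}$ of $\pi_1$, the other an unbiased estimate of $\bar{h}(\theta_1)$; the toy example that follows uses exact sampling in both stages purely for illustration.

```python
import random

def cut_estimator(signed_measure_pi1, unbiased_hbar, rng):
    """Unbiased estimator of E_cut[h] = int hbar(theta1) pi_1(d theta1):
    combine the weights of a signed-measure approximation of pi_1 with
    independent unbiased estimates of hbar at each atom."""
    atoms = signed_measure_pi1(rng)  # list of (omega, theta1) pairs
    return sum(w * unbiased_hbar(th, rng) for w, th in atoms)

# Toy illustration with exact sampling in both stages: for
# h(theta1, theta2) = theta2 with theta2 | theta1 ~ N(theta1, 1)
# and theta1 ~ N(0, 1), we have E_cut[h] = E[theta1] = 0.
def exact_pi1(rng):
    return [(1.0, rng.gauss(0.0, 1.0))]

def exact_hbar(theta1, rng):
    return rng.gauss(theta1, 1.0)  # unbiased for hbar(theta1) = theta1

rng = random.Random(9)
estimates = [cut_estimator(exact_pi1, exact_hbar, rng) for _ in range(20000)]
```

In practice both interfaces would be backed by coupled chains: the first stage by the signed-measure estimator of $\pi_1$, and the second by independent unbiased estimators of $\bar{h}$ at each atom.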
We consider the example described in @plummer2014cuts, inspired by an investigation of the international correlation between human papillomavirus (HPV) prevalence and cervical cancer incidence [@maucort2008international]. The first module concerns HPV prevalence, with data independently collected in $13$ countries. The parameter $\theta_{1}=(\theta_{1,1},\ldots,\theta_{1,13})$ receives a Beta$(1,1)$ prior distribution independently for each component. The data $(Y_{1},\ldots,Y_{13})$ consist of $13$ pairs of integers: the first represents the number of women infected with high-risk HPV, and the second represents the population size. The likelihood specifies a Binomial model for $Y_{i}$, independently for each component $i$. The first posterior is given by a product of Beta distributions.
The second module concerns the relationship between HPV prevalence and cancer incidence, and posits a Poisson regression. The parameters are $\theta_{2}=(\theta_{2,1},\theta_{2,2})$ and receive a Normal prior with zero mean and variance $10^{3}$. The likelihood in this module is given by $$Z_{1,i} \sim\text{Poisson}(\exp\left(\theta_{2,1}+\theta_{1,i}\theta_{2,2}+Z_{2,i}\right))
\quad \text{for } i\in\left\{ 1,\ldots,13\right\},$$ where the data $(Z_{1,i},Z_{2,i})_{i=1}^{13}$ are pairs of integers. The first component represents numbers of cancer cases, while the second is a number of woman-years of follow-up. The Poisson regression model might be misspecified, which motivates departures from the joint model [@plummer2014cuts].
In our setting, we can draw directly from the first posterior, denoted by $\pi_1(\theta_1)$. We can thus obtain a sample $(\theta_{1}^{n})_{n=1}^{N}$ approximating $\pi_{1}(\theta_{1})$. For each $\theta_{1}^{n}$, we consider a Metropolis–Hastings algorithm targeting $\pi_{2}(\theta_{2}|\theta_{1}^{n})$, using a Normal random walk proposal with variance $\Sigma$. We set the initial distribution $\pi_0$ to be standard Normal, and the variance $\Sigma$ to be the identity matrix. From $1,000$ coupled chains, we observe a $95\%$ quantile of the meeting times at $381$. We set $k = 381$ and $m=5k$, and estimate the posterior mean and variance using $R=1,000$ independent estimators. We use these estimated moments to propose a new Normal distribution for $\pi_0$, and we set $\Sigma$ to the estimated posterior variance. We run $R=10,000$ coupled chains with these new settings, and observe the meeting times shown in the histogram of Figure \[fig:plummer:meetingtime\]. The average meeting time is $15$, with a maximum of $110$. Applying the $95\%$ quantile heuristic, we set $k=49$ and $m=10k$. The two histograms of Figures \[fig:plummer:histogramcomponent1\] and \[fig:plummer:histogramcomponent2\] show that the marginal distributions are approximated accurately. The overlaid red curves indicate the “ground truth”, obtained by running $1,000$ steps of MCMC targeting $\pi_2(\theta_2|\theta_1)$ for each of $10,000$ independent draws of $\theta_{1}$ from $\pi_1$ and keeping the final state of each chain.
Discussion \[sec:discussion\]
=============================
With the telescopic sum argument of @glynn2014exact and couplings of MCMC algorithms, we have introduced unbiased estimators of integrals with respect to the target distribution, with a variance that can be controlled with tuning parameters ($k$ and $m$). In practice, we propose to choose $k$ as a large quantile of the meeting times. Then $m$ can be set as a large multiple of $k$. Improving on these simple guidelines stands as a subject for future research. In numerical experiments we have argued that the proposed estimators yield a practical way of parallelizing MCMC computations, applicable to a wide range of settings; the supplementary materials contain further experiments. We stress that coupling pairs of Markov chains does not improve their marginal mixing properties, and that misleading confidence intervals are obtained for any fixed computational budget if the initial distribution and the MCMC kernels are poorly designed, as in Section \[subsec:Random-Walk-Metropolis-bimodal\]. Misleading results would also be obtained if an impatient user stops the coupled chains before they meet.
We have considered different couplings of MCMC algorithms, based on maximal couplings and common random numbers. These allow us to compute unbiased estimators without requiring further analytical knowledge about the target distribution or about the MCMC kernels. More sophisticated couplings are also possible, and include optimal transport couplings to bring two chains together quickly and, subsequently, facilitate exact meeting using a maximal coupling.
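For concreteness, a maximal coupling of two distributions $p$ and $q$ can be sampled by a standard rejection scheme. The sketch below (our illustration, with hypothetical names) draws pairs with the correct marginals and meeting probability $\mathbb{P}(X=Y)=1-d_{\text{TV}}(p,q)$, verified empirically on two unit-variance Normals:

```python
import numpy as np
from math import erf, sqrt

def maximal_coupling(logp, logq, sample_p, sample_q, rng):
    """Rejection sampler for a maximal coupling of densities p and q."""
    x = sample_p(rng)
    if np.log(rng.uniform()) <= logq(x) - logp(x):
        return x, x                      # meeting event, probability 1 - TV(p, q)
    while True:
        y = sample_q(rng)
        if np.log(rng.uniform()) > logp(y) - logq(y):
            return x, y

# p = N(0, 1), q = N(1, 1); normalizing constants cancel in the log-ratios
logp = lambda z: -0.5 * z**2
logq = lambda z: -0.5 * (z - 1.0)**2
rng = np.random.default_rng(0)
pairs = [maximal_coupling(logp, logq,
                          lambda r: r.normal(0.0, 1.0),
                          lambda r: r.normal(1.0, 1.0), rng)
         for _ in range(50000)]
meet_freq = np.mean([x == y for x, y in pairs])

# for these two Normals, d_TV = 2*Phi(1/2) - 1, so P(X = Y) is about 0.617
Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
tv = 2.0 * Phi(0.5) - 1.0
```

The same primitive can couple the proposal distributions of two Metropolis–Hastings chains, since it only requires evaluating and sampling the two proposal densities.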
Regarding convergence diagnostics, the proposed framework yields the following representation for the total variation distance between $\pi_k$ and $\pi$, where $\pi_k$ denotes the marginal distribution of $X_k$: $$\begin{aligned}
d_{\text{TV}}(\pi_k,\pi) & = & \frac{1}{2} \sup_{h:\; |h|\leq 1} \left| \mathbb{E}[h(X_k)] - \mathbb{E}_{\pi}[h(X)]\right|\\
& =& \frac{1}{2}\sup_{h:\;|h|\leq 1} \left| \mathbb{E}\left[\sum_{t=k+1}^{\tau-1}\left(h(X_t)-h(Y_{t-1})\right) \right]\right|.\end{aligned}$$ Here the supremum ranges over measurable functions $h$ bounded by one, and the second equality holds under the stated assumptions. The equality above has several implications. For instance, since the sum contains at most $\max(0,\tau-k-1)$ nonzero terms, each bounded by $2$, the triangle inequality yields $d_{\text{TV}}(\pi_k,\pi) \leq \min(1,
\mathbb{E}[\max(0, \tau - k - 1)])$, and we can approximate $\mathbb{E}[\max(0, \tau - k - 1)]$ by Monte Carlo for a range of $k$ values. On the other hand, considering any particular function $h$ yields a lower bound on $d_{\text{TV}}(\pi_k,\pi)$.
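Since the telescoping sum in this representation contains at most $\max(0,\tau-k-1)$ nonzero terms, the resulting upper bound can be estimated by Monte Carlo from a sample of meeting times. The helper below is a hypothetical sketch; the geometric meeting times are synthetic stand-ins for those collected from independent coupled chains:

```python
import numpy as np

def tv_upper_bounds(taus, ks):
    """Estimate min(1, E[max(0, tau - k - 1)]) for each k from meeting times."""
    taus = np.asarray(taus, dtype=float)
    return np.array([min(1.0, np.mean(np.maximum(0.0, taus - k - 1.0)))
                     for k in ks])

# synthetic meeting times as a stand-in; in practice these would come
# from independent runs of the coupled chains
rng = np.random.default_rng(0)
taus = 1 + rng.geometric(0.1, size=10000)
ks = np.arange(0, 100)
bounds = tv_upper_bounds(taus, ks)       # non-increasing in k, within [0, 1]
```

Plotting `bounds` against `ks` gives a cheap visual diagnostic of how large $k$ must be before the marginal distribution of $X_k$ is plausibly close to $\pi$.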
Thanks to the potential for complete parallelization, the proposed framework can facilitate consideration of MCMC kernels that might be too expensive for serial implementation. For instance, one can improve MH-within-Gibbs samplers by performing more MH steps per component update (as in Section \[subsec:Scaling-dimension\]), HMC by using smaller step-sizes in the integrator [@heng2017unbiased], and particle MCMC by using more particles at each step [@andrieu:doucet:holenstein:2010; @jacob2017smoothing]. We expect the optimal tuning of MCMC kernels to differ in the proposed framework from their optimal tuning when used marginally.
There are also settings in which unbiasedness itself might be appealing. For instance, if MCMC is used to approximate a gradient in a stochastic gradient algorithm, or an expectation in an Expectation-Maximization algorithm, unbiasedness could have benefits for the optimizer [see e.g. @doucet2017asymptotic]. Moreover, in the context of Bayesian inference with misspecified models made of multiple components [@liu2009; @jacob2017modularization], we have demonstrated that unbiasedness leads to convenient approximations of the cut distribution [@plummer2014cuts].
The design of generic and efficient MCMC kernels is a topic of active ongoing research [e.g. @andrieu2009pseudo; @murray2010elliptical; @goodman2010ensemble; @pollock2016scalable; @vanetti2017piecewise; @titsias2017hamming and references therein]. These kernels can be used in the proposed framework as long as appropriate couplings can be found. The particular case of the conditional particle filter is explored in @jacob2017smoothing and the case of Hamiltonian Monte Carlo in @heng2017unbiased; both require specific couplings and associated arguments to establish the validity of the resulting estimators.
### Acknowledgement. {#acknowledgement. .unnumbered}
The authors are grateful to Jeremy Heng and Luc Vincent-Genod for useful discussions. The authors gratefully acknowledge support by the National Science Foundation through grants DMS-1712872 (Pierre E. Jacob), and DMS-1513040 (Yves F. Atchadé).
Proofs
======
Proof of Proposition \[prop:unbiased\]
--------------------------------------
We present the proof for $H_0(X,Y)$; similar reasoning holds for $H_k(X,Y)$. We follow the same arguments as in @glynn2014exact [@vihola2015unbiased]. To study $H_{0}(X,Y)=h(X_{0})+\sum_{t=1}^{\tau-1} (h(X_{t})-h(Y_{t-1}))$, we introduce $\Delta_{0}=h\left(X_{0}\right)$ and $\Delta_{t}=h\left(X_{t}\right)-h\left(Y_{t-1}\right)$ for all $t\ge1$, and define $H_{0}^{n}(X,Y)=\sum_{t=0}^{n}\Delta_{t}$. For simplicity we assume that $\Delta_{t}\in\mathbb{R}$, which corresponds to studying the component-wise behavior of $H_{0}(X,Y)$.
We have $\mathbb{E}[\tau]<\infty$ from Assumption \[assumption:meetingtime\], so that the computing time of $H_0(X,Y)$ has a finite expectation. Together with Assumption \[assumption:sticktogether\], this implies that $H_{0}^{n}(X,Y)\to H_{0}(X,Y)$ almost surely as $n\to\infty$. We now show that $H_{0}^{n}(X,Y)$ is a Cauchy sequence in $L_{2}$, the complete space of random variables with finite first two moments, that is, $\sup_{n^{\prime}\geq n}\mathbb{E}[(H_{0}^{n^{\prime}}(X,Y)-H_{0}^{n}(X,Y))^{2}]\to0$ as $n\to\infty$. For $n^{\prime}\geq n$, we compute $$\mathbb{E}[(H_{0}^{n^{\prime}}(X,Y)-H_{0}^{n}(X,Y))^{2}]=\sum_{s=n+1}^{n^{\prime}}\sum_{t=n+1}^{n^{\prime}}\mathbb{E}[\Delta_{s}\Delta_{t}],$$ and consider each term $\mathbb{E}[\Delta_{s}\Delta_{t}]$ for $(s,t)\in\{n+1,\ldots,n^{\prime}\}^{2}$. The Cauchy-Schwarz inequality implies $\mathbb{E}[\Delta_{s}\Delta_{t}]\leq\left(\mathbb{E}[\Delta_{s}^{2}]\cdot\mathbb{E}[\Delta_{t}^{2}]\right)^{1/2}$, and noting that $\mathbb{E}[\Delta_{t}^{2}]=\mathbb{E}[\Delta_{t}^{2}\cdot\mathds{1}(\tau>t)]$, we can apply Hölder’s inequality with $p=1+\eta/2$, $q=(2+\eta)/\eta$ for any $\eta>0$ to obtain
$$\mathbb{E}[\Delta_{t}^{2}\cdot\mathds{1}(\tau>t)]\leq\mathbb{E}[|\Delta_{t}|^{2+\eta}]^{\nicefrac{1}{(1+\eta/2)}}\cdot\mathbb{E}[\mathds{1}(\tau>t)]^{\nicefrac{\eta}{(2+\eta)}}\leq\mathbb{E}[|\Delta_{t}|^{2+\eta}]^{\nicefrac{1}{(1+\eta/2)}}\cdot\left(C\delta^{t}\right)^{\nicefrac{\eta}{(2+\eta)}}.$$ Here we have used Assumption \[assumption:meetingtime\] to bound $\mathbb{E}[\mathds{1}(\tau>t)]$. We can also use Assumption \[assumption:marginaldistributions\] together with Minkowski’s inequality to bound $\mathbb{E}[|\Delta_{t}|^{2+\eta}]^{\nicefrac{1}{(1+\eta/2)}}$ by a constant $\tilde{C}$, for all $t\geq 0$. Defining $\tilde{\delta}=\delta^{\nicefrac{\eta}{(2+\eta)}}\in(0,1)$ then gives the bound $\mathbb{E}[\Delta_{t}^{2}]\leq\tilde{C}\tilde{\delta}^{t}$ for all $t\geq 0$. This implies $\mathbb{E}[(H_{0}^{n^{\prime}}(X,Y)-H_{0}^{n}(X,Y))^{2}]\leq\bar{C}\tilde{\delta}^{n}$ for some other constant $\bar{C}$, and thus $(H_{0}^{n}(X,Y))$ is Cauchy in $L_{2}$. This proves that its limit $H_{0}(X,Y)$ has finite first and second moments. Assumption \[assumption:marginaldistributions\] implies that $\lim_{n\to\infty}\mathbb{E}[H_{0}^{n}(X,Y)]=\mathbb{E}_\pi[h(X)]$, by a telescopic sum argument, so we conclude that $\mathbb{E}[H_{0}(X,Y)]=\mathbb{E}_\pi[h(X)]$. We can also obtain an explicit representation of $\mathbb{E}[H_0(X,Y)^2]$ as the limit of $\mathbb{E}[H_{0}^{n}(X,Y)^{2}]$ when $n\to\infty$.
Proof of Proposition \[prop:glivenko\]
--------------------------------------
We adopt a similar strategy to that of @glynn2014exact, Section 4. For the $H_k(X,Y)$ case, the unbiased estimator of $F(s) = \mathbb{P}_\pi(X \leq
s)$ over $R$ samples is of the form $$\begin{aligned}
\hat{F}^R(s) & = \frac{1}{R} \sum_{r=1}^R \left(
\mathds{1}(X_k^{(r)} \leq s) + \sum_{\ell=k+1}^{\tau^{(r)}} (\mathds{1}(X_\ell^{(r)} \leq s) - \mathds{1}(Y_{\ell-1}^{(r)} \leq s) )
\right).
\end{aligned}$$ Here $\tau^{(r)}$ denotes the meeting time of the $r$-th independent run. We want to show that as $R \to \infty$, ${\sup_s |F(s) - \hat{F}^R(s)| \xrightarrow{a.s.} 0}$. Define $G_{X,k}^R(s) = R^{-1} \sum_{r=1}^R
\mathds{1}(X_k^{(r)} \leq s)$. For $\ell > k$, define $$G_{X,\ell}^R(s) = \frac{1}{R} \sum_{r=1}^R \mathds{1}(X_\ell^{(r)} \leq s) \cdot \mathds{1}(\ell \leq \tau^{(r)}),
\qquad
G_{Y,\ell}^R(s) = \frac{1}{R} \sum_{r=1}^R \mathds{1}(Y_{\ell-1}^{(r)} \leq s) \cdot \mathds{1}(\ell \leq \tau^{(r)}).$$ Then $\hat{F}^R(s) = G_{X,k}^R(s) + \sum_{\ell=k+1}^\infty (G_{X,\ell}^R(s) - G_{Y,\ell}^R(s))$. By the standard Glivenko-Cantelli theorem, as $R \to \infty$ we have $$\sup_s \left| G_{X,k}^R(s) - \mathbb{P}(X_k \leq s) \right| \xrightarrow{a.s.} 0,
\qquad
\sup_s \left| G_{X,\ell}^R (s) - \mathbb{E}[\mathds{1}(X_\ell \leq s) \cdot \mathds{1}(\ell \leq \tau)] \right| \xrightarrow{a.s.} 0,$$ $$\sup_s \left| G_{Y, \ell}^R (s) - \mathbb{E}[\mathds{1}(Y_{\ell-1} \leq s) \cdot \mathds{1}(\ell \leq \tau)] \right| \xrightarrow{a.s.} 0,$$ for each $\ell > k$. Next, we observe that, for all $s,\ell$, $$\mathbb{E}[( \mathds{1}(X_\ell \leq s) - \mathds{1}(Y_{\ell-1} \leq s)) \cdot \mathds{1}(\tau \geq \ell)] = \mathbb{P}(X_\ell \leq s) - \mathbb{P}(X_{\ell-1} \leq s).$$ This holds for a simple reason in our setting. For any $h(\cdot)$ and any $\ell$, $$\begin{aligned}
\mathbb{E}[h(X_\ell) - h(Y_{\ell-1})]
& = \mathbb{E}[(h(X_\ell) - h(Y_{\ell-1}))\mathds{1}(\tau>\ell)] + \mathbb{E}[(h(X_\ell) - h(Y_{\ell-1}))\mathds{1}(\tau \leq \ell)] \\
& = \mathbb{E}[(h(X_\ell) - h(Y_{\ell-1}))\mathds{1}(\tau>\ell)]
\end{aligned}$$ since if $\tau \leq \ell$ then $X_\ell = Y_{\ell-1}$ by Assumption \[assumption:sticktogether\]. Applying this result with $h(\cdot) = \mathds{1}(\cdot \leq s)$ yields the desired statement.
The above implies that for any finite $i \geq k$ we have $$\begin{aligned}
& \left|
G_{X,k}^R(s) - \mathbb{P}(X_k \leq s) +
\sum_{\ell=k+1}^i \Big( G_{X,\ell}^R(s) - G_{Y,\ell}^R(s)
- \mathbb{E}[(\mathds{1}(X_\ell \leq s) - \mathds{1}(Y_{\ell-1} \leq s) ) \cdot \mathds{1}(\ell \leq \tau)] \Big) \right| \\
& = \left| G_{X,k}^R(s) - \mathbb{P}(X_k \leq s) + \sum_{\ell=k+1}^i \Big( G_{X, \ell}^R (s) - G_{Y,\ell}^R(s)
- (\mathbb{P}(X_\ell \leq s) - \mathbb{P}(X_{\ell-1} \leq s)) \Big) \right| \\
&= \left|
\Big(G_{X,k}^R(s) + \sum_{\ell=k+1}^i (G_{X,\ell}^R(s) - G_{Y,\ell}^R(s))\Big) - \mathbb{P}(X_i \leq s) \right|
\end{aligned}$$ Hence, for each fixed $i$, almost surely as $R \to \infty$, $$\sup_s \left|
\Big(G_{X,k}^R(s) + \sum_{\ell=k+1}^i (G_{X, \ell}^R(s) - G_{Y, \ell}^R(s))\Big) - \mathbb{P}(X_i \leq s) \right| \to 0.$$ We have assumed that $(X_t)_{t\geq0}$ converges to $\pi$ in total variation, which implies $\sup_s | \mathbb{P}(X_i \leq s) - F(s)| \to 0$ as $i\to\infty$. Also, we note that for all $s$, $$\left| \sum_{\ell > i} G_{X, \ell}^R(s) - G_{Y, \ell}^R(s) \right|
\leq
\sum_{\ell > i} \frac{1}{R} \sum_{r=1}^R \mathds{1}(\ell \leq \tau^{(r)}) \to
\sum_{\ell > i} \mathbb{P}(\ell \leq \tau)$$ almost surely by the strong law of large numbers. Assumption \[assumption:meetingtime\] implies that this quantity goes to 0 as $i \to \infty$.
Combining these observations with the result obtained for finite $i$, we conclude that $\sup_s |\hat{F}^R(s) - F(s) | \to 0$ as $R \to \infty$, almost surely. The reasoning holds for the function $\hat{F}^{R}$ corresponding to $H_{k:m}(X,Y)$ instead of $H_k(X,Y)$; such a function is simply the average of a finite number of functions associated with $H_\ell(X,Y)$ for $\ell \in \{k,\ldots,m\}$.
Proof of Proposition \[prop1\]
------------------------------
Throughout the proof $C$ denotes a generic finite constant whose actual value may change from one appearance to the next. We will use the usual Markov chain notation. In particular if $f:\;\mathcal{X}\times\mathcal{X}\to\mathbb{R}$ is a measurable function then $\bar P [f](x,y) :=\int_{\mathcal{X}\times\mathcal{X}}
\bar P((x,y),dz) f(z)$. Note that from the construction of $\bar P$, if $f$ depends only on $x$ (resp. $y$), that is $f(x,y) = f(x)$ (resp. $f(x,y)=f(y)$), then $\bar P [f](x,y) = Pf(x)$ (resp. $\bar P[f](x,y) = Pf(y)$).
For $k\geq 1$, we consider the general problem of bounding $\mathbb{E}[S_{k}^2]$, where $S_{k}$ is of the form $$S_{k} = \mathds{1}(\tau>k) \sum_{t=k}^{\tau-1} b_t\left(h(X_{t})-h(Y_{t-1})\right),$$ for some arbitrary bounded sequence $(b_t)_{t\geq 0}$. Fix an integer $N\geq k$, and set $$S_k^{(N)} = \mathds{1}(\tau>k) \sum_{t=k}^{N\wedge\tau-1} b_t\left(h(X_{t})-h(Y_{t-1})\right).$$ The same argument as in the proof of Proposition \[prop:unbiased\] can be applied here and shows that $S_k^{(N)}\in L^2$, is a Cauchy sequence in $L_2$ that converges to $S_k$, as $N\to\infty$, so that $$\mathbb{E}[S_k^2] = \lim_{N\to\infty} \mathbb{E}\left[(S_k^{(N)})^2\right].$$ Since $|h|_{V^\beta}:=\sup_x |h(x)|/V^\beta(x)<\infty$, and under Assumption \[assumption:drift\], there exists a measurable function $g:\;\mathcal{X} \to\mathbb{R}$ such that $|g|_{V^\beta}<\infty$, and $g-Pg=h-\pi(h)$. To see this, first note that the drift condition (\[assumption:drift\]) implies that for any $\beta\in(0,1/2)$, we have $PV^\beta(x) \leq \lambda^\beta V^\beta(x) + b^\beta\mathds{1}(x\in \mathcal{C})$, for all $x\in\mathcal{X}$. Indeed by Jensen’s inequality $PV^\beta(x) \leq (PV(x))^\beta \leq (\lambda V(x) + b\mathds{1}(x\in \mathcal{C}))^\beta \leq \lambda^\beta V^\beta(x) + b^\beta \mathds{1}(x\in \mathcal{C})$, using the fact that for all $x,y\geq 0$ and $\alpha\in[0,1]$, $(x+y)^\alpha \leq x^\alpha +y^\alpha$. The drift condition in $V^\beta$ together with the fact that $P$ is $\phi$-irreducible and aperiodic implies that there exists $\rho_\beta\in (0,1)$, $C_\beta<\infty$ such that for all $x\in\mathcal{X}$, $n\geq 0$, $$\label{geo:rate}
\|P^n(x,\cdot)-\pi\|_{V^\beta} \leq C_\beta V^\beta(x) \rho_\beta^n,$$ where for a function $W:\;\Xset\to [1,\infty)$, the $W$-norm between two probability measures $\mu,\nu$ is defined as $$\|\mu-\nu\|_W : = \sup_{f\textsf{ meas.}:\;|f|_W\leq 1} \;|\mu(f)-\nu(f)|,$$ and $|f|_W : = \sup_x |f(x)|/W(x)$. This result can be found in Theorem 15.0.1 of [@meyn:tweedie:1993]. It follows from (\[geo:rate\]) that $\sum_{j\geq 0} |P^j(h-\pi(h))(x)| \leq |h|_{V^\beta}\sum_{j\geq 0} \|P^j(x,\cdot) - \pi\|_{V^\beta} \leq \frac{C_\beta |h|_{V^\beta} V^\beta(x)}{1-\rho_\beta}<\infty$. Hence the function $$g(x) = \sum_{j\geq 0} P^j(h-\pi(h))(x),\;\;\;x\in\mathcal{X},$$ is well-defined and measurable (as a limit of a sequence of measurable functions) and satisfies $|g(x)| \leq C_\beta |h|_{V^\beta} V^\beta(x)/ (1-\rho_\beta)$. And since $PV^\beta$ is finite everywhere, by Lebesgue’s dominated convergence we deduce that $Pg$ is finite everywhere as well and $$Pg(x) = \int g(y) P(x,dy) = \sum_{j\geq 0} \int (P^j(h-\pi(h))(y))P(x,dy) = \sum_{j\geq 1} P^j(h-\pi(h))(x).$$ Hence $g - Pg = h- \pi(h)$, as claimed.
Hence, with $Z_t := (X_{t},Y_{t-1})$, and $\bar g(x,y) = g(x)-g(y)$, we have $$h(X_t)-h(Y_{t-1}) = \bar g(Z_t) - \bar P [\bar g](Z_t).$$ Using this and a telescoping sum argument, we write $$\begin{aligned}
S_k^{(N)} &=& \sum_{t=k}^{N-1} b_{t}\left(h(X_t)-h(Y_{t-1})\right)\mathds{1}(\tau> t)\\
& = & \sum_{t=k}^{N-1}b_t\left(\bar g(Z_t) - \bar P[\bar g](Z_t)\right)\mathds{1}(\tau> t)\\
& = & \sum_{t=k}^{N-1} b_{t}\left(\bar g(Z_{t+1}) - \bar P [\bar g](Z_t)\right)\mathds{1}(\tau> t) \\
&&+ \sum_{t=k}^{N-1} \left(b_{t}\bar g(Z_t)\mathds{1}(\tau> t) - b_{t+1}\bar g(Z_{t+1})\mathds{1}(\tau> t+1)\right)\\
&& + \sum_{t=k}^{N-1} \left(b_{t+1} \mathds{1}(\tau> t+1) - b_t\mathds{1}(\tau> t)\right) \bar g(Z_{t+1}).\end{aligned}$$ Since $\bar g(Z_{t+1}) = 0$ on $\{\tau=t+1\}$, the last term in the above display reduces to $\sum_{t=k}^{N-1} (b_{t+1}-b_t) \bar
g(Z_{t+1})\mathds{1}(\tau> t+1)$, and we obtain $$\begin{gathered}
\label{eq:prop31:S2}
S_k^{(N)} = \sum_{t=k}^{N-1} b_{t}\left(\bar g(Z_{t+1}) - \bar P [\bar g](Z_t)\right)\mathds{1}(\tau> t) \\
+ b_k\bar g(Z_k)\mathds{1}(\tau> k) - b_N\bar g(Z_N)\mathds{1}(\tau> N) + \sum_{t=k}^{N-1} (b_{t+1}-b_t) \bar g(Z_{t+1})\mathds{1}(\tau> t+1).\end{gathered}$$ Let $\mathcal{F}_t$ denote the sigma-algebra generated by the variables $X_0,(X_1,Y_0),\ldots,(X_t,Y_{t-1})$. Note that $\{\tau>t\}$ belongs to $\mathcal{F}_t$. Hence $$\begin{aligned}
\mathbb{E}\left[b_{t}\left(\bar g(Z_{t+1}) - \bar P [\bar g](Z_t)\right)\mathds{1}(\tau> t)\vert \F_{t}\right]
&=& b_t \mathds{1}(\tau> t)\mathbb{E}\left[\bar g(Z_{t+1}) - \bar P[ \bar g](Z_t)\vert \F_{t}\right] \\
& = & b_t \mathds{1}(\tau> t)\left(\bar P[\bar g](Z_t) - \bar P [\bar g](Z_t) \right)
= 0.\end{aligned}$$ In other words, $\left\{\left(\sum_{t=k}^{j} b_{t}\left(\bar g(Z_{t+1}) - \bar
P [\bar g](Z_t)\right)\mathds{1}(\tau> t),\mathcal{F}_j\right),\;k\leq j\leq
N-1\right\}$ is a martingale. The orthogonality of the martingale increments gives $$\mathbb{E}\left[\left(\sum_{t=k}^{N-1} b_{t}\left(\bar g(Z_{t+1}) - \bar P [\bar g](Z_t)\right)\mathds{1}(\tau> t)\right)^2\right] = \sum_{t=k}^{N-1} b_t^2\mathbb{E}\left[\left(\bar g(Z_{t+1}) - \bar P [\bar g](Z_t)\right)^2\mathds{1}(\tau> t)\right].$$ We use this together with (\[eq:prop31:S2\]), the convexity of the squared norm, and Minkowski’s inequality to conclude that $$\begin{gathered}
\label{proof:propvar:eq2}
\mathbb{E}\left[(S_k^{(N)})^2\right] \leq 4\sum_{t=k}^{N-1}
b_t^2\mathbb{E}\left[\left(\bar g(Z_{t+1}) - \bar P [\bar
g](Z_t)\right)^2\mathds{1}(\tau> t)\right] + 4b_k^2\mathbb{E}\left[\bar
g^2(Z_k)\mathds{1}(\tau> k)\right]\\
+4b_N^2\mathbb{E}\left[\bar g^2(Z_N)\mathds{1}(\tau> N)\right] +4\left[\sum_{t=k}^{N-1} |b_{t+1}-b_t|\mathbb{E}^{1/2}\left( \bar g^2(Z_{t+1})\mathds{1}(\tau> t+1)\right)\right]^2.\end{gathered}$$ Assumption \[assumption:drift\] together with $\pi_0(V)<\infty$, implies that $$\label{eq:control:moment}
\sup_{n\geq 0}\mathbb{E}[V(X_n)]\leq C,$$ for some finite constant $C$. Indeed, $\mathbb{E}[V(X_n)] = \int \pi_0( dx) P^n V(x)$, and a repeated application of the drift condition (\[assumption:drift\]) implies that $P^n V(x) \leq \lambda^n V(x) + \frac{b}{1-\lambda}$, for all $x\in\mathcal{X}$. For any $t\geq 0$, and for $1<p=\frac{1}{2\beta}$, and $\frac{1}{p}+\frac{1}{q}=1$, we have $$\begin{aligned}
\mathbb{E}\left[\left(V^\beta(X_t) + V^\beta(Y_{t-1})\right)^2\mathds{1}(\tau>t)\right] & \leq & \mathbb{E}^{1/p}\left[\left(V^\beta(X_t) + V^\beta(Y_{t-1})\right)^{2p}\right]\mathbb{P}^{1/q}(\tau>t)\;\;\mbox{(H\"older)}\\
& \leq & \left\{\mathbb{E}^{1/(2p)}\left[V^{2p\beta}(X_t)\right] + \mathbb{E}^{1/(2p)}\left[V^{2p\beta}(Y_{t-1})\right]\right\}^2 \\
&& \;\;\;\;\;\;\;\;\;\;\;\;\;\;\hspace{3cm}\times \mathbb{P}^{1/q}(\tau>t)\;\;\;\mbox{(Minkowski)}\\
& \leq & C\mathbb{P}^{1/q}(\tau>t)\hspace{4.8cm}(\mbox{by \eqref{eq:control:moment}})\\
& \leq & C\delta_\beta^{t} \hspace{6cm}(\mbox{by Assumption }\ref{assumption:meetingtime}),\end{aligned}$$ where $\delta_\beta = \delta^{1/q}$ with $\delta$ as in Assumption \[assumption:meetingtime\]. Note that all the expectations on the right hand side of (\[proof:propvar:eq2\]) are of the form $\mathbb{E}\left[ \bar g^2(Z_{t})\mathds{1}(\tau> t)\right]$, except for the term $\mathbb{E}\left[\left(\bar g(Z_{t+1}) - \bar P [\bar g](Z_t)\right)^2\mathds{1}(\tau> t)\right]$. However by the martingale difference property, $\mathbb{E}\left[\bar P [\bar g](Z_t)\left(\bar g(Z_{t+1}) - \bar P [\bar g](Z_t)\right)\mathds{1}(\tau> t)\right] = 0$, so that for all $t\geq k$, $$\begin{aligned}
\mathbb{E}\left[\left(\bar g(Z_{t+1}) - \bar P [\bar g](Z_t)\right)^2\mathds{1}(\tau> t)\right] &= &\mathbb{E}\left[\mathds{1}(\tau> t)\bar g(Z_{t+1})^2\right] - \mathbb{E}\left[\bar g(Z_{t+1})\bar P [\bar g](Z_t)\mathds{1}(\tau> t)\right]\\
& = & \mathbb{E}\left[\mathds{1}(\tau> t)\bar g(Z_{t+1})^2\right] - \mathbb{E}\left[\left(\bar P [\bar g](Z_t)\right)^2\mathds{1}(\tau> t)\right] \;( \mbox{by conditioning on } \F_t)\\
& \leq & \mathbb{E}\left[\mathds{1}(\tau> t)\bar g(Z_{t+1})^2\right]\\
& \leq & |g|_{V^\beta}^2\mathbb{E}\left[\mathds{1}(\tau> t)\left(V^\beta(X_{t+1}) + V^\beta(Y_t)\right)^2\right]\\
& \leq & C\delta_\beta^{t},\end{aligned}$$ using the same arguments as above. On the other hand, for all $t\geq k$, the term $\mathbb{E}\left[ \bar g^2(Z_{t})\mathds{1}_{\{\tau> t\}}\right]$ satisfies $$\mathbb{E}\left[ \bar g^2(Z_{t})\mathds{1}(\tau> t)\right] \leq |g|_{V^\beta}^2\mathbb{E}\left[\left(V^\beta(X_t) + V^\beta(Y_{t-1})\right)^2\mathds{1}(\tau> t)\right]\leq C\delta_\beta^t,$$ as seen above. In conclusion, all the expectations appearing in (\[proof:propvar:eq2\]) are upper bounded by some constant times terms of the form $\delta_\beta^t$. We conclude that $$\mathbb{E}\left[(S_k^{(N)})^2\right] \leq C\left(b_k^2\delta_\beta^k + b_N^2\delta_\beta^N + \sum_{t=k}^{N-1}b_t^2\delta_\beta^t + \left[\sum_{t=k}^{N-1} |b_{t+1}-b_t|\delta_\beta^t\right]^2\right).$$ Letting $N\to\infty$, we obtain $$\mathbb{E}[S_k^2] \leq C \left(b_k^2\delta_\beta^k + \sum_{j\geq k}b_j^2\delta_\beta^j + \left[\sum_{j\geq k} |b_{j+1}-b_j|\delta_\beta^j\right]^2\right).$$ In the particular case of $\eta_{k:m}$, we have $\eta_{k:m} = \mathds{1}(\tau>k) \sum_{t=k}^{\tau-1}
\min\left(1,\frac{t+1-k}{m+1-k}\right)\left(h(X_{t+1})-h(Y_{t})\right)$. Hence $b_k = 0$, $b_t = (t-k)/(m-k+1)$ if $k< t\leq m+1$, $b_t=1$ if $t> m+1$. We then obtain the bound of Proposition \[prop1\].
Proof of Proposition \[prop:vgeometric\]
----------------------------------------
Here $Z_n$ is defined as $(X_n,Y_{n-1})$ for all $n\geq 1$. Assumption (\[mino\]) implies that for $(x,y) \in \mathcal{C}\times
\mathcal{C}$, $\bar P$ can be written as a mixture $$\bar P\left((x,y),
dz\right) = \epsilon_{x,y}\nu_{x,y}( dz) + (1-\epsilon_{x,y}) R\left((x,y),
dz\right),$$ where $\epsilon_{x,y}\geq \epsilon$, $\nu_{x,y}( dz)$ is the normalized restriction of $\bar P\left((x,y), dz\right)$ to $\mathcal{D}$ (that is, for any measurable subset $A$ of $\mathcal{D}$, $\nu_{x,y}(A) = \bar P\left((x,y), A\cap\mathcal{D}\right)/\bar P\left((x,y), \mathcal{D}\right)$), and $R\left((x,y), dz\right)$ is the correspondingly normalized restriction of $\bar P\left((x,y), dz\right)$ to $(\mathcal{X}\times \mathcal{X})\setminus \mathcal{D}$. This means that whenever $(x,y)\in \mathcal{C}\times \mathcal{C}$ one can sample from $\bar P((x,y),\cdot)$ by drawing independently a Bernoulli random variable $J$ with probability of success $\epsilon_{x,y}$. Then, if $J=1$, we draw from $\nu_{x,y}$; if $J=0$, we draw from $R\left((x,y), \cdot\right)$. From this decomposition, the proof of the proposition follows the same lines as in [@doucetal04], and we give the details only for completeness. We cannot directly invoke their result since their assumptions do not seem to apply to our setting.
Set $\bar V(x,y) = \frac{1}{2}(V(x) + V(y))$. First we show that the bivariate kernel satisfies a geometric drift towards $\mathcal{C}
\times \mathcal{C}$. That is, there exists $\alpha\in (0,1)$ such that $$\label{biv:drift}
\bar P\bar V(x,y) \leq \alpha \bar V(x,y),\;\;\;(x,y)\notin\mathcal{C}\times\mathcal{C}.$$ Indeed for $(x,y)\notin \mathcal{C}\times \mathcal{C}$, since $V\geq 1$, and $\mathcal{C} = \{V\leq L\}$, $\bar V(x,y) \geq (1+L)/2$. In other words, $\frac{1}{2} \leq \bar V(x,y)/(1+L)$. Therefore, $$\bar P\bar V(x,y) = \frac{1}{2}\left(PV(x) + PV(y)\right) \leq \lambda\bar V(x,y) + \frac{b}{2}\leq \lambda\bar V(x,y) + \frac{b}{1+L} \bar V(x,y)\leq \alpha\bar V(x,y),$$ with $\alpha = \lambda +\frac{b}{1+L}<1$. We set $$B = \max\left(1,\frac{1}{\alpha}\sup_{(x,y)\in\mathcal{C}\times\mathcal{C}}\frac{\bar P[\bar V\mathds{1}_{\D^c}](x,y)}{\bar V(x,y)}\right) \leq \frac{\lambda + b}{\alpha}.$$ In this section $\mathds{1}_\mathcal{S}(\cdot)$ refers to the indicator function on the set $\mathcal{S}$. Let $N_n$ denote the number of visits to $\mathcal{C}\times\mathcal{C}$ by time $n$. Then $$\mathbb{P}(\tau>n) = \mathbb{P}(\tau>n,\; N_{n-1}\geq j) + \mathbb{P}(\tau>n,\; N_{n-1}<j).$$ The event $\{\tau>n,\; N_{n-1}\geq j\}$ implies that no success occurred within at least $j$ independent Bernoulli random variables each with probability of success at least $\epsilon$. Hence $$\mathbb{P}(\tau>n,\; N_{n-1}\geq j)\leq (1-\epsilon)^j.$$ For the second term, we have (since $B\geq 1$, and the chains stay together after meeting via Assumption \[assumption:sticktogether\]), $$\mathbb{P}(\tau>n,\; N_{n-1}\leq j-1) \leq \mathbb{P}\left(Z_n\notin \D, B^{-N_{n-1}} \geq B^{-(j-1)}\right)= \mathbb{P}\left(\mathds{1}_{\D^c}(Z_n) B^{-N_{n-1}}\geq B^{-(j-1)}\right).$$ Then use Markov’s inequality to conclude that $$\begin{gathered}
\mathbb{P}(\tau>n,\; N_{n-1}\leq j-1) \leq B^{j-1} \mathbb{E}\left[\mathds{1}_{\D^c}(Z_n) B^{-N_{n-1}}\right]\\
\leq B^{j-1} \mathbb{E}\left[\mathds{1}_{\D^c}(Z_n) B^{-N_{n-1}}\bar V(Z_n)\right] = \alpha ^n B^{j-1} \mathbb{E}[M_n],\end{gathered}$$ where $M_n = \mathds{1}_{\D^c}(Z_n) \alpha^{-n} B^{-N_{n-1}}\bar V(Z_n)$ (set $N_{0}=0$ so that $M_1$ is well-defined). The result follows by noting that $\{M_n,\F_n\}$ is a super-martingale, where $\F_n = \sigma(Z_1,\ldots,Z_n)$, so that $\mathbb{E}[M_n]\leq \mathbb{E}[M_1]\leq \pi_0(V)+\pi_0(PV)\leq (1+\lambda)\pi_0(V)+b$, which implies that $$\mathbb{P}(\tau>n) \leq (1-\epsilon)^j + \alpha^n B^{j-1} \left((1+\lambda)\pi_0(V)+b\right).$$ Since $\alpha < 1$, there exists an integer $k_0\geq 1$ such that $\alpha B^{\frac{1}{k_0}}< 1$. In that case for $n\geq k_0$ one can take $j=\lceil n/k_0\rceil$, to get $$\mathbb{P}(\tau>n)\leq \left\{(1-\epsilon)^\frac{1}{k_0}\right\}^n + \left((1+\lambda)\pi_0(V)+b\right) \left\{\alpha B^{\frac{1}{k_0}}\right\}^n ,$$ as claimed.
The argument that $\{M_n,\F_n\}$ is a super-martingale is as follows. We need to show that for all $n\geq 1$, $\mathbb{E}[M_{n+1}\vert \F_n]\leq M_n$. Note that $\mathbb{E}[M_{n+1}\vert \F_n] = 0\leq M_n$ on the event $\{Z_n\in \D\}$. So it is enough to assume that $Z_n \notin\D$. Now, suppose also that $Z_n\notin \mathcal{C}\times\mathcal{C}$. Then $N_n=N_{n-1}$, and $$\begin{aligned}
\mathbb{E}[M_{n+1}\vert \F_n] & = & \alpha^{-n-1}\mathbb{E}\left[B^{-N_{n-1}} \mathds{1}_{\D^c}(Z_{n+1})\bar V(Z_{n+1})\vert \F_n\right],\\
& =& \alpha^{-n-1} B^{-N_{n-1}} \mathbb{E}\left[\mathds{1}_{\D^c}(Z_{n+1})\bar V(Z_{n+1})\vert Z_n\right],\\
& \leq & \alpha^{-n-1} B^{-N_{n-1}} \mathbb{E}\left[\bar V(Z_{n+1})\vert Z_n\right],\\
& \leq & \alpha^{-n} B^{-N_{n-1}} \bar V(Z_{n}),\\
& = & M_n.\end{aligned}$$ Suppose now that $Z_n\in \mathcal{C}\times\mathcal{C}$. Then $N_n = N_{n-1}+1$. Hence $$\begin{aligned}
\mathbb{E}[M_{n+1}\vert \F_n] & = & \alpha^{-n-1}B^{-N_{n-1}-1}\mathbb{E}\left[\mathds{1}_{\D^c}(Z_{n+1})\bar V(Z_{n+1})\vert \F_n\right],\\
& = & \alpha^{-n} B^{-N_{n-1}}\bar V(Z_n) \frac{1}{\alpha B}\frac{\bar P[\mathds{1}_{\D^c}\bar V](Z_n)}{\bar V(Z_n)} ,\\
& \leq & \alpha^{-n} B^{-N_{n-1}}\bar V(Z_n) = M_n.\end{aligned}$$
[^1]: Department of Statistics, Harvard University, Cambridge, USA. Email: pjacob@fas.harvard.edu
[^2]: Department of Statistics, Harvard University, Cambridge, USA. Email: joleary@g.harvard.edu
[^3]: Department of Statistics, University of Michigan, Ann Arbor, USA. Email: yvesa@umich.edu
[^4]: Link: [github.com/pierrejacob/debiasedmcmc](https://github.com/pierrejacob/debiasedmcmc).
---
abstract: 'X-ray reflectivity studies demonstrate the condensation of a monovalent ion at the electrified interface between electrolyte solutions of water and 1,2-dichloroethane. Predictions of the ion distributions by standard Poisson-Boltzmann (Gouy-Chapman) theory are inconsistent with these data at higher applied interfacial electric potentials. Calculations from a Poisson-Boltzmann equation that incorporates a non-monotonic ion-specific potential of mean force are in good agreement with the data.'
author:
- Nouamane Laanait
- Jaesung Yoon
- Binyang Hou
- 'Mark L. Schlossman'
- Petr Vanysek
- Mati Meron
- Binhua Lin
- Guangming Luo
- Ilan Benjamin
bibliography:
- 'bibliotheque.bib'
title: 'Monovalent Ion Condensation at the Electrified Liquid/Liquid Interface'
---
Interfacial ion distributions underlie numerous electrochemical and biological processes, including electron and ion transfer across charged biomembranes and energy storage in electrochemical capacitors. The solution to the Poisson-Boltzmann equation for a planar geometry, Gouy-Chapman theory, including modifications with a Stern layer, is often used to predict ion distributions near those interfaces.[@Gouy1910; @*Chapman1913; @*Stern1924] We showed previously that the predictions of such theories are inconsistent with x-ray reflectivity measurements of ion distributions at an electrified liquid/liquid interface.[@Luo2006a; @*Luo2006b] Instead, an ion-specific Poisson-Boltzmann equation (PB-PMF) that incorporated a potential of mean force (PMF) for each ion produced excellent agreement with the x-ray results.[@Luo2006a; @*Luo2006b] The PB-PMF theory accounts for interactions and correlations between ions and solvents that are left out of Gouy-Chapman theory. This approach has promise for understanding ion-specific effects that are relevant to many chemical processes.[@Lima2008a]\
![Circular glass sample cell and x-ray kinematics. Electrochemical cell diagram: Ag$\arrowvert$AgCl $\arrowvert$ 0.1M NaCl $\arrowvert$ water $\arrowvert$ +20 mM HEPES $\arrowvert$ 5 mM BTPPATPFB $\arrowvert$ DCE $\arrowvert$ 10 mM LiCl+1 mM BTPPACl $\arrowvert$ water $\arrowvert$ AgCl $\arrowvert$ Ag. A four-electrode potentiostat (Solartron 1287) is used to apply potential at counter electrodes (CE$_{1,2}$, 9 cm$^{2}$ Pt mesh) and monitor potential at reference electrodes (RE$_{1,2}$) in Luggin capillaries within 4 mm of interface. The liquid/liquid interface of 7 cm diameter is pinned by a Teflon strip (affixed to the glass wall) and flattened by adjusting the volume of DCE phase. Volume ratio of water:DCE is 2:1. The x-ray wave vector transfer $\vec{Q}=\vec{k}_{out}-\vec{k}_{in}$.[]{data-label="fig:cell"}](figures/FIG1)
Here, we demonstrate the condensation of a monovalent ion at a liquid/liquid interface. Recent theory proposes that condensation of multivalent ions is the result of strong ion-ion correlations.[@Grosberg2002] However, these theories do not predict such distributions for monovalent ions. Our current results can be understood by PB-PMF theory. We have chosen to fit our x-ray data to the potentials of mean force instead of fitting to a model of the electron density profile because a single PMF for each ion determines the ion distributions for all interfacial potentials.\
The system under study is the liquid/liquid interface between a 100 mM aqueous solution of NaCl (Fisher Scientific) including 20 mM HEPES to buffer the pH to 7.0, and a 5 mM solution of bis(triphenyl phosphoranylidene) ammonium tetrakis(pentafluorophenyl) borate (BTPPA$^{+}$, TPFB$^{-}$) in 1,2-dichloroethane (DCE, Fluka). Water was produced by a Barnstead Nanopure system and DCE was purified using a column of basic alumina. BTPPATPFB was synthesized from BTPPACl (Aldrich) and LiTPFB (Boulder Scientific).[@Fermin1999a] Conductance measurements using the method in Ref. [@Raymond1949] determined that 54% of BTPPATPFB is dissociated in DCE.\
The electric potential difference $\Delta\phi^{w-o}(=\phi^{water} - \phi^{oil})$ between the water and oil (DCE) phases is given by the applied potential difference across the electrochemical cell (Fig.\[fig:cell\]) minus the potential of zero charge ($\Delta\phi^{w-o}=\Delta\phi^{w-o}_{applied} - \Delta\phi^{w-o}_{pzc}$). We determined $\Delta\phi^{w-o}_{pzc}= 318\pm 3$ mV by measuring the interfacial tension as a function of $\Delta\phi^{w-o}_{applied} $[@Schmickler1996]. The ions Na$^{+}$ and Cl$^{-}$ stay primarily in the aqueous phase and BTPPA$^{+}$ and TPFB$^{-}$ stay in the DCE phase throughout the potential range studied. When $\Delta\phi^{w-o} \neq0$, the ions form back-to-back electrical double layers at the interface. For example, when $\Delta\phi^{w-o} > 0$, the concentrations of Na$^{+}$and TPFB$^{-}$ are enhanced at the interface while those of Cl$^{-}$ and BTPPA$^{+}$ are depleted. The variation of ionic concentration along the interfacial normal produces a variation in the electron density profile $\rho(z)$ (averaged over the x-y plane) that is probed by x-ray reflectivity.\
![X-ray reflectivity $R(Q_{z})$ normalized to Fresnel reflectivity $R_{F}(Q_{z})$ for various potentials across the water/1,2-dichloroethane interface as a function of wave vector transfer normal to the interface at T=296 K. From top to bottom $\Delta\phi^{w-o}=0.33$ V ($\circ$), $0.28$ V ($\bullet$), $0.18$ V($\circ$), $0.08$ V ($\bullet$), $-0.02$ V($\circ$), and $-0.12$ V($\diamond$) (offset for viewing purposes). Dashed lines: Gouy-Chapman theory. Solid lines: PB-PMF. Red and blue lines indicate the use of two different PMFs for TPFB$^{-}$ (see text).[]{data-label="fig:RRf"}](figures/FIG2)
X-ray reflectivity measurements $R(Q_{z})$ from the electrified liquid/liquid interface were carried out at the ChemMatCARS sector of the Advanced Photon Source.[@Schlossman1997a] $R(Q_{z})$ is the reflected intensity normalized by the incident intensity (after subtraction of background scattering [@Zhang1999]) as a function of wave vector transfer $Q_{z}=(4\pi/\lambda)\sin\alpha$, where $\lambda$ ($=0.41255\pm0.00005$ Å) is the x-ray wavelength and $\alpha$ is the angle of incidence (Fig. 1). Figure 2 illustrates $R(Q_{z})$/$R_{F}(Q_{z})$ for different $\Delta\phi^{w-o}$. The variation of the peak amplitude in $R$/$R_{F}$ with increasing $\Delta\phi^{w-o}$ reveals the formation of a TPFB$^{-}$ layer, as discussed below.\
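The scattering geometry above is a one-line calculation; as a minimal sketch, the wave vector transfer can be evaluated directly from the quoted wavelength (the 1° incidence angle used below is an arbitrary illustrative value, not taken from the text):

```python
from math import pi, sin, radians

WAVELENGTH = 0.41255  # x-ray wavelength in angstroms, from the text

def Qz(alpha_deg):
    """Wave vector transfer normal to the interface: Q_z = (4*pi/lambda)*sin(alpha)."""
    return (4.0 * pi / WAVELENGTH) * sin(radians(alpha_deg))

# e.g. Qz(1.0) is roughly 0.53 inverse angstroms
```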
![Potentials of mean force for BTPPA$^{+}$ (black) and TPFB$^{-}$ \[$W_{TPFB^{-}}^{I}(z)$:red,$W_{TPFB^{-}}^{II}(z)$:blue\] determined by fitting the reflectivity data in Fig. \[fig:RRf\]. PMFs for Na$^{+}$ (green dots) and Cl$^{-}$ (circles) were calculated by MD simulations (see text) (Ref. [@Wick2008]).[]{data-label="fig:PMF"}](figures/FIG3)
We analyzed the x-ray data using the Poisson-Boltzmann (PB) equation, $$\frac{d^{2}\phi(z)}{dz^{2}}=- \frac{1}{\varepsilon_{o}\varepsilon}\sum_{i}e_{i}c_{i}^{o} \exp[-\Delta E_{i}(z)/k_{B}T],
\label{eq:pb}$$ which relates the electric potential $\phi(z)$ along the interfacial normal $z$ to the concentration profile of ion $i$, $c_{i}(z)=c_{i}^{o}\exp[-\Delta E_{i}(z)/k_{B}T]$, with Boltzmann constant $k_{B}$, temperature $T$, charge $e_{i}$ of ion $i$ (BTPPA$^{+}$, TPFB$^{-}$, Na$^{+}$, and Cl$^{-}$), permittivity of free space $\varepsilon_{o}$, and dielectric constant $\varepsilon$ of either DCE (10.43) for $z<0$ or water (78.54) for $z>0$. $\Delta E_{i}(z)$ is the energy of ion $i$ relative to its value in the bulk phase. The bulk ion concentration $c_{i}^{o}$ is calculated from the Nernst equation[^1]$^{,}$[@Volkov1996]. Fitting to the data involves calculating the electron density $\rho(z)$ and $R/R_{F}$ from the ion concentration profiles $c_{i}(z)$ as described previously.[@Luo2006a; @*Luo2006b] For the purpose of calculating $\rho(z)$ from $c_{i}(z)$ the ions were modeled as spheres of diameter 2[Å]{} for Na$^{+}$, 3.5[Å]{} for Cl$^{-}$, 12.6[Å]{} for BTPPA$^{+}$, and 10[Å]{} for TPFB$^{-}$, where the latter two were estimated from the crystal structure of BTPPATPFB.[@Marcus1988]$^{,}$ [^2] The TPFB$^{-}$ ion provides the dominant ionic contribution to the electron density profile when $\Delta\phi^{w-o}>0$.\
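To illustrate the boundary-value structure of Eq. \[eq:pb\], the sketch below solves the dimensionless Poisson-Boltzmann equation numerically in the Gouy-Chapman limit ($W_{i}=0$, symmetric 1:1 electrolyte), where an exact solution exists for comparison. This is not the fitting code used in the analysis, only a self-contained demonstration; the grid size and surface potential are arbitrary illustrative choices.

```python
import numpy as np

def solve_pb(psi0, L=10.0, n=401, iters=40):
    """Solve psi'' = sinh(psi) on [0, L] with psi(0)=psi0, psi(L)=0.

    x is in units of the Debye length and psi = e*phi/(k_B*T).
    Damped-free Newton iteration on a finite-difference grid.
    """
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    psi = psi0 * np.exp(-x)              # initial guess: Debye-Hueckel decay
    psi[0], psi[-1] = psi0, 0.0
    m = n - 2
    for _ in range(iters):
        # residual F = psi'' - sinh(psi) at the interior points
        F = (psi[:-2] - 2.0 * psi[1:-1] + psi[2:]) / h**2 - np.sinh(psi[1:-1])
        # tridiagonal Jacobian dF/dpsi
        J = np.zeros((m, m))
        np.fill_diagonal(J, -2.0 / h**2 - np.cosh(psi[1:-1]))
        idx = np.arange(m - 1)
        J[idx, idx + 1] = 1.0 / h**2
        J[idx + 1, idx] = 1.0 / h**2
        psi[1:-1] -= np.linalg.solve(J, F)   # Newton step: J*dpsi = -F
    return x, psi

def gouy_chapman(psi0, x):
    # closed-form solution of the same equation
    return 4.0 * np.arctanh(np.tanh(psi0 / 4.0) * np.exp(-x))

x, psi = solve_pb(2.0)
err = np.max(np.abs(psi - gouy_chapman(2.0, x)))
```

The same Newton scheme extends to the PB-PMF case by adding $W_{i}(z)/k_{B}T$ to the Boltzmann exponents.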
The Gouy-Chapman theory assumes that $\Delta E_{i}(z)=e_{i}\phi(z)$ in Eq. \[eq:pb\]. Fits of $R/R_{F}$ to predictions of Gouy-Chapman theory (Fig. \[fig:RRf\], dashed lines) used only the interfacial roughness and a $Q_{z}$ offset ($10^{-4}$[Å]{}$^{-1}$, a typical misalignment of the reflectometer) as fitting parameters.
\begin{tabular}{lcccccc}
 & $W(0)$ & $L^{o}$ & $L^{w}$ & $z_{0}$ & $\sigma_{PMF}$ & $D$\\
 & $(k_{B}T)$ & (Å) & (Å) & (Å) & (Å) & $(k_{B}T)$\\
\hline
$W_{TPFB^{-}}^{I}(z)$ & $-5\pm0.5$ & $3\pm0.1$ & $9\pm4$ & $-3.5\pm0.2$ & $3.4\pm0.2$ & $-9\pm0.25$\\
$W_{TPFB^{-}}^{II}(z)$ & $-25\pm0.5$ & $11\pm0.5$ & $10\pm2.7$ & $-7.5\pm0.3$ & $2.6\pm0.3$ & $-5.25\pm0.2$\\
\end{tabular}
\[table1\]
These fits agree with the data at small $\Delta\phi^{w-o}$ ($-0.12$ V to $0.18$ V), but at larger $\Delta\phi^{w-o}$ (0.28 V and 0.33 V) $R/R_{F}$ is greatly overestimated primarily because Gouy-Chapman theory predicts unphysically large ion concentrations near the interface.\
Ion-specific effects can be included in Eq. \[eq:pb\] by expressing $\Delta E_{i}(z)\approx e_{i}\phi(z)+W_{i}(z)$, where $W_{i}(z)$ is the potential of mean force (PMF) for each ion $i$.[@Luo2006a; @*Luo2006b; @Horinek2007; @Daikhin2001] The PMF of Na$^{+}$ was calculated from a molecular dynamics (MD) simulation for a single ion.[@Benjamin2008] The PMF of Cl$^{-}$ was taken from an MD simulation in the literature.[@Wick2008] Fig. \[fig:PMF\] illustrates the monotonic variation of $W_{i}(z)$ for Na$^{+}$ and Cl$^{-}$. Due to the computational difficulties of simulating $W_{i}(z)$ for large molecular ions such as BTPPA$^{+}$ and TPFB$^{-}$ we used a phenomenological PMF previously introduced in Refs. [@Luo2006a; @*Luo2006b], $$W_{i}(z)=(W_{i}(0)-W_{i}^{p})\frac{\mathrm{erfc}[(|z|-\delta_{i}^{p})/L_{i}^{p}]}{\mathrm{erfc}[-\delta_{i}^{p}/L_{i}^{p}]}+W_{i}^{p},
\label{eq:erf_pmf}$$ where $p(=w,o)$ refers to either the water phase ($z\geq0$) or the oil phase (DCE, $z\leq0$), $W_{i}^{o}-W_{i}^{w}$ is the Gibbs energy of transfer of ion $i$ from water to oil, $\delta_{i}^{p}$ is an offset to ensure continuity of $W_{i}(z)$ at $z = 0$, and $L_{i}^{p}$ characterizes the decay of $W_{i}(z)$ from its value at $z=0$ to the bulk values $W_{i}^{w}$ and $W_{i}^{o}$. We used this monotonic PMF for BTPPA$^{+}$, but had to modify it for TPFB$^{-}$, as described below. Since $W_{i}^{o}-W_{i}^{w}$ for BTPPA$^{+}$ is known (footnote 1), the PMF of BTPPA$^{+}$ is characterized by three parameters determined by fitting to $R/R_{F}$ data at $\Delta\phi^{w-o}= -0.12$ V, where the BTPPA$^{+}$ interfacial concentration is expected to be enhanced: $L^{w}_{BTPPA^{+}} (=14 +12/-6$[Å]{}), $L^{o}_{BTPPA^{+}} (=20 +11/-6$[Å]{}), and $W_{BTPPA^{+}}(0) (=13\pm 2\ k_{B}T )$. The large error bars on the PMF of BTPPA$^{+}$ are due to the small magnitude of the most negative $\Delta\phi^{w-o}$ that we studied.
![(left) Ion concentration profiles (in units of molarity) at $\Delta\phi^{w-o}=0.33$V calculated from PB-PMF. BTPPA$^{+}$ (black), TPFB$^{-}$ \[$W_{TPFB^{-}}^{I}(z)$ :red,$W_{TPFB^{-}}^{II}(z)$ :blue\], Na$^{+}$ (dots) and Cl$^{-}$ (circles). (right) Electron density profiles for various potentials calculated from PB-PMF. Top to bottom: $\Delta\phi^{w-o}=0.33$V \[$W_{TPFB^{-}}^{I}(z)$:red,$W_{TPFB^{-}}^{II}(z)$:blue\], 0.28V (dashed), 0.18V (solid), 0.08(dashed), $-0.02$V (solid), and $-0.12$ V (dashed).[]{data-label="fig:iondens"}](figures/FIG4A "fig:") ![(left) Ion concentration profiles (in units of molarity) at $\Delta\phi^{w-o}=0.33$V calculated from PB-PMF. BTPPA$^{+}$ (black), TPFB$^{-}$ \[$W_{TPFB^{-}}^{I}(z)$ :red,$W_{TPFB^{-}}^{II}(z)$ :blue\], Na$^{+}$ (dots) and Cl$^{-}$ (circles). (right) Electron density profiles for various potentials calculated from PB-PMF. Top to bottom: $\Delta\phi^{w-o}=0.33$V \[$W_{TPFB^{-}}^{I}(z)$:red,$W_{TPFB^{-}}^{II}(z)$:blue\], 0.28V (dashed), 0.18V (solid), 0.08(dashed), $-0.02$V (solid), and $-0.12$ V (dashed).[]{data-label="fig:iondens"}](figures/FIG4B "fig:")
The x-ray reflectivity at the two highest positive potentials cannot be fit if Eq. \[eq:erf\_pmf\] is used to model the PMF for TPFB$^{-}$. The simplest model that will produce the peaks in Fig. \[fig:RRf\] is a single layer of TPFB$^{-}$ ions at the interface (note that a layer of Na$^{+}$, whose concentration is also enhanced at the interface, cannot provide the x-ray contrast required to fit the data). The TPFB$^{-}$ layer is modeled by an attractive well in the PMF: $W_{TPFB^{-}}(z)$ is given by Eq. \[eq:erf\_pmf\] plus a Gaussian function $D\exp[-(z-z_{0})^{2}/2\sigma^{2}_{PMF}]$ for $z<0$, along with a constant offset at $z=0$ to maintain continuity (see Fig.\[fig:PMF\]). Analysis with a Lorentzian produced similar results.\
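This construction can be made concrete in a short numerical sketch: the erfc profile of Eq. \[eq:erf\_pmf\] plus the Gaussian well, with parameter values loosely based on the $W_{TPFB^{-}}^{I}$ row of Table \[table1\]. The bulk offsets ($W^{o}=0$ as the reference, $W^{w}\approx 29.5\,k_{B}T$ from the 72.5 kJ/mol transfer energy at 296 K) and $\delta=0$ are illustrative assumptions, not fitted values:

```python
from math import erfc, exp

def pmf_branch(z_abs, W0, Wp, delta, L):
    # erfc profile: equals W0 at z = 0 and decays to the bulk value Wp
    return (W0 - Wp) * erfc((z_abs - delta) / L) / erfc(-delta / L) + Wp

def pmf_tpfb(z, W0=-5.0, Ww=29.5, Wo=0.0, Lw=9.0, Lo=3.0,
             delta=0.0, D=-9.0, z0=-3.5, sigma=3.4):
    """PMF of TPFB- in units of k_B*T; z > 0 is water, z < 0 is DCE.

    Illustrative parameters only (loosely based on Table I, row W_I)."""
    if z >= 0.0:
        return pmf_branch(z, W0, Ww, delta, Lw)
    # oil side: Gaussian well added to the erfc profile; the erfc part is
    # shifted by the well's value at z = 0 so the total stays continuous
    well = D * exp(-(z - z0) ** 2 / (2.0 * sigma ** 2))
    well0 = D * exp(-z0 ** 2 / (2.0 * sigma ** 2))
    return pmf_branch(-z, W0 - well0, Wo, delta, Lo) + well
```

By construction $W(0^{+})=W(0^{-})=W(0)$, and the profile relaxes to the two bulk offsets far from the interface.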
The six parameters of $W_{TPFB^{-}}(z)$ \[$z_{0}$, $D$, $\sigma_{PMF}$, $L^{w}_{TPFB^{-}}$, $L^{o}_{TPFB^{-}}$, $W_{TPFB^{-}}(0)$\] along with the $Q_{z}$ offset and the interfacial roughness ($4.3\,\textrm{\AA} < \sigma < 5.1\,\textrm{\AA}$) are determined by fitting $R/R_{F}$ measured at $\Delta\phi^{w-o}= 0.28$ V and 0.33 V, where the concentration of TPFB$^{-}$ is enhanced at the interface (Table \[table1\]). This fitting is performed under the constraint that the resultant $W_{ion}(z)$ produces $R/R_{F}$ in agreement with the data over the *entire* range of potentials. In addition, fitted PMFs were rejected if the fit value of the roughness $\sigma$ was unphysically small; in those cases an interfacial bending modulus[@Safran1994] on the order of $1000\ k_{B}T$ would have been required to reconcile the discrepancy of $\sigma$ with its value predicted by capillary wave theory[@Luo2006]. In the case of the TPFB$^{-}$ PMF, two local minima in $\chi^{2}$-space (denoted $W_{TPFB^{-}}^{I}(z)$ and $W_{TPFB^{-}}^{II}(z)$) were found to satisfy these conditions. Potential profiles intermediate between $W_{TPFB^{-}}^{I}(z)$ and $W_{TPFB^{-}}^{II}(z)$ do not satisfy these conditions. Most of these fits had values of $\sigma$ within one standard deviation of capillary wave theory predictions using the measured potential-dependent interfacial tension.[@Luo2006a; @*Luo2006b] Fits to $W_{TPFB^{-}}^{II}(z)$ at $\Delta\phi^{w-o}= 0.28$ V and 0.33 V had values of $\sigma$ within two standard deviations of capillary wave theory.\
The PB-PMF model with the $W_{i}(z)$ shown in Fig. \[fig:PMF\] produces $R/R_{F}$ in good agreement with the data over the entire range of measured potentials (Fig. \[fig:RRf\]). The attractive wells for $W_{TPFB^{-}}^{I,II}$ have comparable depths (6 $k_{B}T$ for $W_{TPFB^{-}}^{I}(z)$ and 5 $k_{B}T$ for $W_{TPFB^{-}}^{II}(z)$), FWHM, and centers (Table 1). The ion concentration profiles $c_{i}(z)$ are calculated from Eq. \[eq:pb\] using $W_{i}(z)$. Figure \[fig:iondens\] shows that the $c_{i}(z)$ at the highest potential, $\Delta\phi^{w-o}= 0.33$ V, take the form of two back-to-back double layers with a sharply defined layer of TPFB$^{-}$. The $c_{i}(z)$ calculated from $W_{TPFB^{-}}^{I}(z)$ and $W_{TPFB^{-}}^{II}(z)$ differ mainly in the broadness of the TPFB$^{-}$ profile, which in the case of $W_{TPFB^{-}}^{I}(z)$ returns to its bulk value at $z=0$, while $W_{TPFB^{-}}^{II}(z)$ allows TPFB$^{-}$ to penetrate slightly more into the water phase. The electron density profiles $\rho(z)$ calculated from the two $c_{i}(z)$ are almost identical (Fig. \[fig:iondens\]), which demonstrates why our data cannot discriminate between $W_{TPFB^{-}}^{I}(z)$ and $W_{TPFB^{-}}^{II}(z)$.\
The maximum density of TPFB$^{-}$ near the interface occurs at $\Delta\phi^{w-o}= 0.33$ V and is 1 nm$^{2}$ per TPFB$^{-}$ ion when $W_{TPFB^{-}}^{I}(z)$ is used or 1.5 nm$^{2}$ per TPFB$^{-}$ ion when $W_{TPFB^{-}}^{II}(z)$ is used. Both values represent a high-density layer for an ion of 1 nm diameter. Although dense ionic layers have been observed in the interfacial adsorption of charged amphiphiles, [@Leveiller1991] the absence of a dense TPFB$^{-}$ layer at $\Delta\phi^{w-o}\approx0$ indicates that TPFB$^{-}$ is, at most, weakly amphiphilic.\
Simulations and supporting spectroscopy experiments indicate that highly polarizable ions (such as I$^{-}$ with a polarizability of 7.4[Å]{}$^{3}$) are preferentially adsorbed to the water/vapor interface, though dense layers are not expected or observed.[@Winter2006; @*Dang2002] We calculated the polarizability of TPFB$^{-}$ to be 42.9[Å]{}$^{3}$.[@g03]$^{,}$[^3] This large polarizability may play a role in forming a dense layer at high potentials. Also, Borukhov *et al.* suggested that the entropy of the solvent can stabilize large ion adsorption.[@Borukhov1997] Additional theoretical work is required to determine the relevance of these two effects for the data presented here.\
The MD simulations of the potentials of mean force that we used for Na$^{+}$ and Cl$^{-}$ do not account for ion-ion correlations, but they do include ion-solvent and solvent-solvent correlations. Such correlations also account for the monotonic form of $W_{BTPPA^{+}}(z)$ . However, as a result of modeling the x-ray reflectivity, the phenomenological $W_{TPFB^{-}}(z)$ in Fig. \[fig:PMF\] must implicitly account for ion-ion correlations if they are important for the observed condensation. The description of this monovalent ion condensation within PB-PMF theory illustrates the utility of this approach in describing ion-specific effects that are important for the behavior of ions in soft matter.
MLS, PV and IB acknowledge support from NSF-CHE. NL acknowledges support from a UIC University Fellowship and the GAANN program. ChemMatCARS is supported by NSF-CHE, NSF-DMR, and the DOE-BES. The APS at Argonne National Laboratory is supported by the DOE-BES.
[^1]: Gibbs energies of transfer: Na$^{+}$ (57 kJ/mol), Cl$^{-}$ (53 kJ/mol), BTPPA$^{+}$ (56 kJ/mol), TPFB$^{-}$ (72.5 kJ/mol); the last two were measured by partitioning via UV-visible spectroscopy and mass spectrometry.
[^2]: C.Zheng and P. Vanysek (unpublished)
[^3]: TPFB$^{-}$ geometry optimized at B3LYP/6-311++G(d,p) level with tightest constraints on convergence (opt=very tight, int=ultra fine). Resultant geometry agreed closely with crystal structure. Polarizability tensor calculated from optimized structure at the same level of theory.
|
---
abstract: 'Using Cu NQR in Eu-doped $\rm La_{2-x}Sr_xCuO_4$ we find the evidence of the pinned stripe phase at 1.3K for $0.08\leq x\leq 0.18$. The pinned fraction increases by one order of magnitude near hole doping $x=1/8$. The NQR lineshape reveals three inequivalent Cu positions: i) sites in the charged stripe; ii) nonmagnetic sites outside the stripes; iii) sites with a magnetic moment of 0.29$\mu_B$ in the AF correlated regions. A dramatic change of the NQR signal for $x > 0.18$ correlating with the onset of bulk superconductivity corresponds to the depinning of the stripe phase.'
address: |
$^1$Institute for Technical Physics, 420029 Kazan, Russia\
$^2$II Physikalisches Institut, Universität zu Köln, D-50937 Köln, Germany\
$^3$IFF Forschungszentrum Jülich, D-52425 Jülich, Germany
author:
- 'G.B.Teitel’baum,$^1$ B.Büchner,$^2$ H.de Gronckel$^3$'
title: The Cu NQR Study of the Stripe Phase Local Structure in the Lanthanum Cuprates
---
-1cm 0 cm 0 cm
The recognition is growing that doping of the antiferromagnetic (AF) insulating phase of a high-$T_c$ superconductor by holes has an explicit topological character. In fact, according to time reversal symmetry, the segregation of charges into periodic domain walls (stripes) requires an antiphase arrangement of the created AF domains [@3; @2; @3b]. The first evidence for such a stripe phase was provided by neutron studies of the low temperature tetragonal (LTT) phase of Nd-doped $\rm La_{2-x}Sr_xCuO_4$ [@1]. A number of recent papers confirmed the presence of stripe correlations in other cuprates as well[@5; @5a; @5b]. But in spite of the intense interest in the problem, surprisingly little is known about the local properties of the stripe structure.
In this Letter we report the results of a direct study of the stripe phase local structure by means of Cu NQR. The application of NMR and NQR to the study of stripes meets serious difficulties due to the slowing of the charge fluctuations down to the MHz frequency range, which wipes out a large part of the nuclei from the resonance. An important breakthrough[@7], based on a quantitative analysis of the fraction of nuclei wiped out from the NQR, brought insight into the behaviour of the stripe phase order parameter both in cuprates [@7] and nickelates [@7a].
Unfortunately, based on wipeout effects alone, it is impossible to determine the local structure of the stripes, i.e. the charge and internal magnetic field distribution, and the values of the typical local parameters. Such information can be obtained only from NQR analysis of the stripe phase itself, which becomes possible after reappearance of the signal in the slow fluctuation limit. The pinning of stripes at low temperatures enables us to take advantage of the extreme sensitivity of Cu NQR to the local charge and magnetic field distribution. In addition to measuring at temperatures down to 1.3 K, this program is easier to realize in the LTT structure, which helps to pin the stripes. This structure was induced by doping with non-magnetic Eu rare-earth ions instead of magnetic Nd ones (the ordering of Nd moments causes fast Cu nuclear relaxation, hindering the observation of Cu NQR). We expect that in the stripe structure the different Cu sites will be inequivalent with respect to the NQR, providing information on the local properties at given points of the structure.
For our experiments we have chosen fine powders of the series $\rm La_{2-x-y}Eu_ySr_xCuO_4$ with variable Sr content $x$ and fixed Eu content $y =0.17$. The preparation of single phase samples was described in [@8]. It was found [@8] that for such Eu content the LTT phase is realized for $x>0.07$. For Sr concentrations $x>0.12$ the ac-susceptibility and microwave absorption measurements reveal the presence of superconductivity with $T_c = 6,\ 9,\ 14,\ 19,\ 18,\ 16,\ 13$ K for, respectively, $x= 0.12,\ 0.13,\ 0.15,\ 0.18,\ 0.20,\ 0.22,\ 0.24$. The superconducting fraction is small for $x\leq 0.18$, and starting from $x>0.18$ a transition to bulk superconductivity takes place. The NQR measurements were performed with a standard spectrometer in the range 20 - 100 MHz. By lowering the temperature down to 1.3 K we succeeded in observing the Cu NQR spectra at all Sr concentrations. Regarding their NQR properties the samples fall into two groups.
The first one corresponds to Sr concentrations $x\leq 0.18$. The superconducting fraction of these samples, if any, was rather small. Each of the spectra, which are very similar for $0.08\leq x\leq 0.18$, consists of a broad line in the region from 20 MHz up to 80 MHz with an unresolved peak between 30 and 40 MHz (examples of some spectra are shown in Fig. 1). The main distinctions of the spectra for different $x$ are i) the integral intensity of the spectra, which is peaked near $x=0.12$ (Fig. 2a), and ii) the temperatures below which it is possible to observe them (for $x= 0.12;\ 0.13$ they are observable even at temperatures higher than 4.2 K).
The second group of samples, with $x>0.18$ and showing bulk superconductivity, possesses completely different and much narrower NQR spectra (inset to Fig. 1), which can also be observed at much higher temperatures. The intensity grows with increasing $x$ from 0.18 (Fig. 2b).
Beginning the discussion with the $x\leq 0.18$ group, we first consider the above mentioned complicated peak in the lineshapes. Since the natural abundance ratio of $^{63}$Cu and $^{65}$Cu is 2.235 and the ratio of their quadrupole moments is 1.081, it is clear that this peak contains more than one pair of $^{63}$Cu and $^{65}$Cu signals. The possibility that such a picture arises from one site due to the splitting of the signal by the hyperfine field can be ruled out by the different behaviour of the relative intensities of both components upon variation of $x$ and by their different echo decay times. The Gaussian fit to these peaks reveals the existence of two independent copper sites $\it{1}$ and $\it{2}$ (we use this notation in order to distinguish them from the sites A and B known for the superconductors in the low temperature orthorhombic (LTO) phase [@Yasuoka]), having different NQR frequencies (Fig. 3).
To make the site assignment, we note that the NQR frequency is sensitive to the local hole concentration, which varies between 0.5 and 0 hole per Cu atom [@1]. In a linear approximation we obtain that for given $x$ the resonant frequency $\nu_Q$ is connected with the local hole density $n(r)$ via the relation $\nu_Q(x,n)=\nu_Q^0-\alpha x+\beta n$ with empirical constants $\alpha$ and $\beta$. The first term here is the NQR frequency for the compound with zero Sr content; the second one is the negative shift caused by the contraction of the Cu-O bond length induced by the internal pressure appearing upon substitution of La with Sr; the third corresponds to the positive shift due to the local increase of the effective fractional charge on Cu. This expression agrees with calculations both in the ionic [@11a] and in the cluster [@12] models (in the uniform case $n=x$).
It follows from our results (Fig. 3) that the resonance frequencies $^{63}\nu_Q^{(1)}(x)$ for line $\it{1}$ are shifted to lower values from the reference value $^{63}\nu_Q(0,0)=\,^{63}\nu_Q^0$ (we use here $^{63}\nu_Q^0=31.9\ \rm MHz$ estimated for $\rm La_2CuO_4$ [@10]). This indicates that the positive contribution to $^{63}\nu_Q(x,n)$ is small and that the effective fractional charge on sites $\it{1}$ is near zero, i.e. they are located in regions free of doped holes. In contrast, line $\it{2}$ is due to sites which in addition to the negative shift exhibit a positive one, i.e. they belong to regions with an increased average charge (hole density) on the Cu ions.
The high frequency part of the spectrum can be analyzed by subtraction of the $\it{1}$ and $\it{2}$ contributions from the entire signal. The resulting spectra are shown in Fig.1. The frequencies corresponding to their maxima are plotted in Fig.3. We assume that this line corresponds to the broadened $(\pm1/2) \leftrightarrow (\mp1/2)$ transitions of nuclei located in sites $\it{3}$ experiencing an internal magnetic field (note the broad high-frequency shoulder). The satellites are unresolved due to inhomogeneities of the internal magnetic field and of the NQR frequencies. If the orientation of the internal field with respect to the electric field gradient is identical to that observed for $\rm La_2CuO_4$ [@10] the frequency of this transition enables us to estimate the quadrupole shift and to determine the Larmor frequency for this Cu site to be 45.2 MHz for $x=0.12$. It corresponds to an internal field of 40.1 kOe. Using the value of the hyperfine constant $\left|A_Q\right|=139 \pm{10}\ \rm{kOe}/\mu_B$ [@Imai] we estimate that in order to create such a field the effective magnetic moment of Cu at site $\it{3}$ has to be equal to 0.29 $\pm{0.02}\,\mu_B$, coinciding with the value obtained from neutron and muon experiments[@5; @13].
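The field and moment estimates above follow from simple arithmetic; a minimal check (the $^{63}$Cu gyromagnetic ratio of 11.285 MHz/T used below is a standard tabulated value, not quoted in the text):

```python
nu_larmor = 45.2          # MHz, Larmor frequency of site 3 at x = 0.12
gamma_over_2pi = 11.285   # MHz/T, 63Cu gyromagnetic ratio (tabulated value)
A_Q = 139.0               # kOe per Bohr magneton, hyperfine constant

B_int_kOe = nu_larmor / gamma_over_2pi * 10.0   # 1 T = 10 kOe
mu_eff = B_int_kOe / A_Q                        # effective Cu moment in mu_B
# B_int_kOe comes out close to 40.1 kOe and mu_eff close to 0.29 mu_B,
# matching the values quoted in the text
```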
Since quantitatively similar spectra were observed for each compound of the first group we believe that they contain the same elementary “bricks” of the phase under study. Discussing the relative weight of the different contributions, we note that the echo decay can be described in terms of stretched exponentials $\exp[-(2t/T_2)^{a}]$ with a different $T_2$ for each site. For $x=0.12$ the numerical fit of the measured echo decay at the frequencies of the different sites gives the same $a\simeq 0.5$ and $T_2^{(1)}=11\,\mu$sec; $T_2^{(2)}=8.8\,\mu$sec; $T_2^{(3)}= 5.5\,\mu$sec. Such a relaxation law is typical of relaxation via randomly distributed magnetic moments [@9], whereas the values of the relaxation rates depend on the location of these moments with respect to the different Cu sites. Extrapolating the corresponding signal intensities to $t=0$ we find the contributions of sites $\it{1}$, $\it{2}$, $\it{3}$ to be in the ratio (1:6:13).
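The extrapolation to $t=0$ can be illustrated with a short numerical sketch. With the exponent fixed at $a=0.5$, the stretched exponential becomes linear in $\sqrt{2t}$ after taking a logarithm, so a simple linear fit recovers both $T_2$ and the $t=0$ intensity. The noiseless synthetic data and the amplitude $S_0=100$ are illustrative; only $T_2=8.8\,\mu$s is taken from the text:

```python
import numpy as np

T2, S0, a = 8.8, 100.0, 0.5            # microseconds; S0 is illustrative
t = np.linspace(2.0, 20.0, 30)         # pulse separations in microseconds
S = S0 * np.exp(-(2.0 * t / T2) ** a)  # synthetic echo amplitudes

# with a = 0.5: ln S = ln S0 - sqrt(2t)/sqrt(T2), i.e. linear in sqrt(2t)
slope, intercept = np.polyfit(np.sqrt(2.0 * t), np.log(S), 1)
T2_fit = 1.0 / slope ** 2              # recovered T2
S0_fit = np.exp(intercept)             # extrapolated t = 0 intensity
```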
As for the origin of the sites $\it{1}$, it is possible to conclude that, on the one hand, they do not belong to the AF domains and, on the other hand, they are outside the stripes, since their effective charge is equal to zero. We assume that they correspond to defects terminating the stripes. From their relative number we estimate the average stripe length to be at least 6 lattice constants.
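The bookkeeping behind these estimates is simple; a minimal sketch using the site populations quoted above (the identification of site 2 with in-stripe Cu follows the text, while reading the 1:6 ratio directly as a length estimate is the stated assumption):

```python
n1, n2, n3 = 1.0, 6.0, 13.0   # relative weights of sites 1, 2, 3 at x = 0.12

# site 1 = stripe-terminating defects, site 2 = Cu inside the charged stripes:
min_stripe_length = n2 / n1            # in lattice constants
frac_magnetic = n3 / (n1 + n2 + n3)    # fraction of Cu in the AF regions
```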
It is important that the NQR frequencies for site $\it{2}$ (see Fig. 4) are almost the same for any $x$, indicating that for all Sr concentrations the stripes are equally charged. The effective charge in a stripe is near 0.18-0.19. This is larger than the average hole concentration ($x$) but less than the 0.5 expected for the ideal stripe picture [@1]. It means that the charge is distributed over a domain wall of finite thickness. Together with the above-mentioned intensity ratio this indicates that the real stripe picture differs from the ideal one. Another sign of this is the broadening of line $\it{2}$ due to a distribution of NQR frequencies. Its linewidth (Fig. 3) reflects the behaviour of the pinning: at $x=0.12$, where pinning is stronger, the narrowing due to the motion of stripes is weaker and the linewidth is larger. The decrease of the internal magnetic field with the deviation of $x$ from 0.12 reflects the suppression of magnetic order by holes penetrating into the AF domains.
The changes in intensity of the NQR spectra are due to the variation of the number of “bricks” in the compounds with different Sr content, which depends on the pinning strength. Our results indicate that the stripe phase is pinned at least on time scales shorter than $10^{-6}$ sec (the same conclusion was reached also by La NQR [@16a]).
The pinning for $0.08\leq x\leq 0.18$ is due to the buckling of the CuO$_2$ plane. It is connected with the CuO$_6$ octahedra tilts around the \[100\] and \[010\] axes by the angle $\Phi$, which for given Eu substitution is governed by the Sr content. It follows from Fig. 2a that the quantity of the pinned phase, which is proportional to the NQR signal intensity, is peaked at $x=0.12$. This indicates additional strong pinning due to the commensurability effect. Such pinning is not unique to the LTT phase (as buckling is). It is a manifestation of the planar character of the inhomogeneities of the charge and spin distributions. Together with the existence of three different Cu sites this gives an independent justification of the conventional stripe picture [@1], where the charges are uniformly distributed in rivers of holes across one Cu-chain separated by the bare three-leg ladders (we do not discuss here the possibility of two magnetic sites which may be deduced from the wide distribution of the internal field seen in Fig. 1).
Upon increasing $x$ beyond $x=0.18$ the tilt angle decreases below the critical value $\Phi_c\simeq 3.6^o$ [@10c] and depinning of the stripe phase takes place. Such behaviour occurs for the compounds with Sr concentrations $x>0.18$ belonging to the second group, for which the broad signals typical of the pinned phase disappear. The corresponding NQR spectra gradually transform into the narrow signal at higher frequencies, which for $x=0.24$ is shown in the inset to Fig. 1. The intensity of this line (proportional to the quantity of the unpinned stripe phase) is shown in Fig. 2b.
The analysis of this relatively narrow signal reveals only two different sites with $^{63}$Cu-NQR frequencies of 37.60 MHz and 39.82 MHz. Within 1% accuracy these frequencies coincide with those known at the same $x$ for the $\it{A}$ and $\it{B}$ sites in the LTO superconducting phase [@11], confirming that the LTT structure differs only in the directions of the CuO$_6$ octahedra tilts. The satellite $\it{B}$ is due to Cu having a localized hole in its nearest surroundings since, according to [@12; @12a], its NQR frequency has the additional positive shift $\delta\nu_Q\simeq 2.5\ \rm MHz$. The observed transformation of the NQR spectra (in comparison with those for $0.08\leq x\leq 0.18$) is due to the fast transverse motion of stripes in the depinned phase. As a result the internal magnetic field on the Cu nuclei is averaged out, and the effective fractional charge is homogeneously distributed over all Cu nuclei, giving the usual NQR frequencies. Such depinning leads to drastic changes in the magnetic properties: the echo signal decay for samples with $x>0.18$ becomes purely exponential ($T_2^{(2)}=35.4\,\mu$sec for the sample with $x=0.24$).
An important feature of the compounds with $0.08\leq x\leq 0.18$ is the possibility to observe the NQR line in a state without bulk superconductivity. Usually for $\rm La_{2-x}Sr_xCuO_4$ compounds at moderate doping, within the so-called spin glass phase between $x\simeq 0.02$ and $x\simeq 0.06$ (the bulk superconductivity threshold), the fast relaxation via the localized moments[@9] hinders the observation of the Cu NQR. In our case the Cu NQR of compounds moderately doped with Sr is observable even in the absence of bulk superconductivity. This indicates that we are dealing with an unusual correlated state in which the magnetic moments created upon Sr doping are not effective in relaxation. Note that at 1.3 K, for the $x=0.12$ compound, the entire stripe phase is pinned. This follows from a comparison of the number of Cu nuclei responsible for the NQR with that for the $x=0.24$ compound (inset to Fig. 1), which is due to 100% of the Cu nuclei. Both quantities were obtained by extrapolating the signals to $t=0$ and calculating the integrated intensities.
It is also possible to make some remarks about the superconducting properties. The main one is that the depinning point separates two different types of superconductivity. For $x\leq 0.18$ we are dealing with a weak Meissner effect, an increased London penetration depth, and a $T_c$ that increases with $x$ up to 0.18. Combining these facts with the absence of the narrow signal typical of the bulk superconducting phase (indicating that an impure LTO phase is absent) and with the suppression of relaxation via the magnetic moments of doped holes, one has arguments in favor of possible one-dimensional superconductivity along the charged rivers of the stripes - an issue which is widely discussed now [@14]. For $x> 0.18$ we have bulk superconductivity with a conventional London length, a typical NQR signal, and a decreasing $T_c(x)$. Such a crossover may be caused by the transverse motion of the stripes carrying superconducting currents, which gives rise to conventional superconductivity in the CuO$_2$ planes. Although possibly a simple coincidence, it happens when the doping $x$ is equal to the effective charge $n$ in a stripe.
In conclusion, we have carried out Cu NQR studies of Eu-doped $\rm La_{2-x}Sr_xCuO_4$. We demonstrated that at 1.3 K the ground state for moderate Sr content corresponds to the pinned stripe phase and that the pinning is enhanced at commensurability. Three nonequivalent copper positions in the CuO$_2$ planes were found. One of them, with a magnetic moment of 0.29$\mu_B$, is related to the AF correlated antiphase domains. From the behaviour of the NQR frequencies it follows that the effective charge of the domain walls separating these domains is almost independent of the Sr content $x$. The onset of bulk superconductivity at larger $x$ correlates with the dramatic transformation of the NQR spectra, indicating the depinning of the stripe phase.
The authors are grateful to H.Brom, A.Egorov and N.Garifyanov for valuable help and discussions. This research was supported by the Deutsche Forschungsgemeinschaft. The work of G.T. was supported in part by the State HTSC Program of the Russian Ministry of Sciences (Grant No. 98001) and by the Russian Foundation for Basic Research (Grant No. 98-02-16582).
[9]{}
V.J.Emery and S.A.Kivelson, Physica $\bf{209C}$, 597 (1993); J.Phys. Chem. Solids, Vol. $\bf{59}$, 1705 (1998)
D.Poilblanc and T.M.Rice, Phys. Rev., B $\bf{39}$, 9749 (1989); J.Zaanen and O.Gunnarson, Phys. Rev., B $\bf{40}$, 7391 (1989); J.Zaanen, J. Phys. Chem. Solids, $\bf{59}$, 1769 (1998)
A.H.Castro Neto and D.Hone, Phys. Rev. Lett. $\bf{76}$, 2165 (1996)
J.M.Tranquada $\it{et\ al.}$, Nature $\bf{375}$, 561, (1995)
J.M.Tranquada, J. Phys. Chem. Solids, $\bf{59}$, 2150 (1998)
H.A.Mook $\it{et\ al.}$, Nature $\bf{395}$, 580, (1998)
H.P.Fong $\it{et\ al.}$, cond-mat/9902262
A.W.Hunt $\it{et\ al.}$, Phys. Rev. Lett. $\bf{82}$, 4300 (1999)
I.M.Abu-Shikah $\it{et\ al.}$, cond-mat/9906310
B.Büchner $\it{et\ al.}$, Physica $\bf{185-189C}$, 903 (1991); Europhys. Lett., $\bf{21}$, 953 (1993)
K.Yoshimura $\it{et\ al.}$, J.Phys. Soc. Jpn., $\bf{58}$, 3057 (1989)
T.Shimizu, J.Phys. Soc. Jpn., $\bf{62}$, 772 (1993)
R.L.Martin, Phys. Rev. Lett., $\bf{75}$, 744 (1995)
T.Tsuda $\it{et\ al.}$, J. Phys. Soc. Jpn., $\bf{57}$, 2908 (1988)
T.Imai $\it{et\ al.}$, Phys. Rev. Lett., $\bf{70}$, 1002 (1993)
G.M.Luke $\it{et\ al.}$, Hyp. Int., $\bf{105}$, 113 (1997)
M.R.McHenry, B.G.Silbernagel and J.H.Wernick, Phys. Rev. B $\bf{5}$, 2958, (1972)
G.B.Teitel’baum $\it{et\ al.}$, JETP Letters, $\bf{67}$, 363 (1998)
B.Büchner $\it{et\ al.}$, Phys. Rev. Lett., $\bf{73}$, 1841 (1994)
K.Yoshimura $\it{et\ al.}$, Hyp. Int., $\bf{79}$, 867 (1993); S.Oshugi $\it{et\ al.}$, J. Phys. Soc. Jpn., $\bf{63}$, 2057 (1994)
P.C.Hammel $\it{et\ al.}$, Phys. Rev. B $\bf{57}$, R712 (1998)
J.M.Tranquada $\it{et\ al.}$, Phys. Rev. Lett., $\bf{78}$, 338 (1997)
---
abstract: 'A brief overview is given of what we know of the baryon and meson spectra, with a focus on what are the key internal degrees of freedom and how these relate to strong coupling QCD. The challenges, experimental, theoretical and phenomenological, for the future are outlined, with particular reference to a program at Jefferson Lab to extract hadronic states in which glue unambiguously contributes to their quantum numbers.'
author:
- 'M.R. Pennington'
title: Understanding the baryon and meson spectra
---
[ address=[Theory Center, Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA 23606, U.S.A.]{} ]{}
Revealing the workings of strong QCD
====================================
With eyes fixed on the wonders of the LHC at the TeV scale, one may question why the physics of the strong interaction at 1 GeV is of interest any longer. Is this not all ancient history? However, it is at the GeV scale that we already know the scalar sector that gives mass to most of the visible universe. A GeV is the energy scale at which we have discovered half the particles of a possible supersymmetric world. New strong interactions may await discovery, but QCD is the only strong interaction we already know. We should study it in as much detail as we can. After all it determines the properties of the nuclear matter of which we are made. It is the strength of this interaction that brings a complexity of phenomena that outshines those of perturbative electroweak physics. The richness of the tapestry of strong QCD is to be seen in the hadrons, their properties and structure, that it creates.
The paradigm for what can be learnt from spectroscopy is provided by atoms. Even if we did not have enough energy to separate electrons from the nucleus, we would know by studying the spectrum that though atoms are electrically neutral, they behave as though they are made of electrically charged objects held together by an electromagnetic force governed by the rules of Quantum Electrodynamics. In a similar way color neutral hadrons are built of constituents carrying color charge, bound by the rules of QCD. But what are these rules? While asymptotic freedom provides a well exploited simplification for hard scattering processes, it is the fact that over a distance of a fermi the interaction is strong that makes QCD so challenging, and why we look to experiment for guidance on how it really works. Strong coupling confines quarks and breaks chiral symmetry, and so defines the world of light hadrons. Quark confinement is reflected in the spectrum and properties of hadrons, and we can learn from what experiment teaches about these. We ask: what are the internal degrees of freedom of hadron states? The quark model, which was of course the seed from which the idea of QCD first germinated, suggests these are constituent quarks (and anti-quarks). But is that all there are? What is the role of glue? Do gluons just stick the quarks together, and nothing more?
It is in the spectrum of charmonium that we have a working template from which to judge complexity most readily. Below ${\overline D}D$ threshold it all appears simple. We have the tightly bound systems of $J/\psi$, $\psi'$, $\eta_c \, \cdots$, as given by non-relativistic potential models. Above the open charm threshold, we once thought the (almost) stable charmonia are replaced by states with 1-50 MeV widths decaying to ${\overline D}D$, ${\overline D}D^*$, ${\overline D^*}D^*$, ${\overline D_s}D_s^*, \cdots$, as their mass increases. What we find is that the states predicted by potential models are shifted by tens of MeV themselves: the decays affect their dynamics [@barnes-swanson; @wilson]. Hadronic decay channels are an essential degree of freedom. These not only shift predominantly ${\overline c}c$ states, but generate states that would not have existed without these hadron channels. The first discovery of a state of this type is the $X(3872)$, whose very existence is tied to the dynamics of the ${\overline D}^0 D^{*0}$ channel [@tornqvist]. More new states, a string of $X,\,Y$ and $Z$ states perhaps only exist because of their hadronic decays, sometimes these channels binding in molecular (or multiquark) configurations. As dynamically coupled channel models have long suggested [@vanbev-lutz], hadrons and their decays are intimately related. Only for ground states may one think of them as having minimal quark configurations.
What are the degrees of freedom in each hadron?
===============================================
Baryons have a special place in the study of hadrons, as their structure is most obviously related to the color degree of freedom. While a color singlet quark-antiquark system is basically the same however many colors there are, the minimum number of quarks in a baryon is intimately tied to the number of colors. If $N_c$ were some other number than 3, the world would be quite different. Recognizing the flavor pattern of the ground state baryons was the key step in the development of the quark model. Consequently, this model with three independent quark degrees of freedom [@isgur; @capstick] has naturally served as the paradigm for what we expect the spectrum of excited baryons, both nucleons and $\Delta$’s, to look like too. While experiment has long confirmed the lower lying states, many of the heavier ones seemed to be [*missing*]{} above 1.6 GeV.
If baryons were diquark–quark systems, as noted more than 40 years ago [@lichtenberg], the number of states would be restricted and in fact be very like that observed until a year or so ago. However, most of the early evidence on the baryon spectrum was accumulated from $\pi N$ scattering, and decays into the same channel. Perhaps the [*missing*]{} states are just [*dark*]{} in these channels, and "shine" most in $\pi\pi N$ and $KY$. Consequently, the experimental program has more recently concentrated on these channels, which form an increasing part of the $\pi N$ total cross-section as the energy goes up.
![The imaginary parts of the $I=1/2$ $\pi N\to\pi N$ partial wave amplitudes, labeled by the quantum numbers $L_{2I\,2J}\,=\, D_{13}$ and $P_{11}$ from the SAID analysis [@said] as functions of the $\pi N$ c.m. energy, $E$. The arrows mark the real part of the resonance pole positions.](fig1){height=".24\textheight"}
But first, how do we identify states in the spectrum of hadrons? Since states have definite quantum numbers (spin, parity, isospin, etc.), we have to decompose the observed data, integrated and differential cross-sections, into partial waves that specify these quantum numbers. To do this completely for processes with spin requires measurements with polarized beams and polarized targets. Having separated the partial waves, one finds it is only for the lowest mass state with given quantum numbers that the partial wave looks anything like a simple Breit-Wigner resonance; see, as an example, the $D_{13}$ wave in Fig. 1 [@said]. Higher mass states are much less obvious. For instance, in the $P_{11}$ wave of Fig. 1, while the $N^*(1440)$ (the Roper) appears as a bump in the imaginary part (and modulus), the higher mass $N^*(1710)$ can barely be seen in the same $\pi N\to\pi N$ amplitude. It is highly inelastic. A state in the spectrum is then only identifiable by its pole in the complex energy plane on some nearby unphysical sheet. It is the poles that are the universal outcome of any modern amplitude analysis, as recognized by the PDG [@pdg2012].
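The connection between a resonance bump and its pole can be made concrete with a minimal Breit-Wigner sketch (a generic illustration in Python; the mass and width are round, Roper-like numbers chosen for the example, not values from the SAID analysis): on the real energy axis the amplitude shows a bump, while the very same function has a pole at $E = M - i\Gamma/2$ in the complex plane.

```python
# Minimal sketch: a Breit-Wigner partial-wave amplitude shows a bump on the
# real axis, and the same function has a pole in the lower half of the
# complex energy plane. M and Gamma (GeV) are illustrative round numbers.

def bw_amplitude(E, M=1.5, Gamma=0.3):
    """Nonrelativistic Breit-Wigner amplitude f(E) = (Gamma/2)/(M - E - i*Gamma/2)."""
    return (Gamma / 2) / (M - E - 0.5j * Gamma)

# On the real axis the imaginary part peaks at E = M (the "bump") ...
energies = [1.0 + 0.01 * i for i in range(101)]
peak = max(energies, key=lambda E: bw_amplitude(E).imag)

# ... while the amplitude diverges as E approaches the pole at M - i*Gamma/2,
# which is how a state is identified in an amplitude analysis.
pole = 1.5 - 0.15j
magnitude_near_pole = abs(bw_amplitude(pole + 1e-6))
```

For the lowest state in a wave this bump and its phase motion through $90^\circ$ are visible directly; for highly inelastic states like the $N^*(1710)$ only the pole remains a model-independent signature.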
By now a vast amount of data has been accumulated, and is still being accumulated, on a wide range of baryonic processes, most recently initiated by real and virtual photon beams. The presence of many decay channels, and the large widths to each of these, demands that coupled-channel amplitude analyses be performed. This requires an abundant supply of input data if the richness of the spectrum is to be exposed. Thus from JLab [@jlab-photo; @jlab-data] and from ELSA [@elsa-photo], we have thousands of data points on $\gamma p\to \pi^0 p$ and $KY$, differential cross-sections and polarizations. These feature prominently in the latest analyses.
The most ambitious analysis is that by the Excited Baryon Analysis Center (EBAC) team led by Harry Lee [@ebac]. Not only does this fit a very wide range of data on baryonic channels, but it does this in terms of an effective field theory of hadronic interactions developed by Sato and Lee [@ebac]. Their calculational procedure ensures unitarity is fulfilled, and their Lagrangian provides a framework in which to consider the nature and structure of each resonant state, and its "core" revealed. "Bare" or "core" states are those with no decays [@ebacpoles]. While for heavy quark systems one might reasonably define such bare states as those that arise in a potential model for charmonium or bottomonium, for light quark systems the model template is not so obvious. Here it is the Sato-Lee Lagrangian. How are such "bare" states connected to QCD? In fact, are these connected to QCD at all? Perhaps there is no limit of QCD in which the hadronic decays of bound states can be turned off. Notwithstanding such interpretations, the results for the $N^*$ and $\Delta^*$ spectra of EBAC up to 1800 MeV have now been finalized [@ebac2], and are shown in Fig. 2. Their analysis of the detailed nature of these states is to come.
![$N^*$ and $\Delta^*$ spectra, labeled by their spin and parity as $J^P$ along the abscissa, and the real part of the resonance pole positions along the ordinate, from the EBAC [@ebac2] and Bonn-Gatchina [@bn-ga] analyses. For the EBAC analysis all the states have $3^*-4^*$ provenance, while Bonn-Gatchina also include those with $1^*-2^*$ ratings, according to the legend shown. ](fig2bw){height=".47\textheight"}
A more computationally flexible amplitude analysis program has been carried out by the Bonn-Gatchina team [@sarantsev]. They fit an even more extensive range of multi-hadron final states and so are able to present results up to a higher energy [@bn-ga]. Their states up to 2.1 GeV are shown in Fig. 2 too, with their assignment of 1-4 star confidence [@pdg2012]. The EBAC and Bonn-Gatchina spectra and couplings are very similar, but not identical. The larger mass range fitted includes the JLab data on channels such as $\gamma p\to K Y$ [@jlab-data], and this has enabled a number of the "dark" or "missing" baryons at last to be revealed, like the $1/2^+$ $N^*(1880)$ and the $3/2^+$ $N^*(1900)$ [@bn-ga].
Experiments on $\gamma p \to K^+ \Lambda$ with polarized beam and polarized target, together with the spin information from the weak decay $\Lambda \to \pi N$, allow more observables to be measured than the minimum needed to determine all the independent amplitudes (up to an overall phase) [@tiator; @sandorfi]. These [*over-complete*]{} experiments hold out the prospect of checking that the partial wave solution that results in the spectrum shown in Fig. 2 is indeed the correct one. The development of polarized targets, such as FROST and HDice at JLab [@jlab-targets], have allowed neutron scattering data to be determined too. These results are eagerly awaited as they are an essential component in securing the partial wave solution and its isospin decomposition.
Fig. 2 only shows the spectrum with zero strangeness. Within a simple quark model picture (which we have stressed may not be a realistic guide for highly excited states with their complex multi-hadron decays), baryons form flavor multiplets. Consequently, searching for baryons in the $\Sigma^*,\, \Lambda^*,\,\Xi^*$ families is a key part of the future experimental program. Such states have fewer (or better separated) hadronic decay channels and so may be narrower and more easily identifiable.
{height=".28\textheight"}
Such results will teach us the Fock space decomposition of each resonant state. All but the ground states are inevitably complicated. As an example, the Roper, the $N^*(1440)$, cannot just be a three quark state, as depicted in Fig. 3a. It must have an explicit $\pi N$ component in its Fock space, Fig. 3b, since it is through this component (amongst others) that it decays. Its Fock space might then be thought to include a nucleon and a pion (or even a multi-pion) cloud (Fig. 3c), but might also contain a pentaquark configuration, like that in Fig. 3d. Dynamical models, and eventually QCD, will tell us what the proportions of these components are for each physical state. Such compositions are also probed experimentally in photo-transition processes. Once the data on these from the final running of CEBAF at 6 GeV are analyzed appropriately in terms of pole quantities [@mokeev], we may have a better idea.
How is the spectrum of Fig. 2 related to QCD? The lattice provides the most consistent theoretical connection. The four-dimensional world is modeled as a discrete space-time to make the problem computationally feasible. The baryon spectrum computed most recently [@edwards] reveals a pattern very like that of the quark model: certainly not that of a pointlike diquark–quark system. The "missing" states are there. However, one essential ingredient is clearly [*missing*]{} in such calculations. While continuum hadronic effects are included, they are not yet those of the physical world. Though great computational strides have been made, the [*up*]{} and [*down*]{} quark masses are 8-15 times their physical value, and so the pion mass is still 3 or 4 times too heavy. Consequently, the Fock space decomposition of the excited baryons is not physical. In terms of the pictures in Fig. 3, components (b) and (c) are much, much smaller than those of the real world, and so it is perhaps not surprising that the quark model-like Fig. 3a dominates. However, calculational progress towards a 140 MeV pion mass continues.
A continuum approach to QCD with physical mass quarks is provided by the solution of the Schwinger-Dyson/Bethe-Salpeter (SD/BS) system of equations [@sdbsreviews]. There has been steady progress over decades in solving this complex system self-consistently. However, speedier computations are made possible by modeling the gluon by a simple contact interaction and presuming that baryons are bound states of a quark with an extended (not pointlike) diquark. Detailed calculations of the $N^*$ spectrum have then been made [@cdr]. These include no decays and so no hadronic components. Amusingly, there is a "bare" $P_{11}$ state that can be identified with the EBAC "core" state [@ebacpoles]. The physical Roper is $\sim 500$ MeV lighter. As with the more ambitious SD/BS approach treating baryons as full three quark systems [@eichmann], these calculations must include decays if a meaningful comparison of excited states with experiment is to be achieved.
Mesons: is this where glue is to be found?
==========================================
We now turn to mesons, first in the quark model. The ${\overline q}q$ pair can have spin, $S_{qq}$, equal to 0 or 1. When combined with units of orbital angular momentum $L_{qq}$, they make a series of flavor multiplets, with each unit of $L_{qq}$ adding $\sim 700$ MeV of mass. The ground states with $L_{qq}=0$ have $J^{PC}\,=\,0^{-+}$ and $1^{--}$ quantum numbers. While the light pseudoscalars, being the Goldstone bosons of chiral symmetry breaking, have atypical dynamics, the vector multiplet gives the ideally mixed paradigm, replicated by the mesons with higher $J=L_{qq}+1$.
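The quantum number assignments above follow the standard quark-model rules $P=(-1)^{L_{qq}+1}$, $C=(-1)^{L_{qq}+S_{qq}}$, and $|L_{qq}-S_{qq}|\le J\le L_{qq}+S_{qq}$. A short sketch (a generic illustration of these textbook rules, not tied to any dataset) enumerates the allowed $J^{PC}$ and shows why combinations such as $1^{-+}$ cannot be reached by any ${\overline q}q$ configuration:

```python
# Enumerate the J^PC combinations available to a neutral qbar-q meson:
# P = (-1)^(L+1), C = (-1)^(L+S), with |L-S| <= J <= L+S.
def qqbar_jpc(max_L=6):
    allowed = set()
    for L in range(max_L + 1):
        for S in (0, 1):
            P = (-1) ** (L + 1)
            C = (-1) ** (L + S)
            for J in range(abs(L - S), L + S + 1):
                allowed.add((J, P, C))
    return allowed

jpc = qqbar_jpc()

# L=0 ground states: pseudoscalar (0-+) and vector (1--), as in the text.
ground = {(0, -1, +1), (1, -1, -1)} <= jpc

# Spin-exotic combinations never occur for any L, S: e.g. 1-+, 0+-, 2+-.
exotics = [(1, -1, +1), (0, +1, -1), (2, +1, -1)]
none_allowed = not any(e in jpc for e in exotics)
```

Any $J^{PC}$ outside this set ($0^{--}$, $0^{+-}$, $1^{-+}$, $2^{+-}$, ...) is spin-exotic: observing such a state guarantees degrees of freedom beyond a single quark-antiquark pair.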
The scalar ${\overline q}q$ multiplet is part of the $L_{qq}=S_{qq}=1$ family. There are at least 19 scalars below 2 GeV [@pdg2012], far more than can fit into one nonet [@mrp-scalars]. It was Jaffe, in his seminal work on multiquark states [@jaffe], who recognized that the scalars below 1 GeV might be tetraquark states, while the more conventional ${\overline q}q$ $\,0^{++}$ mesons would sit up close to their $2^{++}$ companions around 1.3 GeV. Such an interpretation naturally explains how the isosinglet $f_0(980)$ and isotriplet $a_0(980)$ can be degenerate in mass and both couple strongly to ${\overline K}K$: each is a $\overline{sn}sn$ state, with $n = u, d$. However, recent studies [@menu2009; @wilson2], making use of the fine energy binning possible with BaBar data [@marco], have shown that the $f_0(980)$ is dominated by long range ${\overline K}K$ components, rather than a more tightly bound tetraquark configuration. Similarly, the $\sigma$ and the $\kappa$ seem to be dominated by $\pi\pi$ and $\pi K$ components: their masses depend far more on their couplings to these channels than on any simple quark mixing scheme. Indeed, long ago the dynamical calculation by van Beveren and Rupp [@vanbev] highlighted how scalar ${\overline q}q$ seeds up at 1.3 GeV can give rise to two multiplets of hadrons, once their strong couplings to di-meson channels are included: an explicit example of dynamical coupled-channel effects.
![The isovector meson spectrum from the lattice calculations of Dudek [*et al.*]{} [@dudek] with $m_\pi\,=\,396$ MeV, arranged according to their $J^{PC}$ quantum numbers. Those found with ${\overline q}q$ operators are shown as black blocks, the size of which denote the statistical uncertainties. States from ${\overline q}qg$ operators are shown as grey blocks. Some of these have spin-exotic quantum numbers. These are shown on the right.](fig4bw){height=".45\textheight"}
Ever since the QCD Lagrangian was written down, it was recognized that there may exist hadrons with more complicated configurations than those of the simple quark model: states in which gluons play a role in determining their quantum numbers. At first, calculations and experimental searches were for states made purely of glue. While many sightings were claimed, they never stood up to challenge [@mrp-lund]. Indeed, it was quickly realized that any meson made of glue ([*viz. glueballs*]{}) must couple to quarks in order to decay into pions and kaons, and so mixing with these quark configurations is inevitable and could easily be large. Thus in the scalar sector discussed above, several states between 1.3 and 1.8 GeV might have sizeable admixtures of glue, [*viz.*]{} $gg$, without any being predominantly a glueball. That is a detail of dynamics that we do not yet understand, except in unrealistically simple mixing schemes. Consequently, attention has turned to meson quantum numbers other than those of the vacuum.
Lattice computations of the ${\overline q}q$ spectrum are approaching a maturity that includes all the states we know of from experiment, as shown in the first two columns of Fig. 4. Displayed there are the results of the present state-of-the-art computations for isovector mesons from Dudek [*et al.*]{} [@dudek]. By using an inventive and ingenious set of operators, they have also been able to compute the spectrum of states that are ${\overline q}qg$. The grey blocks in Fig. 4 denote these hybrid states. On the left are seen hybrids with conventional quantum numbers, where exciting [*glue*]{} is found to require an extra $\sim 800$ MeV of mass. In addition, states with spin-exotic quantum numbers appear on the right of Fig. 4. The lightest is that with $J^{PC}=1^{-+}$, as had long been expected. At a pion mass of 400 MeV, this hybrid is found to lie up around 2 GeV. Of course, a physical mass pion is expected to affect this: in general making it lighter and broader.
Possible states with $1^{-+}$ quantum numbers were claimed in a series of searches starting more than 35 years ago with GAMS [@gams], then (as shown in Fig. 5) BNL-E852 [@chung] and VES [@ves] ten years later. All find enhancements in the relevant partial wave. However, these signals only constitute a few percent of the integrated cross-section, and inevitably have $1^{-+}$ waves with sizeable uncertainties [@dzierba]. Consequently, these experiments were never able to show that the underlying partial waves were resonant with a pole in the complex energy plane. The phase variation observed was always rather weak.
![On the left is the $J^{PC}\,=\,1^{-+}$ signal from BNL-E852 data [@chung] on $\pi N\to (3\pi)N$. The grey histogram is the calculated "leakage" into this channel from other partial waves. The enhancement at $\sim 1.4$ GeV is thereby explained [@dzierba], but leaves a clean $\sim 1.6$ GeV enhancement. The graph on the right displays the VES results [@ves] on $\,\eta\pi\,$ and $\,\eta'\pi\,$ production as a function of the di-meson mass in $\,\pi^- Be\,$ collisions at 28 GeV$/c$, again with enhancements at 1.4 and 1.6 GeV, respectively. Whether any of these is resonant is unclear.](fig5a "fig:"){height=".24\textheight"} ![On the left is the $J^{PC}\,=\,1^{-+}$ signal from BNL-E852 data [@chung] on $\pi N\to (3\pi)N$. The grey histogram is the calculated "leakage" into this channel from other partial waves. The enhancement at $\sim 1.4$ GeV is thereby explained [@dzierba], but leaves a clean $\sim 1.6$ GeV enhancement. The graph on the right displays the VES results [@ves] on $\,\eta\pi\,$ and $\,\eta'\pi\,$ production as a function of the di-meson mass in $\,\pi^- Be\,$ collisions at 28 GeV$/c$, again with enhancements at 1.4 and 1.6 GeV, respectively. Whether any of these is resonant is unclear.](fig5b "fig:"){height=".245\textheight"}
A much more ambitious program is that of COMPASS@CERN. This studies multi-hadron production at small momentum transfers with a 192 GeV pion beam on nucleon and nuclear targets, in particular studying $\pi\eta'$ and $3\pi$ final states. The $\pi\eta'$ data show a significant broad enhancement in $1^{-+}$ waves around 1600 MeV, but with little relative phase variation compared with the reference $2^{++}$ wave with its pronounced (conventional ${\overline q}q$) $\,a_2(1320)$ signal [@compass1]. In the $3\pi$ channel, the first runs in 2004 showed a very crude enhancement in $1^{-+}$ waves, which was fitted to a Breit-Wigner form with doubtful significance [@compass0]. However, now COMPASS are studying 96 million events in the $3\pi$ channel. With these statistics, one has to have a good understanding of the reaction mechanisms involved: simple Pomeron exchange with possibly important Deck effect backgrounds. At last report the data require at least 52 partial waves to obtain a stable set. Only the dominant $2^{++}$ and $1^{+-}$ waves have been shown in talks. This meeting will elaborate more on this [@compass2]. However, further work is needed to establish that there really is a $1^{-+}$ hybrid to add to the spectrum of physical hadrons.
A complementary effort is underway at Jefferson Lab with the installation of magnets to increase the CEBAF energy to 12 GeV, a photon beam line and new detectors. A prime motivation for this upgrade is the search for hybrid mesons in all their quantum numbers, $J^{PC}$ and flavor: not just $1^{-+}$, but the $0^{+-}$, $2^{+-}$, [*etc.*]{}, expected at higher mass (Fig. 4). GlueX is the new detector dedicated to studying multi-hadron final states created by an 11 GeV polarized photon beam on a proton target [@gluex]. This is due to start taking data in 2016. Statistics comparable to COMPASS are expected, [*i.e.*]{} $10^8$ events. With wonderful angular coverage, this should allow small partial waves to be disentangled. Complementary (and occasionally competing) data on the low multiplicity final states will be taken by the CLAS12 detector at JLab too.
The task of extracting small signals with certainty is a real challenge to experiment, phenomenology and theory. One must go beyond the simple isobar picture that was good enough when one had even $10^4$ events. However, in the era of precision data one needs precision analyses too. This demands detailed knowledge of the reaction mechanisms involved, and importantly requires all the contributing final state interactions of $\pi$’s, $K$’s and $N$’s to be well-represented in terms of amplitudes that respect all the key properties of scattering theory. This requires a pooling of world expertise on partial wave analyses and $S$-matrix technology to ensure multichannel unitarity is fulfilled [@ASI]. We have to learn from the experience of EBAC, Bonn-Gatchina, COMPASS and others, working with all the relevant analysis and experimental groups in the world. This will not just underpin the effort at JLab; the same technology is required for comprehensive analyses of BESIII, LHCb and PANDA data. Steps are under way to bring this together. It is only by such collective efforts that we can be sure that signals of hybrids at the few percent level can be reliably extracted, and the poles of the $S$-matrix determined. It is not enough to confirm some putative $\pi_1(1600)$ signal (suggested by VES and BNL-E852); we must find the whole multiplet structure. Only then can we know that such "exotic" states are really hybrids of quarks and glue, and not states with additional ${\overline q}q$ pairs, or hadronic molecules. The flavor multiplet structure is the guide [@bali]. An understanding of the role of glue in QCD is the prize.
Unless some real surprises happen, these experiments are likely to be the last in light hadron spectroscopy. If we are going to claim a real understanding of the detailed consequences of confinement, we had better get this right. That is the challenge for the next 10-15 years.
It is a pleasure to thank the CIPANP organizers, particularly Wim van Oers and Martin Comyn, for inviting me to give this talk. This work was authored in part by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
[9]{} T. Barnes and E. S. Swanson, Phys. Rev. C[**77**]{}, 055206 (2008).
M. R. Pennington and D. J. Wilson, Phys. Rev. D [**76**]{}, 077501 (2007) \[arXiv:0704.3384 \[hep-ph\]\]. See, [*for instance*]{}, N. A. Tornqvist, Phys. Lett. B[**590**]{}, 209 (2004).
See, [*for instance*]{}, E. van Beveren, C. Dullemond and T. A. Rijken, Z. Phys. C[**19**]{}, 275 (1983);
M. F. M. Lutz and E. E. Kolomeitsev, Nucl. Phys. A[**755**]{}, 29 (2005).
S. Capstick and N. Isgur, Phys. Rev. D [**34**]{}, 2809 (1986). S. Capstick and W. Roberts, Phys. Rev. D [**49**]{}, 4570 (1994) \[arXiv:nucl-th/9310030\]. D. B. Lichtenberg and L. J. Tassie, Phys. Rev. [**155**]{}, 1601 (1967);
D. B. Lichtenberg, L. J. Tassie and P. J. Keleman, Phys. Rev. [**167**]{}, 1535 (1968). R. A. Arndt, W. J. Briscoe, M. W. Paris, I. I. Strakovsky and R. L. Workman, Chin. Phys. C [**33**]{},
1063 (2009) \[arXiv:0906.3709 \[nucl-th\]\]. J. Beringer [*et al.*]{} \[Particle Data Group\], Phys. Rev. D [**86**]{}, 010001 (2012). M. Aghasyan [*[et al.]{}*]{} \[CLAS\], Phys. Lett. B[**704**]{}, 397 (2011).
M. E. McCracken [*et al.*]{} \[CLAS\], Phys. Rev. C[**81**]{}, 025201 (2010);
B. Dey [*et al.*]{} \[CLAS\], Phys. Rev. C[**82**]{}, 025202 (2010). N. Sparks [*et al.*]{} \[CBELSA/TAPS\], Phys. Rev. C[**81**]{}, 065210 (2010);
V. Crede [*et al.*]{} \[CBELSA/TAPS\], Phys. Rev. C[**84**]{}, 055203 (2011). A. Matsuyama, T. Sato and T. S. Lee, Phys. Rept. [**439**]{}, 193 (2007) \[arXiv:nucl-th/0608051\]. H. Kamano, S. X. Nakamura, T. S. Lee and T. Sato \[EBAC\], Phys. Rev. C [**81**]{}, 065207 (2010) \[arXiv:1001.5083 \[nucl-th\]\]. H. Kamano and T.-S. H. Lee \[EBAC\], AIP Conf. Proc. [**1432**]{}, 74 (2012);
H. Kamano \[EBAC\], AIP Conf. Proc. [**1388**]{}, 396 (2011) \[arXiv:1103.2693 \[nucl-th\]\],
arXiv: 1206.3374 \[nucl-th\]. A. V. Sarantsev, Acta Phys. Polon. Supp. [**3**]{}, 891 (2010). A. V. Anisovich, R. Beck, E. Klempt, V. A. Nikonov, A. V. Sarantsev and U. Thoma,
Eur. Phys. J. A[**48**]{} 15 (2012) \[arXiv:1112.4937 \[hep-ph\]\], Eur. Phys. J. A[**48**]{} 88 (2012)
\[arXiv:1205.2255 \[nucl-th\]\], Phys. Lett. B[**711**]{}, 167 (2012) \[arXiv: 1116.6150 \[hep-ex\]\].
L. Tiator, AIP Conf. Proc. [**1432**]{}, 162 (2012). A. M. Sandorfi, S. Hoblit, H. Kamano and T. S. Lee, J. Phys. G [**38**]{}, 053001 (2011)
\[arXiv:1010.4555 \[nucl-th\]\].
see http://userweb.jlab.org/ keith/Frozen/Frozen.html
www.jlab.org/Hall-B/HDIce/talks/g14[\_]{}Lab[\_]{}Users[\_]{}mtg[\_]{}Jun5[\_]{}12.pdf
I. G. Aznauryan [*et al.*]{} \[CLAS\], Phys. Rev. C[**80**]{}, 055203 (2009);
V. I. Mokeev, I. G. Aznauryan and V. D. Burkert, arXiv:1109.1294 \[nucl-ex\];
I. G. Aznauryan, V. D. Burkert and V. I. Mokeev, AIP Conf. Proc. [**1432**]{}, 68 (2012),
\[arXiv:1108.1125 \[nucl-ex\]\]. R. G. Edwards, J. J. Dudek, D. G. Richards and S. J. Wallace, Phys. Rev. D[**84**]{}, 074508 (2011),
AIP Conf. Proc. [**1432**]{}, 33 (2012).
A. Bashir, L. Chang, I. C. Cloet, B. El-Bennich, Y.-X. Liu, C. D. Roberts and P. C. Tandy,
Commun. Theor. Phys. [**58**]{}, 79 (2012);
P. C. Tandy, AIP Conf.Proc. 1374 (2011) 139-144 \[arXiv:1011.5250 \[nucl-th\]\].
G. Eichmann, I. C. Cloet, R. Alkofer, A. Krassnigg and C. D. Roberts, Phys. Rev. C [**79**]{},
012202 (2009);
I. C. Cloet, C. D. Roberts and D. J. Wilson, AIP Conf. Proc. [**1388**]{}, 121 (2011). G. Eichmann, R. Alkofer, A. Krasnigg and D. Nicmorus, Phys. Rev. Lett. [**104**]{}, 201601 (2010);
H. Sanchis-Alepaz, G. Eichmann, S. Villalba-Chavez and R. Alkofer, Phys. Rev. D[**84**]{}, 096003 (2011). M. R. Pennington, AIP Conf. Proc. [**1257**]{}, 27 (2010) \[arXiv:1003.2549 \[hep-ph\]\]. R. L. Jaffe, Phys. Rev. D [**15**]{}, 267 (1977). M. R. Pennington, AIP Conf. Proc. [**1432**]{}, 176 (2012) \[arXiv:1109.3690 \[nucl-th\]\]. M. R. Pennington and D. J. Wilson, [*in preparation*]{}.
B. Aubert [*et al.*]{} \[BaBar\], Phys. Rev. D[**79**]{}, 032003 (2009);
P. del Amo Sanchez [*et al.*]{} \[BaBar\], Phys. Rev. D[**83**]{}, 052001 (2011). E. van Beveren, T. A. Rijken, K. Metzger, C. Dullemond, G. Rupp and J. E. Ribeiro, Z. Phys. C [**30**]{}, 615-620 (1986). M. R. Pennington, "Glueballs: the naked truth", Proc. Workshop on Photon Interactions and Photon
Structure, Lund, Sweden, Sept. 1998 (ed. G. Jarlskog and T. Sjostrand; pub. Lund, 1999) pp. 313-328.
J. J. Dudek, R. G. Edwards, M. J. Peardon, D. G. Richards and C. E. Thomas,
Phys. Rev. D[**82**]{}, 034508 (2010).
D. Alde [*et al.*]{} \[GAMS\], Phys. Lett. [**205B**]{}, 397 (1988).
S-U. Chung [*et al.*]{} \[BNL-E852\], Phys. Rev. D[**60**]{}, 092001 (1999) \[hep-ex/9902003\]. G. M. Beladidze [*et al.*]{} \[VES\], Phys. Lett. B[**313**]{}, 276 (1993). A. R. Dzierba [*et al.*]{}, Phys. Rev. D[**67**]{}, 094015 (2003). B. Grube \[COMPASS\], PoS HQL2010,034 (2011) \[arXiv: 1011.6615\[hep-ex\]\];
F. Nerling \[COMPASS\], PoS EPS-HEP2011, 303 (2011) \[arXiv: 1111.0259 \[hep-ex\]\]. M. G. Alekseev [*et al.*]{} \[COMPASS\], Phys. Rev. Lett. [**104**]{} 241803 (2010) \[arXiv:1001.4654\[hep-ex\]\]. F. Haas \[COMPASS\], [*these proceedings*]{}. See: http://www.gluex.org See, [*for instance*]{}, lectures at the Jefferson Lab Advanced Study Institute on [*Techniques for Amplitude*]{}
[*Analysis*]{}, Williamsburg, June 2012, http://www.jlab.org/conferences/asi2012 To answer a question from Gunnar Bali at this conference.
---
abstract: 'First principles total-energy pseudopotential calculations have been performed to investigate STM images of the (110) cross-sectional surface of Mn-doped GaAs. We have considered configurations with Mn in interstitial positions in the uppermost surface layers, with Mn surrounded by As (Int$_{As}$) or Ga (Int$_{Ga}$) atoms. The introduction of Mn on the GaAs(110) surface results in strong local distortions of the underlying crystal lattice, with variations of interatomic distances of up to 3% with respect to the unrelaxed ones. In both cases, the surface electronic structure is half-metallic (or *nearly* half-metallic) and depends strongly on the local Mn environment. The atoms near Mn show an induced spin polarization, resulting in a ferromagnetic Mn–As and an antiferromagnetic Mn–Ga configuration. The simulated STM images show very different patterns for the imaged Mn atom, suggesting that the two configurations could easily be discerned by STM analysis.'
author:
- 'A. Stroppa'
title: '**Structural, electronic and magnetic properties of Mn-doped GaAs(110) surface**'
---
Introduction
============
The easy integration of ferromagnetism with semiconducting properties in the same host material, provided by Diluted Magnetic Semiconductors (DMSs), has been considered an important breakthrough in semiconductor microelectronics. This is mainly due to the unprecedented opportunity to create a new class of devices that would exploit the spin degree of freedom to process, transfer, and store information. *Spintronics* is the emergent technology that exploits the quantum propensity of electrons to spin as well as making use of their charge state.[@Spintr1; @Spintr2; @Spintr3]
The discovery of ferromagnetism in Mn-doped GaAs has become a milestone in the spintronics revolution: Mn$_{x}$Ga$_{1-x}$As alloys are directly compatible with existing GaAs technology, enabling the practical realization of device structures combining ferromagnetic and nonmagnetic layers.[@reviewOhno]
There are several possibilities for a single Mn atom to be incorporated in GaAs. It can occupy either the cation site (substitutional Mn, Mn$_{Ga}$) or the anion site (Mn$_{As}$); it can also occupy interstitial sites, as reported by K.M. Yu *et al.*[@interst5] Further, other structural defects could be present in the alloy, such as the As antisite (As$_{Ga}$). The fraction of Mn dopants occupying one or another location depends on the growth conditions and techniques.[@defectth]
The Curie temperature (T$_{c}$) is a key parameter in designing room-temperature spintronic devices. The highest T$_{c}$ reached in Mn$_{x}$Ga$_{1-x}$As until a few years ago was 110 K,[@reviewOhno] i.e. rather low for practical technological purposes. It has been shown that interstitial Mn atoms play a crucial role in the magnetic properties of the samples.[@annea1; @PRLMN] Intense experimental and theoretical efforts have been pursued in recent years to understand the physics of this material and to raise the Curie temperature.
Recently, a new method has been proposed as an alternative to the growth by Molecular Beam Epitaxy (MBE) of bulk Mn$_{x}$Ga$_{1-x}$As random alloys: the dopant atoms are incorporated in the sample so as to give rise to a Dirac $\delta$-function concentration profile (with locally high dopant concentration) along the growth direction ($\delta$-doping).[@Delta] Remarkably, an important enhancement of T$_{c}$ is obtained in these $\delta$-doped samples (the highest T$_{c}$ obtained so far with a $\delta$-doped sample is 250 K).[@HighTC] Very recently, Mn $\delta$-doped GaAs samples along the (001) direction have also been grown at the TASC Laboratory in Trieste.[@Modesti]
Therefore, clarifying the site geometry and the local environment of impurities in $\delta$-doped GaAs:Mn should shed light on the understanding and optimization of the magnetic properties of the system. From the experimental point of view, this study can be pursued with cross-sectional Scanning Tunneling Microscopy (XSTM): the Mn-doped GaAs samples are cleaved along the natural (110) cleavage plane and then analyzed by STM.
In recent years, several XSTM studies of Mn$_{x}$Ga$_{1-x}$As alloys have been performed, but the local environment (and preferential geometric site) of the defects has not been clarified yet.[@Mikkelsen; @Sullivan; @yakunin1; @yakunin2] From the theoretical point of view, the existing simulated XSTM images have mainly focused on the characterization of substitutional impurities in the uppermost surface layers, while a complete and detailed investigation of interstitial impurities in the uppermost surface layers is still lacking, preventing a full interpretation of the newly acquired XSTM images.
Therefore, stimulated by the recent growth and subsequent XSTM analysis of Mn $\delta$-doped GaAs samples at TASC,[@Modesti] we have performed density functional calculations to investigate the structural, electronic and magnetic properties of a single Mn dopant, focusing our attention on the *interstitial* surface configurations of the impurity. We have also simulated the corresponding STM images.
This paper is organized as follows: in the next section we describe the computational method; in Sect. 3 we present our results for the structural, electronic and magnetic properties; in Sect. 4 we discuss our results for the XSTM images; finally, in Sect. 5 we draw our conclusions.
Computational details
=====================
Our study has been performed within the Density Functional Theory (DFT) framework in the Local Spin Density Approximation (LSDA) for the exchange-correlation (XC) functional, using state-of-the-art first-principles pseudopotential self-consistent calculations as implemented in the ESPRESSO/PWscf code.[@pwscf] We used the scheme of Ceperley and Alder[@PZ] (with the parametrization of Perdew and Zunger[@PZ1]) for the XC functional. The Mn atom is described by an ultrasoft (US) pseudopotential (PP),[@Vander] while norm-conserving PPs have been used for the Ga, As and H atoms.
Test calculations have shown that a kinetic energy cutoff for the wave functions equal to 22 Ry and a 200 Ry cutoff for the charge density are sufficient to get well converged results. We estimate the numerical uncertainty to be $\sim$ 0.01 Å for relative atomic displacements and $\sim$ 0.02 $\mu_{B}$ for the magnetic moments. The relaxed internal atomic positions have been obtained by total-energy and atomic-force minimization using the Hellmann-Feynman theorem.[@forces]
The surface is modelled with a periodically repeated cell containing one Mn atom; a (110) slab geometry with a 4$\times$4 in-plane periodicity has been used. The simulation cells are made up of 5 atomic layers and a vacuum region equivalent to 8 atomic layers. The bottom layer has been passivated with hydrogen atoms in order to simulate semi-infinite bulk material.[@passivation] In the energy minimization only the three uppermost layers are allowed to relax, while the others are kept bulk-like.
Two different configurations have been considered for Mn on the surface, namely Int$_{As(Ga)}$ with As (Ga) atoms as nearest neighbor atoms. In each case, the distances between the Mn atom and its periodic image on the (110) plane are 15.7 Å along the \[1$\bar{1}$0\] and 22.2 Å along \[001\].
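The quoted image distances follow directly from the theoretical lattice constant $a=5.55$ Å reported in Sec. 3: the (110) surface periods are $a/\sqrt{2}$ along \[1$\bar{1}$0\] and $a$ along \[001\], so a 4$\times$4 cell separates Mn from its images by four periods in each direction. A quick numerical check (our sketch, not part of the paper):

```python
import math

a = 5.55  # theoretical LDA lattice constant of GaAs (angstrom), from the text

# (110) surface periodicities: a/sqrt(2) along [1-10], a along [001];
# a 4x4 in-plane cell gives 4 periods between Mn and its periodic image.
d_1m10 = 4 * a / math.sqrt(2)
d_001 = 4 * a

print(round(d_1m10, 1))  # prints 15.7
print(round(d_001, 1))   # prints 22.2
```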
XSTM images are obtained within Tersoff-Hamann model,[@Ters2] where the constant current STM images are simulated from electronic structure calculations by considering surfaces of constant integrated local density of states.
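In the Tersoff-Hamann picture, the tunneling current at a given tip position is proportional to the local density of states integrated between E$_{f}$ and E$_{f}+eV$, and a constant-current image is an isosurface of that quantity. The following toy sketch illustrates the idea with a hypothetical one-dimensional LDOS that decays exponentially into the vacuum; all numbers (energies, weights, decay constants, setpoint) are invented for illustration and are not the plane-wave output actually used in the paper:

```python
import math

def integrated_ldos(z, voltage, states):
    """Toy Tersoff-Hamann: integrated local density of states at height z
    (angstrom) for states between E_F (= 0) and eV, each decaying
    exponentially into the vacuum. `states` holds (energy_eV, weight,
    decay_inv_A) tuples -- all hypothetical numbers."""
    lo, hi = sorted((0.0, voltage))
    return sum(w * math.exp(-2.0 * kappa * z)
               for e, w, kappa in states if lo <= e <= hi)

def constant_current_height(voltage, states, setpoint, z_grid):
    """Largest height where the integrated LDOS still exceeds the current
    setpoint: the tip height recorded in a constant-current scan."""
    ok = [z for z in z_grid if integrated_ldos(z, voltage, states) >= setpoint]
    return max(ok) if ok else 0.0

# Empty-state imaging at +2.0 V over two toy surface sites: the second has
# more empty-state weight, so it images "brighter" (larger tip height).
z_grid = [0.005 * i for i in range(2001)]  # 0 .. 10 angstrom
site_a = [(1.0, 1.0, 1.0)]
site_b = [(1.0, 2.0, 1.0), (1.8, 1.0, 1.0)]
h_a = constant_current_height(2.0, site_a, 1e-4, z_grid)
h_b = constant_current_height(2.0, site_b, 1e-4, z_grid)
assert h_b > h_a  # the brighter site makes the tip retract farther
```

This is only meant to make the constant-current construction concrete; the actual simulations integrate the self-consistent LDOS of the slab.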
Structural, electronic and magnetic properties
==============================================
Structural properties
---------------------
The GaAs(110) surface is well known from both the experimental and the theoretical point of view.[@report] In fig.\[Fig1\], we show a ball-and-stick model of the clean surface, side and top views. The surface unit cell is shown in the top view. In this and the other figures, black spheres are cations (Ga atoms) and grey spheres are anions (As atoms).
At the top layer, the Ga surface atoms relax inward while the As atoms are shifted above the surface. Due to overbinding in the LDA approximation, our theoretical GaAs lattice constant (5.55 Å) is smaller than the experimental one (5.65 Å), but the relevant calculated structural parameters for the clean surface, such as $\Delta_{1,\bot}$ (the relative displacement of the anion and cation positions in the uppermost layer, normal to the surface) and $\alpha$ (the buckling angle), shown in fig.\[Fig1\], are 0.68 [Å]{} and 30.36$^{\circ}$ respectively, which compare well with the experimental values 0.65$\pm$0.03 Å and 27.4$^{\circ}$[@report; @exp1] and with other theoretical works.[@Miotto; @oth1; @oth2]
In the zinc-blende bulk crystal there are two inequivalent tetrahedral interstitial positions for Mn, which differ in their local environment: we denote them as Int$_{As}$ or Int$_{Ga}$ according to whether Mn is surrounded by As or Ga atoms, respectively. There is also a hexagonal interstitial position where Mn is surrounded by *both* As and Ga atoms. In fig.\[Fig2\] we show the different cases. The tetrahedral interstitial site in the ideal geometry has four nearest-neighbor (NN) atoms at a distance equal to the ideal host bond length $d_{1}$ and six next-nearest-neighbor (NNN) atoms at the distance $d_{2}=\frac{2}{\sqrt 3}d_{1}$, which are Ga(As) atoms for Int$_{As(Ga)}$, respectively. In the hexagonal interstitial position Mn is surrounded by 3 As and 3 Ga atoms at distance $\sqrt{\frac{11}{12}}d_{1}$. Throughout this work we have considered only the *tetrahedral* interstitial positions (the total energy corresponding to the *hexagonal* interstitial site is higher by more than 0.5 eV).[@Maka1; @Maka2; @PRLMN; @condmat]
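The tetrahedral NN and NNN shell distances quoted above can be verified directly from the zinc-blende geometry. A short check of ours (interstitial site placed at $(\frac{1}{2},\frac{1}{2},\frac{1}{2})a$, anion sublattice at fcc $+(\frac{1}{4},\frac{1}{4},\frac{1}{4})a$):

```python
import math
import itertools

a = 1.0                      # lattice constant (arbitrary units)
d1 = math.sqrt(3) / 4 * a    # ideal zinc-blende bond length

def distances(site, sublattice_shift):
    """Sorted distances from `site` to one fcc sublattice, including the
    nearest periodic images of the cubic cell."""
    fcc = [(0, 0, 0), (0, .5, .5), (.5, 0, .5), (.5, .5, 0)]
    ds = []
    for base in fcc:
        for img in itertools.product((-1, 0, 1), repeat=3):
            pos = [b + s + i for b, s, i in zip(base, sublattice_shift, img)]
            ds.append(math.dist(site, pos))
    return sorted(ds)

site = (0.5, 0.5, 0.5)                       # tetrahedral interstitial
nn = distances(site, (0.25, 0.25, 0.25))     # anion shell (Int_As case)
nnn = distances(site, (0.0, 0.0, 0.0))       # cation shell

assert abs(nn[0] - d1) < 1e-12                       # 4 NN at d1
assert abs(nnn[0] - 2 / math.sqrt(3) * d1) < 1e-12   # 6 NNN at (2/sqrt(3)) d1
```

With $a=5.55$ Å this gives $d_{1}\simeq 2.40$ Å and $d_{2}\simeq 2.78$ Å, the bulk reference values used in Tab. \[tab1\].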
In fig. \[Fig3\] we show ball-and-stick side (a) and top (b) views of the relaxed Int$_{As}$ and Int$_{Ga}$ configurations. Only the three topmost layers and the atoms closest to Mn are shown. Black spheres are cations (Ga atoms), grey spheres are anions (As atoms); Mn is explicitly indicated. In the relaxed structure, due to the symmetry breaking caused by the surface and the consequent buckling of the outermost surface layers, the NN and NNN bond lengths are no longer equal. Furthermore, some relaxed NN bond lengths turn out to be longer than NNN ones. In the following, we no longer distinguish between NN and NNN (they are all referred to simply as NN atoms) but simply refer to *surface* and *subsurface* atoms, as shown in the figure.
The two relaxed configurations differ in energy by $\sim$ 130 meV/Mn atom (Int$_{Ga}$ is favoured). This is in contrast to the bulk case, where they have been found to differ by only $\sim 5$ meV/Mn,[@bulkint] with Int$_{As}$ slightly favored. We have tested the reliability of our final relaxed interstitial configurations by considering different starting geometries (details in Ref. ), other than the simple ideal (110) truncated bulk. In all cases, the final relaxed configuration is the same.
The atoms with the most sizeable displacements from the ideal zinc-blende positions are the Mn impurities and their neighbors, on the surface or subsurface. In Tab. \[tab1\] we report the inward/outward relaxations with respect to the ideal (110) surface plane.
In Int$_{As}$, Mn relaxes outward by $\sim$ 0.06 Å and As$_{surf}$ (As$_{subsurf}$) move upwards (downwards). On the other hand, the Ga atoms (both on surface and subsurface) are shifted towards the bulk.
In Int$_{Ga}$, Mn relaxes inward by $\sim$ 0.32 Å; the Ga$_{surf}$ and Ga$_{subsurf}$ atoms are displaced downwards, while the As$_{surf}$ (As$_{subsurf}$) atom moves upwards (downwards). In summary, in both Int$_{Ga}$ and Int$_{As}$, cations (surface and subsurface) close to Mn move downwards, while anions move upwards or downwards according to whether they are on the surface or subsurface. The net result is a local reduction of the surface buckling with respect to the clean unperturbed surface, of more than 30% and 40% for Int$_{As}$ and Int$_{Ga}$ respectively, with a net local buckling of about 0.46 Å for Int$_{As}$ and 0.40 Å for Int$_{Ga}$. As far as the interatomic distances between Mn and the nearest atoms are concerned (Tab. \[tab1\]), they are in general longer than the ideal bulk values by $\sim$ 2-3%; the distances between Mn and more distant atoms are shorter than in the bulk case, except for Ga$_{subsurf}$ in Int$_{As}$, as can be seen in Tab. \[tab1\].
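The quoted buckling reductions follow from the clean-surface value $\Delta_{1,\bot}=0.68$ Å of Sec. 3.1; a quick arithmetic check (ours):

```python
clean = 0.68  # clean-surface buckling (angstrom), Sec. 3.1
for name, local in (("Int_As", 0.46), ("Int_Ga", 0.40)):
    reduction = (clean - local) / clean * 100
    print(f"{name}: {reduction:.0f}% reduction")
# prints "Int_As: 32% reduction" and "Int_Ga: 41% reduction",
# consistent with the "more than 30% and 40%" stated in the text
```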
Electronic properties
---------------------
In fig. \[Fig4\], we show the Density of States projected onto the surface layer (PDOS); the continuous lines refer to Int$_{As}$ or Int$_{Ga}$, while the dashed lines refer to the clean GaAs(110) surface. The DOS for Int$_{As}$ (Int$_{Ga}$) is shown on the left (right) side; the Fermi level (E$_{f}$) is set to zero eV. The Mn-projected $d$ DOS is also shown (grey area). Positive and negative DOS correspond to the spin-up and spin-down components. First of all, the DOS curves for Int$_{As}$ and Int$_{Ga}$ are very close to those of the clean surface, except in the energy region around E$_{f}$. An energy gap around E$_{f}$ is present in both the majority and minority DOS. In Int$_{As}$ the majority and minority spin gaps overlap and almost coincide, keeping the surface semiconducting with a gap of about $\sim$ 0.2 eV. In Int$_{Ga}$, instead, the majority and minority spin gaps are quite different: $\sim$ 0.3 eV for the majority component and $\sim$ 0.1 eV for the minority component. The perturbation is weak on the valence band and stronger on the conduction band. The main difference between the Int$_{As}$ and Int$_{Ga}$ DOS curves concerns a peak in the minority component of Int$_{Ga}$ around the Fermi energy (in Int$_{As}$ it is shifted 0.3-0.4 eV below the Fermi energy), which reduces the gap in Int$_{Ga}$.
In both systems, the Fermi level lies in the lower tail of the conduction band thus indicating that interstitial Mn impurity behaves as a donor, like in the bulk case.[@Maka2]
At variance with the bulk case, where the calculated DOS for the two tetrahedral interstitial positions are almost the same,[@Maka2] indicating a weak influence of the nearest neighbors on the interstitial Mn, the difference between the surface Int$_{Ga}$ and Int$_{As}$ cases is more sizeable, indicating a stronger effect of the local environment.
The PDOS almost recovers the bulk features already in the second layer (not shown in fig. \[Fig4\]). Therefore, the introduction of Mn results in a perturbation of the electronic properties mostly localized on the first layer and strongly dependent on the local environment.
As far as the $d$ states are concerned, we observe that their contribution to the occupied majority spin component is by far larger than their contribution to the minority spin. However, their overall weight in the GaMnAs system is negligible and the valence band is in practice almost non-spin-polarized (as observed above). In both cases, the Mn spin-up $d$ states are occupied and quite similar in shape, while the spin-down $d$ states are almost unoccupied and have different shapes, especially around the Fermi level.
In conclusion, the two Mn local environments give rise to quite different surface electronic structures, with the differences mainly localized around the Fermi level.
Magnetic properties {#magnetism}
-------------------
In the following, we analyze the magnetic properties. The total and absolute magnetizations in the supercell are different in the two configurations. They are equal to 4.23 and 4.84 $\mu_{B}$ in Int$_{As}$ and to 3.41 and 4.71 $\mu_{B}$ in Int$_{Ga}$. The difference between total and absolute magnetization corresponds to the presence of regions of negative spin density in the unit cell; this difference is larger in Int$_{Ga}$ than in Int$_{As}$, suggesting higher (absolute) values and/or more extended regions of negative spin density in the former than in the latter. It also explains the smaller total magnetization of Int$_{Ga}$ with respect to Int$_{As}$. This is clear evidence that the induced magnetization is strongly influenced by the local Mn environment.
Interesting information can be gained by looking at the individual atomic magnetic moments, obtained as the difference between the calculated majority and minority Löwdin charges.[@Lowedin] The results have been reported elsewhere.[@stroppaperessi] The highest value of Mn spin-polarization is found in Int$_{As}$ (3.96 $\mu_{B}$), while it is slightly lower in Int$_{Ga}$ (3.67 $\mu_{B}$). The Mn magnetic moment in Int$_{As}$ is almost an integer, in agreement with the existence of a clear gap in the Mn-projected DOS and the unoccupied states just cutting the Fermi energy. It is worth noting that our calculated Mn magnetic moments are larger than those corresponding to interstitial Mn in the bulk, and rather close to the value reported for ferromagnetically coupled substitutional Mn impurities on the Ga sublattice in bulk GaAs. In fact, ab-initio calculations[@Maka1; @Sanyal; @Wu] report a Mn magnetic moment for *bulk* Int$_{As}$ equal to 2.70 $\mu_{B}$. A recent experimental work[@Gambardella] shows that Mn impurities on GaAs(110) surfaces have magnetic moments significantly larger than in the bulk case. The experimental and theoretical results would suggest, in general, an enhancement of the Mn magnetic moments due to surface effects. Our calculations, compared with previous bulk DFT studies,[@Maka1; @Sanyal] support this indication.
For Int$_{As}$, the As$_{surf}$ and As$_{subsurf}$ atoms have a ferromagnetic coupling to Mn, with a small magnetic moment equal to 0.05 $\mu_{B}$. The induced polarization on more distant As atoms is totally negligible. The Ga$_{surf}$ atoms couple antiferromagnetically with Mn, with an induced polarization of -0.14 $\mu_{B}$. Other atomic moments are negligible.
As far as the Int$_{Ga}$ configuration is concerned, a negative magnetic moment is induced on Ga$_{surf}$ (-0.17 $\mu_{B}$) while the Ga$_{subsurf}$ atoms have a negligible polarization. The As$_{surf}$ shows only a negligible polarization, while it is positive and equal to 0.05 $\mu_{B}$ for As$_{subsurf}$.
Our results for the magnetic properties can be summarized as follows: in both cases, the cations couple antiferromagnetically to Mn spin moment while anions couple ferromagnetically. Furthermore, only surface cations are spin polarized, while both surface and subsurface anions do polarize.
STM Images
==========
In fig.\[Fig5\] we show the schematic front and side views of the relaxed underlying lattice structure and the XSTM images, for empty states at a reference positive bias voltage ($+$2.0 V). In Int$_{Ga}$, the two NN surface Ga atoms of Mn appear very bright, with features extending towards the Mn, and the atoms in the neighbourhood also look brighter than normal. For Int$_{As}$, a very bright elongated spot in the center of the surface unit cell delimited by As is visible. We would like to point out that the simulated XSTM images have clearly different shapes for the two geometric configurations, so the two different local coordinations should be distinguishable by STM analysis. Further, the simulated STM images for the Int$_{As}$ case compare well with experimental XSTM images of the $\delta$-doped samples.[@Modesti]
Conclusion
==========
In summary, we have used first-principles simulations to characterize Mn interstitial impurities on the GaAs(110) surface. Strong local distortions of the (110) GaAs surface are introduced by Mn, especially when it is surrounded by Ga atoms. In both cases, Mn polarizes the NN and NNN atoms, giving rise to a ferromagnetic Mn–As and an antiferromagnetic Mn–Ga configuration. The simulated STM images show very different shapes of the imaged Mn atom, suggesting that the two configurations can be clearly differentiated by STM analysis. Finally, recent experimental STM images are qualitatively similar to our simulated ones for the Int$_{As}$ configuration, suggesting the possible identification of Mn interstitials surrounded by As atoms in the experimental samples.[@Modesti]
Acknowledgments
===============
The author would like to thank S. Modesti, D. Furlanetto and X. Duan for fruitful discussions, and A. Debernardi for providing his pseudopotential for manganese. Computational resources have been obtained partly within the “Iniziativa Trasversale di Calcolo Parallelo” of the Italian [*Istituto Nazionale per la Fisica della Materia*]{} (INFM) and partly within the agreement between the University of Trieste and the Consorzio Interuniversitario CINECA (Italy). All the ball-and-stick figures presented here have been generated using the XCrySDen package.[@Xcrysden]
[99]{} SHARMA P.: *Science*, [**307**]{}, (2005) 531. COVINGTON M.: *Science*, [**307**]{}, (2005) 215. OHNO Y., YOUNG D.K., BESCHOTEN B., MATSUKURA F., OHNO H. AND AWSCHALOM D.D.: *Nature*, [**402**]{}, (1999) 790. OHNO H., MATSUKURA F. AND OHNO Y.: *Mater. Sci. Eng. B*, [**84**]{}, (2001) 70. YU K.M., WALUKIEWICZ W., WOJTOWICZ T., KURYLISZYN I., LIU X., SASAKI Y. AND FURDYNA J.K.: *Phys. Rev. B*, [**65**]{}, (2002) 201303. BERGQVIST L., KORZHAVYI P. A., SANYAL B., MIRBT S., ABRIKOSOV I.A., NORDSTRÖM L., SMIRNOVA E.A., MOHN P., SVEDLINDH P., AND ERIKSSON O.: *Phys. Rev. B*, [**67**]{}, (2003) 205201. EDMONDS K.W., WANG K.Y., CAMPION R.P., NEUMANN A.C., FARLEY N.R.S., GALLAGHER B.L., FOXON C.T.: *Appl. Phys. Lett.*, [**81**]{}, (2002) 4991. EDMONDS K.W., BOGUSLAWSKI P., WANG K.Y., CAMPION R.P., NOVIKOV S.N., FARLEY N.R.S., GALLAGHER B.L., FOXON C.T., SAWICKI M., DIETL T., NARDELLI M.B., AND BERNHOLC J.: *Phys. Rev. Lett.*, [**92**]{}, (2004) 37201. NAZMUL A.M., SUGAHARA S. AND TANAKA M.: *Phys. Rev. B*, [**67**]{}, (2003) R241308. NAZMUL A.M., AMEMIYA T., SHUTO Y.,SUGAHARA S. AND TANAKA M., *Phys. Rev. Lett.*, [**95**]{}, (2005) 017201. MODESTI S. AND FURLANETTO D., personal communication. MIKKELSEN A., SANYAL B., SADOWSKI J., OUATTARA L., KANSKI J., MIRBT S., ERIKSSON O. AND LUNDGREN E.: *Phys. Rev. B*, [**70**]{}, (2004) 85411. SULLIVAN J.M., BOISHIN G.I., WHITMAN L.J., HANBICKI A.T., JONKER B.T. AND ERWIN S.C.: *Phys. Rev. B*, [**68**]{}, (2003) 235324. YAKUNIN A.M., SILOV A.Y., KOENRAAD P.M., WOLTER J.H., VAN ROY W., DE BOECK J., TANG J.M., FLATTÉ M.E.: *Phys. Rev. Lett.*, [**92**]{}, (2004) 216806.
YAKUNIN A.M., SILOV A.Y., KOENRAAD P.M., TANG J.-M, FLATTÉ M.E., VAN ROY W., DE BOECK J., WOLTER J.H.: cond-mat/0505536 Preprint, 2005. BARONI S., DAL CORSO A., DE GIRONCOLI S. AND GIANNOZZI P., CAVAZZONI C.: http://www.pwscf.org. CEPERLEY D.M. AND ALDER B.J.: *Phys. Rev. Lett.*, [**45**]{}, (1980) 566. PERDEW J. AND ZUNGER A.: *Phys. Rev. B*, [**23**]{} (1981) 5048. VANDERBILT D.H.: *Phys. Rev. B*, [**41**]{}, (1990) 7892. For the optimization of atomic positions we require Hellmann-Feynman forces smaller than 0.02 eVÅ$^{-1}$. OW KING N. AND WANG X.W.: *Phys. Rev. B* [**54**]{}, (1996) 17661. TERSOFF J. AND HAMANN D.: *Phys. Rev. B*, [**31**]{} (1985) 805. EBERT P.: *Surf. Science Reports* [**33**]{} (1999) 121. FORD W.K., GUO T., LESSOR D.L., AND DUKE C.B.: *Phys. Rev. B* [**42**]{}, (1990) 8952. MIOTTO R., SRIVASTAVA G.P. AND FERRAZ A.C.: *Phys. Rev. B* [**59**]{}, (1999) 3008. FERRAZ A.C. AND SRIVASTAVA G.P.: *Surf. Sci.* [**182**]{}, (1987) 161. UMERSKI A. AND SRIVASTAVA G.P.: *Phys. Rev. B* [**51**]{}, (1995) 2334. MÁCA F. AND MAŠEK J.: *Phys. Rev. B*, [**65**]{}, (2002) 235209. MAŠEK J. AND MÁCA F.: *Phys. Rev. B*, [**69**]{}, (2004) 165212. CAO J.X., GONG X.G., WU R.Q.: *Phys. Rev. B*, [**72**]{}, (2005) 153410. MAŠEK J., KUDRNOVSKÝ J. AND MÁCA F.: *Phys. Rev. B*, [**67**]{} (2003) 153203. STROPPA A. AND PERESSI M.: *Materials Science and Engineering B*, in press. LOEWDIN P.O.: *J. Chem. Phys.*, [**18**]{}, (1950) 365. STROPPA A. and PERESSI M., in preparation. SANYAL B. AND MIRBT S.: *J. Magn. Magn. Mat.*,[**290-291**]{}, (2005) 1408. WU R.: *Phys. Rev. Lett.*, [**94**]{}, (2005) 207201. GAMBARDELLA P., BRUNE H., DHESI S.S., BENCOK O., KRISHNAKUMAR S.R., GARDONIO S., VERONESE M., GRAZIOLI C. AND CARBONE C.: *Phys. Rev. B*, [**72**]{}, (2005) 45337.
KOKALJ A.: *Comp. Mater. Sci.*, [**28**]{}, (2003) 155. Code available from http://www.xcrysden.org/.
[||l|c|c|c|c||]
Table \[tab1\]: Vertical displacements with respect to the ideal (110) surface plane and Mn–neighbor distances, in Å (ideal bulk values in parentheses).\
Int$_{As}$ & As$_{surf}$ & As$_{subsurf}$ & Ga$_{surf}$ & Ga$_{subsurf}$\
$\Delta z$ & +0.15 & -0.19 & -0.06 & -0.06\
$d_{\mathrm{Mn}-X}$ & 2.52(2.40) & 2.44(2.40) & 2.49(2.78) & 2.90(2.78)\
Int$_{Ga}$ & Ga$_{surf}$ & Ga$_{subsurf}$ & As$_{surf}$ & As$_{subsurf}$\
$\Delta z$ & -0.22 & -0.24 & +0.06 & -0.10\
$d_{\mathrm{Mn}-X}$ & 2.48(2.40) & 2.56(2.40) & 2.68(2.78) & 2.63(2.78)\
![Schematic side and top view of the clean GaAs(110) surface. Only the three topmost layers (1$^{st}$ layer is the surface layer) are shown in the Figure. In this and other figures, black spheres are cations (Ga atoms), grey spheres are anions (As atoms).[]{data-label="Fig1"}](./clean.ps)
![Conventional bulk unit cells representing Mn atom in tetrahedral-interstitial configurations, surrounded by As atoms (grey spheres) as nearest neighbors (top part, to the left) and by Ga atoms (black spheres) as nearest neighbors (top part, to the right). Bottom part: hexagonal interstitial position with Mn surrounded by 3 As and 3 Ga as nearest neighbors.[]{data-label="Fig2"}](./interstitials.ps)
![Schematic side and top view of the relaxed Int$_{As}$ and Int$_{Ga}$ configurations. Mn is explicitly shown.[]{data-label="Fig3"}](./relaxations.ps)
![Density of States (DOS) projected onto the surface layer (continuous line) for Int$_{As}$ (left) and Int$_{Ga}$ (right). The dashed line corresponds to the DOS of the clean surface. The Mn-projected DOS is also shown (grey filled area). The Fermi level is set to zero eV.[]{data-label="Fig4"}](./cimento.ps)
![Simulated STM images of isolated Mn interstitial in GaAs(110) surface, with As NNs (to the left) and Ga NNs (to the right). Top panels: ball-and-stick model of the relaxed surface, top and side view (Ga: black spheres, As: grey spheres). Bottom panel: simulated STM images for positive bias voltage. The intersection of the dotted lines locates the position of Mn (projected on the (110) plane).[]{data-label="Fig5"}](./stm.ps)
|
---
abstract: 'Self-reinforcing feedback loops in personalization systems are typically caused by users choosing from a limited set of alternatives presented systematically based on previous choices. We propose a Bayesian choice model built on Luce axioms that explicitly accounts for users’ limited exposure to alternatives. Our model is fair, in that it does not impose a negative bias towards unpresented alternatives, and practical, in that preference estimates are accurately inferred upon observing a small number of interactions. It also allows efficient sampling, leading to a straightforward online presentation mechanism based on Thompson sampling. Our approach achieves low regret in learning to present upon exploration of only a small fraction of possible presentations. The proposed structure can be reused as a building block in interactive systems, e.g., recommender systems, free of feedback loops.'
author:
- |
Gökhan Çapan\
`gokhan.capan@boun.edu.tr` İlker Gündoğdu\
`ilker.gundogdu@boun.edu.tr` Ali Caner Türkmen\
`caner.turkmen@boun.edu.tr` Çağri Sofuoğlu\
`cagri.sofuoglu@boun.edu.tr` Ali Taylan Cemgil\
`taylan.cemgil@boun.edu.tr`\
Department of Computer Engineering, Boğaziçi University, Istanbul, Turkey
bibliography:
- 'bibl.bib'
title: |
A Bayesian Choice Model\
for Eliminating Feedback Loops
---
|
[Some remarks regarding finite bounded commutative BCK-algebras]{}
$$$$
Cristina Flaut, Šárka Hošková-Mayerová and Radu Vasile$$$$
**Abstract.** [ In this chapter, starting from results obtained in the papers \[FV; 19\] and \[FHSV; 19\], we provide some examples of finite bounded commutative BCK-algebras, using the Wajsberg algebra associated to a bounded commutative BCK-algebra. This method is an alternative to Iseki’s construction, since under Iseki’s extension some properties of the obtained algebras are lost.]{}
$$$$
**Keywords:** Bounded commutative BCK-algebras, MV-algebras, Wajsberg algebras.**AMS Classification:** 06F35, 06F99.[ ]{} $$$$
**1.** **Introduction**$$$$
BCK-algebras were first introduced in mathematics in 1966 by Y. Imai and K. Iseki, in the paper \[II; 66\]. These algebras were presented as a generalization of the concept of set-theoretic difference and of propositional calculi. The class of BCK-algebras is a proper subclass of the class of BCI-algebras.
**Definition 1.1.** An algebra $(X,\ast ,\theta )$ of type $(2,0)$ is called a *BCI-algebra* if the following conditions are fulfilled:
$1)~((x\ast y)\ast (x\ast z))\ast (z\ast y)=\theta ,$ for all $x,y,z\in X;$
$2)~(x\ast (x\ast y))\ast y=\theta ,$ for all $x,y\in X;$
$3)~x\ast x=\theta ,$ for all $x\in X$;
$4)$ For all $x,y,z\in X$ such that $x\ast y=\theta ,y\ast x=\theta ,$ it results $x=y$.
If a BCI-algebra $X$ satisfies the following identity:
$5)$ $\theta \ast x=\theta ,~$for all $x\in X,$ then $X$ is called a *BCK-algebra*.
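On a finite carrier set, axioms 1)–5) can be checked mechanically by brute force over the multiplication table. The following sketch (the two-element chain $\{\theta, a\}$ with $\theta \leq a$ is our own toy example, not taken from the chapter) verifies them:

```python
from itertools import product

def is_bck(elems, star, theta):
    """Brute-force check of BCI axioms 1-4 plus the BCK axiom 5
    on a finite multiplication table star[x][y] = x * y."""
    for x, y, z in product(elems, repeat=3):
        if star[star[star[x][y]][star[x][z]]][star[z][y]] != theta:  # axiom 1
            return False
        if star[star[x][star[x][y]]][y] != theta:                    # axiom 2
            return False
    for x, y in product(elems, repeat=2):
        if star[x][x] != theta or star[theta][x] != theta:           # axioms 3, 5
            return False
        if x != y and star[x][y] == theta and star[y][x] == theta:   # axiom 4
            return False
    return True

# Two-element chain {theta, a}: x * y = theta if x <= y, else x.
theta, a = 0, 1
star = {theta: {theta: theta, a: theta}, a: {theta: a, a: theta}}
assert is_bck([theta, a], star, theta)
```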
**Definition 1.2.** i) A BCK-algebra $(X,\ast ,\theta )$ is called *commutative* if $$x\ast (x\ast y)=y\ast (y\ast x),$$for all $x,y\in X$ and *implicative* if $$x\ast (y\ast x)=x,$$for all $x,y\in X.$
ii\) (\[Du; 99\]) A BCK-algebra $(X,\ast ,\theta )$ is called *positive implicative* if and only if$$\left( x\ast y\right) \ast z=\left( x\ast z\right) \ast (y\ast z),$$for all $x,y,z\in X.$
The *partial order* relation on a BCK-algebra is defined such that $x\leq y$ if and only if $x\ast y=\theta .$
If in the BCK-algebra $(X,\ast ,\theta )$ there is an element $1$ such that $x\leq 1,$ for all $x\in X,$ then the algebra $X$ is called a *bounded BCK-algebra*. In a bounded BCK-algebra, we denote $1\ast x=\overline{x}.$
If in the bounded BCK-algebra $X$, an element $x\in X$ satisfies the relation $$\overline{\overline{x}}=x,$$then the element $x$ is called an *involution*.
If $(X,\ast ,\theta )$ and $(Y,\circ ,\theta )$ are two BCK-algebras, a map $f:X\rightarrow Y$ with the property $f\left( x\ast y\right) =f\left(
x\right) \circ f\left( y\right) ,$ for all $x,y\in X,$ is called a *BCK-algebras morphism*$.$ If $f$ is a bijective map, then $f$ is an *isomorphism* of BCK-algebras.
**Definition 1.3.** 1) Let $(X,\ast ,\theta )$ be a BCK-algebra and $Y$ be a nonempty subset of $X$. Then $Y$ is called a *subalgebra* of the algebra $(X,\ast ,\theta )$ if and only if for each $x,y\in Y$ we have $x\ast y\in Y$, i.e. $Y$ is closed under the binary multiplication “$\ast $”.
It is well known that each BCK-algebra of order $n+1$ contains a subalgebra of order $n.$
2\) Let $(X,\ast ,\theta )$ be a BCK-algebra and $I$ be a nonempty subset of $X$. Then $I$ is called an *ideal* of the algebra $X$ if and only if, for each $x,y\in X$:
i\) $\theta \in I;$
ii\) if $x\ast y\in I$ and $y\in I,$ then $x\in I$.
**Proposition 1.4.** (\[Me-Ju; 94\]) *Let* $(X,\ast ,\theta )$ *be a BCK algebra and* $Y$ *be a subalgebra of the algebra* $(X,\ast ,\theta )$. *The following statements are true:*
*i)* $\theta \in Y;$
*ii)* $(Y,\ast ,\theta )$ *is also a BCK-algebra.*$\Box
\medskip $
Let $X$ be a BCK-algebra such that $1\notin X$. On the set $Y=X\cup \{1\},$ we define the multiplication $"\circ "$ as follows:$$x\circ y=\left\{
\begin{array}{c}
x\ast y,~if~x,y\in X; \\
\theta ,~if~x\in X,y=1; \\
1,~if~x=1~and~y\in X; \\
\theta ,~if~x=y=1.\end{array}\right.$$
The obtained algebra $\left( Y,\circ ,\theta \right) $ is a bounded BCK-algebra, obtained by the so-called Iseki’s extension. The algebra $\left( Y,\circ ,\theta \right) $ is called *the algebra obtained from the algebra* $\left( X,\ast ,\theta \right) $ *by Iseki’s extension* (\[Me-Ju; 94\], Theorem 3.6).
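The four cases of the construction above translate directly into code. A sketch of ours (the starting two-element chain is an example we chose, not one from the chapter), which also confirms by brute force that the extension satisfies BCK axiom 1 and that $1$ is a top element:

```python
from itertools import product

def iseki_extension(elems, star, theta, one):
    """Extend a BCK-algebra (X, *, theta) by a new element `one`,
    following the four cases of Iseki's construction."""
    assert one not in elems
    Y = list(elems) + [one]
    circ = {x: {} for x in Y}
    for x, y in product(Y, repeat=2):
        if x in elems and y in elems:
            circ[x][y] = star[x][y]     # x, y in X
        elif x in elems:
            circ[x][y] = theta          # x in X, y = 1
        elif y in elems:
            circ[x][y] = one            # x = 1, y in X
        else:
            circ[x][y] = theta          # x = y = 1
    return Y, circ

theta, a, one = 0, 1, 2
star = {theta: {theta: theta, a: theta}, a: {theta: a, a: theta}}
Y, circ = iseki_extension([theta, a], star, theta, one)

# 1 is the top element: x o 1 = theta, i.e. x <= 1 for every x
assert all(circ[x][one] == theta for x in Y)
# brute-force check of BCK axiom 1 on the extension
assert all(
    circ[circ[circ[x][y]][circ[x][z]]][circ[z][y]] == theta
    for x, y, z in product(Y, repeat=3)
)
```

Note that, in line with Remark 1.5 ii), commutativity is lost here: $1\circ (1\circ a)=\theta$ while $a\circ (a\circ 1)=a$.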
**Remark 1.5.** (\[Me-Ju; 94\])
i\) The Iseki’s extension of a positive implicative BCK-algebra is still a positive implicative BCK-algebra.
ii\) The Iseki’s extension of a commutative BCK-algebra, in general, is not a commutative BCK-algebra.
iii\) Let $X$ be a BCK-algebra and $Y$ its Iseki’s extension. Therefore $X$ is an ideal in $Y$.
iv\) If $I$ is an ideal of the BCK-algebra $X$, $x\in I$ and $y\leq x$, then $y\in I$.
In the following, we will give some examples of finite bounded commutative BCK-algebras. In the finite case, it is very useful to have many examples of such algebras, but such examples are, in general, not so easy to find. One method for this purpose could be Iseki’s extension. However, from the above we remark that Iseki’s extension cannot always be used to obtain examples of finite commutative bounded BCK-algebras with given initial properties, since the commutativity, or other properties, can be lost. For this reason, we use another technique to provide examples of such algebras: we use the connections between finite commutative bounded BCK-algebras and Wajsberg algebras, and the algorithm and examples given in the papers \[FHSV; 19\] and \[FV; 19\]. $$$$
**2. Connections between finite bounded commutative BCK-algebras and Wajsberg algebras**
$$$$
**Definition 2.1.** (\[CHA; 58\]) An abelian monoid $\left( X,\theta ,\oplus \right) $ is called an *MV-algebra* if and only if it is equipped with a unary operation $"^{\prime }"$ such that:
i\) $(x^{\prime })^{\prime }=x;$
ii\) $x\oplus \theta ^{\prime }=\theta ^{\prime };$
iii\) $\left( x^{\prime }\oplus y\right) ^{\prime }\oplus y=\left( y^{\prime }\oplus x\right) ^{\prime }\oplus x$, for all $x,y\in X$ (\[Mu; 07\]). We denote it by $\left( X,\oplus ,^{\prime },\theta \right) .\medskip $
We remark that in an MV-algebra the constant element $\theta ^{\prime }$ is denoted by $1$, that is,
$$1=\theta ^{\prime }.$$
With the above definitions, the following multiplications are also defined:
$$x\odot y=\left( x^{\prime }\oplus y^{\prime }\right) ^{\prime },$$
$$x\ominus y=x\odot y^{\prime }=\left( x^{\prime }\oplus y\right) ^{\prime }.$$
(see (\[Mu; 07\]))
From \[COM; 00\], Theorem 1.7.1, for a bounded commutative BCK-algebra $\left(
X,\ast ,\theta ,1\right) $, if we define $$x^{\prime }=1\ast x,$$$$x\oplus y=1\ast \left( \left( 1\ast x\right) \ast y\right) =\left( x^{\prime
}\ast y\right) ^{\prime },x,y\in X,$$we obtain that the algebra $\left( X,\oplus ,^{\prime },\theta \right) $ is an *MV*-algebra, with $$x\ominus y=x\ast y.$$
The converse is also true, that means if $\left( X,\oplus ,\theta ,^{\prime
}\right) $ is an MV-algebra, then $\left( X,\ominus ,\theta ,1\right) $ is a bounded commutative BCK-algebra.
**Definition 2.2.** (\[COM; 00\], Definition 4.2.1) An algebra $\left(
W,\circ ,\overline{\phantom{x}}~,1\right) $ of type $\left( 2,1,0\right) ~$is called a *Wajsberg algebra (*or* W-algebra)* if and only if the following conditions are fulfilled:
i\) $1\circ x=x;$
ii\) $\left( x\circ y\right) \circ \left[ \left( y\circ z\right) \circ \left(
x\circ z\right) \right] =1;$
iii\) $\left( x\circ y\right) \circ y=\left( y\circ x\right) \circ x;$
iv\) $\left( \overline{x}\circ \overline{y}\right) \circ \left( y\circ
x\right) =1$, for every $x,y,z\in W$.
**Remark 2.3.** (\[COM; 00\], Lemma 4.2.2 and Theorem 4.2.5)
i\) For the Wajsberg algebra $\left( W,\circ ,\overline{\phantom{x}},1\right)
$, if we define the following multiplications $$x\odot y=\overline{\left( x\circ \overline{y}\right) }$$and $$x\oplus y=\overline{x}\circ y,$$for all $x,y\in W$, we obtain that $\left( W,\oplus ,\odot ,\overline{\phantom{x}},\theta ,1\right) $ is an MV-algebra.
ii\) Conversely, if $\left( X,\oplus ,\odot ,^{\prime },\theta ,1\right) $ is an MV-algebra, defining on $X$ the operation$$x\circ y=x^{\prime }\oplus y,$$we obtain that $\left( X,\circ ,^{\prime },1\right) $ is a Wajsberg algebra.
**Remark 2.4**. From the above, if $\left( W,\circ ,\overline{\phantom{x}},1\right) $ is a Wajsberg algebra, then $\left( W,\oplus ,\odot ,\overline{\phantom{x}},0,1\right) $ is an MV-algebra, with $$x\ominus y=\overline{\left( \overline{x}\oplus y\right) }=\overline{\left(
x\circ y\right) }. \tag{2.1.}$$Defining $$x\ast y=\overline{\left( x\circ y\right) }, \tag{2.2.}$$we have that $(W,\ast ,\theta ,1)$ is a bounded commutative BCK-algebra.
Using the above remark, starting from some known finite examples of Wajsberg algebras given in the papers \[FHSV; 19\] and \[FV; 19\], we can obtain examples of finite commutative bounded BCK-algebras, using the following algorithm.
**The Algorithm**
1\) Let $n$ be a natural number, $n\neq 0$, and let $$n=r_{1}r_{2}...r_{t},r_{i}\in \mathbb{N},1<r_{i}<n,i\in \{1,2,...,t\},$$be a decomposition of the number $n$ into factors. Decompositions consisting of the same factors, taken in a different order, are counted only once. The number of all such decompositions is denoted by $\pi _{n}$.
2\) There are only $\pi _{n}$ nonisomorphic, as ordered sets, Wajsberg algebras with $n$ elements. We obtain these algebras as a finite product of totally ordered Wajsberg algebras (see \[BV; 10\] and \[FV; 19\], Theorem 4.8).
3\) Using Remark 2.4 from above, a commutative bounded BCK-algebra can be associated to each Wajsberg algebra.
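Step 1 amounts to counting multiplicative partitions of $n$. The sketch below, our own illustration, counts factorizations into factors $\geq 2$ taken in non-decreasing order; whether the trivial factorization $n=n$, which corresponds to the totally ordered chain, is included in $\pi _{n}$ is a matter of convention, and the function below includes it.

```python
# A sketch counting the decompositions of n into factors, up to the
# order of the factors (multiplicative partitions). The trivial
# decomposition n = n (the totally ordered chain) is counted here.

def mult_partitions(n, min_factor=2):
    """Number of ways to write n as a product of integers >= min_factor,
    with factors taken in non-decreasing order."""
    if n == 1:
        return 1
    count = 0
    d = min_factor
    while d * d <= n:
        if n % d == 0:
            count += mult_partitions(n // d, d)
        d += 1
    return count + 1      # '+ 1' counts the factorization (n) itself

print(mult_partitions(4))   # 2: 4 and 2*2
print(mult_partitions(6))   # 2: 6 and 2*3
print(mult_partitions(8))   # 3: 8, 2*4 and 2*2*2
```

The values for $n=4$ and $n=6$ match the number of distinct order structures appearing in the four- and six-element examples of Section 3.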
$$$$
**3. Examples of finite commutative bounded BCK-algebras**
$$$$
In the following, we use some examples of Wajsberg algebras given in the paper \[FHSV; 19\]. To these algebras, we will associate the corresponding commutative bounded BCK-algebras and we give their subalgebras and ideals.
**Example 3.1.** Let $W=\{O\leq A\leq B\leq E\}$ be a totally ordered set. On $W$ we define the multiplication $\circ _{1}~$as in the below table, such that $\left( W,\circ _{1},E\right) $ is a Wajsberg algebra. We have $\overline{A}=B$ and $\overline{B}=A$.
$$\begin{tabular}{l|llll}
$\circ _{1}$ & $O$ & $A$ & $B$ & $E$ \\ \hline
$O$ & $E$ & $E$ & $E$ & $E$ \\
$A$ & $B$ & $E$ & $E$ & $E$ \\
$B$ & $A$ & $B$ & $E$ & $E$ \\
$E$ & $O$ & $A$ & $B$ & $E$\end{tabular}\text{.}$$
(see \[FHSV; 19\], Example 4.1.1)
Therefore, the associated commutative bounded BCK-algebra $\left( W,\ast
_{1},O\right) ~$has the multiplication given in the below table:$$\begin{tabular}{l|llll}
$\ast _{1}$ & $O$ & $A$ & $B$ & $E$ \\ \hline
$O$ & $O$ & $O$ & $O$ & $O$ \\
$A$ & $A$ & $O$ & $O$ & $O$ \\
$B$ & $B$ & $A$ & $O$ & $O$ \\
$E$ & $E$ & $B$ & $A$ & $O$\end{tabular}\text{.}$$
The proper subalgebras of this algebra are: $\{O,A\},\{O,B\},\{O,E\},\{O,A,B\}$. There are no proper ideals in the algebra $\left( W,\ast _{1},O\right) $.
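The passage from $\circ _{1}$ to $\ast _{1}$ can be checked mechanically. The sketch below applies the formula $x\ast y=\overline{\left( x\circ y\right) }$ of Remark 2.4 to the table of $\circ _{1}$, using the complement stated in the example ($\overline{O}=E$, $\overline{A}=B$, $\overline{B}=A$, $\overline{E}=O$), and reproduces the table of $\ast _{1}$.

```python
# A sketch deriving the BCK table of Example 3.1 from the Wajsberg table
# via x * y = complement(x ∘ y) (Remark 2.4). The complement is the one
# stated in the example: O̅ = E, A̅ = B, B̅ = A, E̅ = O.

elems = ["O", "A", "B", "E"]
comp = {"O": "E", "A": "B", "B": "A", "E": "O"}

circ1 = {   # rows of the table for ∘_1
    "O": ["E", "E", "E", "E"],
    "A": ["B", "E", "E", "E"],
    "B": ["A", "B", "E", "E"],
    "E": ["O", "A", "B", "E"],
}

# x * y = complement(x ∘ y), row by row
star1 = {x: [comp[circ1[x][j]] for j in range(4)] for x in elems}
for x in elems:
    print(x, star1[x])
```

The output coincides row by row with the table of $\ast _{1}$ printed above.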
**Example 3.2.** Let $W=\{O\leq A\leq B\leq E\}$ be a totally ordered set. On $W$ we define the multiplication $\circ _{2}$ as in the below table, such that $\left( W,\circ _{2},E\right) $ is a Wajsberg algebra. $$\begin{tabular}{l|llll}
$\circ _{2}$ & $O$ & $A$ & $B$ & $E$ \\ \hline
$O$ & $E$ & $E$ & $E$ & $E$ \\
$A$ & $B$ & $E$ & $B$ & $E$ \\
$B$ & $A$ & $A$ & $E$ & $E$ \\
$E$ & $O$ & $A$ & $B$ & $E$\end{tabular}\text{.}$$
Therefore, the associated commutative bounded BCK-algebra $\left( W,\ast
_{2},O\right) $ has the multiplication given below: $$\begin{tabular}{l|llll}
$\ast _{2}$ & $O$ & $A$ & $B$ & $E$ \\ \hline
$O$ & $O$ & $O$ & $O$ & $O$ \\
$A$ & $A$ & $O$ & $A$ & $O$ \\
$B$ & $B$ & $B$ & $O$ & $O$ \\
$E$ & $E$ & $B$ & $A$ & $O$\end{tabular}\text{.}$$The proper subalgebras of this algebra are: $\{O,A\},\{O,B\},\{O,E\},\{O,A,B\}$. The proper ideals are: $\{O,A\},\{O,B\}$.
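The ideals listed above can be confirmed by brute force. The sketch below uses the standard BCK-ideal definition from the literature ($\theta \in I$, and $x\ast y\in I$ together with $y\in I$ implies $x\in I$) and recovers exactly $\{O,A\}$ and $\{O,B\}$ as the nontrivial proper ideals of $\left( W,\ast _{2},O\right) $.

```python
from itertools import combinations

# Brute-force verification of the proper ideals claimed in Example 3.2,
# using the standard BCK ideal definition: O ∈ I, and x*y ∈ I with
# y ∈ I implies x ∈ I.  Trivial ideals ({O} and W itself) are skipped.

elems = ["O", "A", "B", "E"]
star2 = {  # rows of the table for *_2
    "O": {"O": "O", "A": "O", "B": "O", "E": "O"},
    "A": {"O": "A", "A": "O", "B": "A", "E": "O"},
    "B": {"O": "B", "A": "B", "B": "O", "E": "O"},
    "E": {"O": "E", "A": "B", "B": "A", "E": "O"},
}

def is_ideal(I):
    if "O" not in I:
        return False
    return all(not (star2[x][y] in I and y in I) or x in I
               for x in elems for y in elems)

proper_ideals = [set(c) for k in range(2, len(elems))
                 for c in combinations(elems, k) if is_ideal(set(c))]
print(proper_ideals)   # the only proper ideals are {O, A} and {O, B}
```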
**Example 3.3.** Let $W=\{O\leq A\leq B\leq C\leq D\leq E\}$ be a totally ordered set. On $W$ we define a multiplication $\circ _{3}~$given in the below table, such that $\left( W,\circ _{3},E\right) $ is a Wajsberg algebra. We have $\overline{A}=D$, $\overline{B}=C$, $\overline{C}=B$, $\overline{D}=A$. $$\begin{tabular}{l|llllll}
$\circ _{3}$ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$ \\ \hline
$O$ & $E$ & $E$ & $E$ & $E$ & $E$ & $E$ \\
$A$ & $D$ & $E$ & $E$ & $E$ & $E$ & $E$ \\
$B$ & $C$ & $D$ & $E$ & $E$ & $E$ & $E$ \\
$C$ & $B$ & $C$ & $D$ & $E$ & $E$ & $E$ \\
$D$ & $A$ & $B$ & $C$ & $D$ & $E$ & $E$ \\
$E$ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$\end{tabular}.$$Therefore, the associated commutative bounded BCK-algebra $\left( W,\ast _{3},O\right) $ has the multiplication defined below:
$$\begin{tabular}{l|llllll}
$\ast _{3}$ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$ \\ \hline
$O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ \\
$A$ & $A$ & $O$ & $O$ & $O$ & $O$ & $O$ \\
$B$ & $B$ & $A$ & $O$ & $O$ & $O$ & $O$ \\
$C$ & $C$ & $B$ & $A$ & $O$ & $O$ & $O$ \\
$D$ & $D$ & $C$ & $B$ & $A$ & $O$ & $O$ \\
$E$ & $E$ & $D$ & $C$ & $B$ & $A$ & $O$\end{tabular}.$$
The proper subalgebras of this algebra are: $\{O,A\},\{O,B\},\{O,C\},\{O,D\}, $$\{O,E\},\{O,A,B\},\{O,B,D\},\{O,A,B,C\},\{O,A,B,C,D\}$. This algebra has no proper ideals.
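For the chain algebra of Example 3.3, reading the table shows that $\ast _{3}$ is truncated subtraction on $\{0,1,...,5\}$ (with $O=0,...,E=5$), so the subalgebras can be enumerated by brute force; the sketch below recovers exactly the nine proper subalgebras listed above.

```python
from itertools import combinations

# Brute-force check of the proper subalgebras listed in Example 3.3.
# Reading the table, *_3 is truncated subtraction on the chain
# O < A < B < C < D < E identified with 0 < 1 < 2 < 3 < 4 < 5.

names = "OABCDE"

def star3(x, y):
    return max(x - y, 0)      # truncated subtraction

def is_subalgebra(S):
    return all(star3(x, y) in S for x in S for y in S)

# proper subalgebras: closed subsets with 2 <= |S| < 6
subs = [set(c) for k in range(2, 6)
        for c in combinations(range(6), k) if is_subalgebra(set(c))]
found = {"".join(names[i] for i in sorted(S)) for S in subs}
print(sorted(found))
```

Note that closure alone forces $O\in S$, since $x\ast x=O$ for every $x$.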
**Example 3.4.** Let $W=\{O\leq A\leq B\leq C\leq D\leq E\}$ be a totally ordered set. On $W$ we define a multiplication $\circ _{4}~$given in the below table, such that $\left( W,\circ _{4},E\right) $ is a Wajsberg algebra. We have $\overline{A}=D$, $\overline{B}=C$, $\overline{C}=B$, $\overline{D}=A$. $$\begin{tabular}{l|llllll}
$\circ _{4}$ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$ \\ \hline
$O$ & $E$ & $E$ & $E$ & $E$ & $E$ & $E$ \\
$A$ & $D$ & $E$ & $E$ & $D$ & $E$ & $E$ \\
$B$ & $C$ & $D$ & $E$ & $C$ & $D$ & $E$ \\
$C$ & $B$ & $B$ & $B$ & $E$ & $E$ & $E$ \\
$D$ & $A$ & $B$ & $B$ & $D$ & $E$ & $E$ \\
$E$ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$\end{tabular}\ .$$Therefore, the associated commutative bounded BCK-algebra $\left( W,\ast _{4},O\right) $ has the multiplication given in the following table: $$\begin{tabular}{l|llllll}
$\ast _{4}$ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$ \\ \hline
$O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ \\
$A$ & $A$ & $O$ & $O$ & $A$ & $O$ & $O$ \\
$B$ & $B$ & $A$ & $O$ & $B$ & $A$ & $O$ \\
$C$ & $C$ & $C$ & $C$ & $O$ & $O$ & $O$ \\
$D$ & $D$ & $C$ & $C$ & $A$ & $O$ & $O$ \\
$E$ & $E$ & $D$ & $C$ & $B$ & $A$ & $O$\end{tabular}\ \text{.}$$The proper subalgebras of this algebra are: $\{O,A\},\{O,B\},\{O,C\},\{O,D\}, $$\{O,E\},\{O,A,B\},\{O,A,B,C\},\{O,A,B,C,D\},\{O,A,C\},\{O,A,C,D\}$. The proper ideals of this algebra are: $\{O,A,B\},\{O,C\}$.
**Example 3.5.** Let $W=\{O\leq A\leq B\leq C\leq D\leq E\}$ be a totally ordered set. On $W$ we define a multiplication $\circ _{5}~$given in the below table, such that $\left( W,\circ _{5},E\right) $ is a Wajsberg algebra. We have $\overline{A}=C$, $\overline{B}=D$, $\overline{C}=A$, $\overline{D}=B$. $$\begin{tabular}{l|llllll}
$\circ _{5}$ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$ \\ \hline
$O$ & $E$ & $E$ & $E$ & $E$ & $E$ & $E$ \\
$A$ & $C$ & $E$ & $A$ & $D$ & $D$ & $E$ \\
$B$ & $D$ & $E$ & $E$ & $D$ & $D$ & $E$ \\
$C$ & $A$ & $E$ & $A$ & $E$ & $E$ & $E$ \\
$D$ & $B$ & $A$ & $B$ & $A$ & $E$ & $E$ \\
$E$ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$\end{tabular}.$$Therefore, the associated commutative bounded BCK-algebra $\left( W,\ast _{5},O\right) $ has the multiplication defined in the following table:$$\begin{tabular}{l|llllll}
$\ast _{5}$ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$ \\ \hline
$O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ \\
$A$ & $A$ & $O$ & $C$ & $B$ & $B$ & $O$ \\
$B$ & $B$ & $O$ & $O$ & $B$ & $B$ & $O$ \\
$C$ & $C$ & $O$ & $C$ & $O$ & $O$ & $O$ \\
$D$ & $D$ & $C$ & $D$ & $C$ & $O$ & $O$ \\
$E$ & $E$ & $C$ & $D$ & $A$ & $B$ & $O$\end{tabular}.$$
The proper subalgebras of this algebra are: $\{O,A\},\{O,B\},\{O,C\},\{O,D\}, $$\{O,E\},\{O,B,C\},\{O,C,D\},\{O,A,B,C\},\{O,A,B,C,D\}.$
All proper ideals are: $\{O,C,D\}$, $\{O,B\}$.
**Example 3.6.** Let $W=\{O\leq X\leq Y\leq Z\leq T\leq U\leq V\leq E\}$ be a totally ordered set. On $W$ we define a multiplication $\circ _{6}~$given in the below table, such that $\left( W,\circ
_{6},E\right) $ is a Wajsberg algebra. We have $\overline{X}=V$, $\overline{Y}=U$, $\overline{Z}=T$. Therefore the algebra $W$ has the following multiplication table:$$\begin{tabular}{l|llllllll}
$\circ _{6}$ & $O$ & $X$ & $Y$ & $Z$ & $T$ & $U$ & $V$ & $E$ \\ \hline
$O$ & $E$ & $E$ & $E$ & $E$ & $E$ & $E$ & $E$ & $E$ \\
$X$ & $V$ & $E$ & $E$ & $E$ & $E$ & $E$ & $E$ & $E$ \\
$Y$ & $U$ & $V$ & $E$ & $E$ & $E$ & $E$ & $E$ & $E$ \\
$Z$ & $T$ & $U$ & $V$ & $E$ & $E$ & $E$ & $E$ & $E$ \\
$T$ & $Z$ & $T$ & $U$ & $V$ & $E$ & $E$ & $E$ & $E$ \\
$U$ & $Y$ & $Z$ & $T$ & $U$ & $V$ & $E$ & $E$ & $E$ \\
$V$ & $X$ & $Y$ & $Z$ & $T$ & $U$ & $V$ & $E$ & $E$ \\
$E$ & $O$ & $X$ & $Y$ & $Z$ & $T$ & $U$ & $V$ & $E$\end{tabular}.$$From here, we get that the associated commutative bounded BCK-algebra $\left( W,\ast _{6},O\right) $ has the multiplication given in the below table:$$\begin{tabular}{l|llllllll}
$\ast _{6}$ & $O$ & $X$ & $Y$ & $Z$ & $T$ & $U$ & $V$ & $E$ \\ \hline
$O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ \\
$X$ & $X$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ \\
$Y$ & $Y$ & $X$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ \\
$Z$ & $Z$ & $Y$ & $X$ & $O$ & $O$ & $O$ & $O$ & $O$ \\
$T$ & $T$ & $Z$ & $Y$ & $X$ & $O$ & $O$ & $O$ & $O$ \\
$U$ & $U$ & $T$ & $Z$ & $Y$ & $X$ & $O$ & $O$ & $O$ \\
$V$ & $V$ & $U$ & $T$ & $Z$ & $Y$ & $X$ & $O$ & $O$ \\
$E$ & $E$ & $V$ & $U$ & $T$ & $Z$ & $Y$ & $X$ & $O$\end{tabular}\text{.}$$The proper subalgebras of this algebra are: $\{O,J\},J\in \{X,Y,Z,T,U,V,E\},$$\{O,X,Y\},\{O,X,Y,Z\},\{O,X,Y,Z,T\},\{O,X,Y,Z,T,U\},\{O,X,Y,Z,T,U,V\},$$\{O,Y,T\},\{O,Z,V\},\{O,Y,T,V\}$. There are no proper ideals.
**Example 3.7.** Let $W=\{O\leq X\leq Y\leq Z\leq T\leq U\leq V\leq E\}$ be a totally ordered set. On $W$ we define a multiplication $\circ _{7}~$given in the below table, such that $\left( W,\circ _{7},E\right) $ is a Wajsberg algebra. $$\begin{tabular}{l|llllllll}
$\circ _{7}$ & $O$ & $X$ & $Y$ & $Z$ & $T$ & $U$ & $V$ & $E$ \\ \hline
$O$ & $E$ & $E$ & $E$ & $E$ & $E$ & $E$ & $E$ & $E$ \\
$X$ & $V$ & $E$ & $V$ & $E$ & $V$ & $E$ & $V$ & $E$ \\
$Y$ & $U$ & $U$ & $E$ & $E$ & $E$ & $E$ & $E$ & $E$ \\
$Z$ & $T$ & $U$ & $V$ & $E$ & $V$ & $E$ & $V$ & $E$ \\
$T$ & $Z$ & $Z$ & $U$ & $U$ & $E$ & $E$ & $E$ & $E$ \\
$U$ & $Y$ & $Z$ & $T$ & $U$ & $T$ & $E$ & $V$ & $E$ \\
$V$ & $X$ & $X$ & $Z$ & $Z$ & $U$ & $U$ & $E$ & $E$ \\
$E$ & $O$ & $X$ & $Y$ & $Z$ & $T$ & $U$ & $V$ & $E$\end{tabular}.$$Therefore, the associated commutative bounded BCK-algebra $\left( W,\ast _{7},O\right) $ has the multiplication defined in the below table:$$\begin{tabular}{l|llllllll}
$\ast _{7}$ & $O$ & $X$ & $Y$ & $Z$ & $T$ & $U$ & $V$ & $E$ \\ \hline
$O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ \\
$X$ & $X$ & $O$ & $X$ & $O$ & $X$ & $O$ & $X$ & $O$ \\
$Y$ & $Y$ & $Y$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ \\
$Z$ & $Z$ & $Y$ & $X$ & $O$ & $X$ & $O$ & $X$ & $O$ \\
$T$ & $T$ & $T$ & $Y$ & $Y$ & $O$ & $O$ & $O$ & $O$ \\
$U$ & $U$ & $T$ & $Z$ & $Y$ & $Z$ & $O$ & $X$ & $O$ \\
$V$ & $V$ & $V$ & $T$ & $T$ & $Y$ & $Y$ & $O$ & $O$ \\
$E$ & $E$ & $V$ & $U$ & $T$ & $Z$ & $Y$ & $X$ & $O$\end{tabular}.$$The proper subalgebras of this algebra are: $\{O,J\},J\in \{X,Y,Z,T,U,V,E\},$$\{O,X,Y\},\{O,X,Y,Z\},\{O,T,Y\},\{O,T,Y,V\},\{O,X,Y,Z,T\},\{O,X,Y,Z,T,U\},$$\{O,X,Y,Z,T,U,V\}.$
All proper ideals are: $\{O,Y,T,V\}$ and $\{O,X\}$.
**Conclusions.** In this chapter, we provided an algorithm for finding examples of finite commutative bounded BCK-algebras, using their connections with Wajsberg algebras. This algorithm allows us to find such examples regardless of the order of the algebra, which is very useful, since examples of such algebras are rarely encountered in the literature.
$$$$
**References**$$$$
\[AAT; 96\] Abujabal, H.A.S., Aslam, M., Thaheem, A.B., *A representation of bounded commutative BCK-algebras*, Internat. J. Math. & Math. Sci., 19(4)(1996), 733-736.
\[BV; 10\] Belohlavek, R., Vychodil, V., *Residuated Lattices of Size* $\leq 12$, Order, 27(2010), 147-161.
\[CHA; 58\] Chang, C.C.,* Algebraic analysis of many-valued logic*, Trans. Amer. Math. Soc. 88(1958), 467-490.
\[COM; 00\] Cignoli, R. L. O, Ottaviano, I. M. L. D, Mundici, D., *Algebraic foundations of many-valued reasoning*, Trends in Logic, Studia Logica Library, Dordrecht, Kluwer Academic Publishers, 7(2000).
\[Du; 99\] W.A. Dudek, *On embedding Hilbert algebras in BCK-algebras*, Mathematica Moravica, **3(1999)**, 25-28.
\[FHSV; 19\] Flaut, C., Hošková-Mayerová, Š., Saeid, A., B., Vasile, R., *Wajsberg algebras of order* $n,n\leq 9$, https://arxiv.org/pdf/1905.05755.pdf
\[FV; 19\] Flaut, C., Vasile, R., *Wajsberg algebras arising from binary block codes*, https://arxiv.org/pdf/1904.07169.pdf
\[II; 66\] Imai, Y., Iseki, K., *On axiom systems of propositional calculi*, Proc. Japan Academic, **42(1966)**, 19-22.
\[JS; 11\] Jun, Y. B., Song, S. Z., *Codes based on BCK-algebras*, Inform. Sciences., **181(2011)**, 5102-5109.
\[Me-Ju; 94\] Meng, J., Jun, Y. B., *BCK-algebras*, Kyung Moon Sa Co. Seoul, Korea, 1994.
\[Mu; 07\] Mundici, D., *MV-algebras-a short tutorial*, Department of Mathematics *Ulisse Dini*, University of Florence, 2007.$$$$
Cristina FLAUT
[Faculty of Mathematics and Computer Science, ]{}
[Ovidius University of Constanţa, România,]{}
[Bd. Mamaia 124, 900527,]{}
[http://www.univ-ovidius.ro/math/]{}
[e-mail: cflaut@univ-ovidius.ro; cristina\_flaut@yahoo.com]{}$$$$
Šárka Hošková-Mayerová
[Department of Mathematics and Physics,]{}
[University of Defence, Brno, Czech Republic]{}
[e-mail: sarka.mayerova@unob.cz]{}$$$$
Radu Vasile,
[PhD student at Doctoral School of Mathematics,]{}
[Ovidius University of Constanţa, România]{}
[rvasile@gmail.com]{}
---
abstract: 'This paper presents a new generalized Mackey-Glass model with a non-linear harvesting term and mixed delays. The main purpose of this work is to study the existence and the exponential stability of the pseudo almost periodic solution for the considered model. By using fixed point theorem and under suitable Lyapunov functional, sufficient conditions are given to study the pseudo almost periodic solution for the considered model. Moreover, an illustrative example is given to demonstrate the effectiveness of the obtained results.'
author:
- Haifa Ben Fredj
- Farouk Chérif
date: 'Received: date / Accepted: date'
title: 'Positive pseudo almost periodic solutions to a class of hematopoiesis model: Oscillations and Dynamics'
---
Introduction {#intro}
============
In 1977, Mackey and Glass [@L] proposed the following non-linear differential equation with constant delay $$x'(t) = -\alpha x(t) + \dfrac{\beta x(t-\tau)}{\theta^n + x^n(t-\tau)},\quad 0 <n,$$ in order to describe the concentration of mature cells in the blood circulation. Here $\alpha$, $\beta$, $\tau$ and $\theta$ are positive constants, the unknown $x$ stands for the density of mature cells in blood circulation, $\alpha$ is the rate at which cells are lost from the circulation at time $t$, the flux
$f(x(t-\tau)):= \dfrac{\beta x(t-\tau)}{\theta^n+ x^n(t-\tau)}$
of cells in the circulation depends on $x(t - \tau)$ at the time $t -\tau$, where $\tau$ is the time delay between the production of immature cells in the bone marrow and their maturation.\
\
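The dynamics of this equation can be explored with a simple forward-Euler scheme. In the sketch below, the parameter values ($\alpha=0.1$, $\beta=0.2$, $\tau=17$, $n=10$, $\theta=1$) are the classical chaotic regime for this equation and are our own choice, not taken from the text.

```python
# A minimal forward-Euler integration of the Mackey-Glass equation (1).
# The parameter values (alpha = 0.1, beta = 0.2, tau = 17, n = 10,
# theta = 1) are a classic choice of ours, not taken from the text.

alpha, beta, tau, n, theta = 0.1, 0.2, 17.0, 10, 1.0
dt = 0.05
delay = round(tau / dt)          # delay expressed in time steps

x = [1.2] * (delay + 1)          # constant history x(t) = 1.2 on [-tau, 0]
steps = round(200 / dt)          # integrate up to t = 200
for _ in range(steps):
    xd = x[-delay - 1]           # x(t - tau)
    flux = beta * xd / (theta ** n + xd ** n)
    x.append(x[-1] + dt * (-alpha * x[-1] + flux))

print(min(x), max(x))            # the trajectory stays positive and bounded
```

Despite its simple form, the trajectory is irregular for these values, which is part of why the model attracted so much attention.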
Since its introduction in the literature, the hematopoiesis model has gained a lot of attention and various extensions. Hence, under some additional conditions, some authors [@K1; @K2; @M1; @N1] considered an extended version of eq. (1) and obtained the existence and attractivity of the unique positive periodic and almost periodic solutions of the following model $$x'(t)= {\displaystyle -a(t) x(t) +\sum^{N}_{i=1} \dfrac{b_i(t)x^m(t-\tau_i(t))}{1 + x^n(t-\tau_i(t))}},\quad 0\leq m\leq 1,\ 0<n.$$
Recently, there have been extensive and valuable contributions dealing with oscillations of the hematopoiesis model with and without delays, see, e.g., [@K1; @B; @K2; @O; @H] and references therein.\
Also, the stability of various models has recently been thoroughly investigated by many authors; see, e.g., [@M; @A; @M1; @F; @B1; @N] and references therein.\
\
As is well known, in real-world applications, equations with a harvesting term generally provide a more realistic and reasonable description for models of mathematical biology, and in particular for population dynamics. Hence, the investigation of biological dynamics with harvesting is a meaningful subject in the exploitation of biological resources, which is related to the optimal management of renewable resources [@K; @P].\
\
Besides, the study of oscillations and dynamical systems of biological origin is an exciting topic. One can find valuable results in this field in [@H1; @H2; @B0; @M0; @M2] and references therein.\
\
Motivated by the above discussion, the main subject of this paper is to study the existence and the global attractivity of the unique positive pseudo almost periodic solution of the generalized Mackey-Glass model with a nonlinear harvesting term and mixed delays. Roughly speaking, we shall consider the following hematopoiesis model $$\begin{aligned}
x'(t)=-a(t)x(t)+\sum^{N}_{i=1} \dfrac{b_i(t)x^m(t-\tau_i(t))}{1 + x^n(t-\tau_i(t))}- H(t,x(t-\sigma(t))), \quad 1<m \leq n, t\in {\mathbb{R}}.\end{aligned}$$ However, to the authors' best knowledge, there are no publications considering positive pseudo almost periodic solutions for the Mackey-Glass model with a harvesting term and $1<m\leq n$.\
\
The remainder of this paper is organized as follows: In Section 1, we introduce some necessary notations, definitions and fundamental properties of the space PAP(${\mathbb{R}}$,${\mathbb{R}}^+$) which will be used in the paper. In Section 2, the model is given. In Section 3, the existence of the unique positive pseudo almost periodic solution for the considered system is established. Section 4 is devoted to the stability of the pseudo almost periodic solution. In Section 5, based on a suitable Lyapunov function and the Dini derivative, we give some sufficient conditions to ensure that all solutions converge exponentially to the positive pseudo almost periodic solution of equation (3). Finally, an illustrative example is given. It should be mentioned that the main results of this paper are Theorems 1 and 2.
Preliminaries {#sec:1}
=============
In this section, we recall some basic definitions and lemmas which will be used in what follows. In this paper, $BC({\mathbb{R}}, {\mathbb{R}})$ denotes the set of bounded continuous functions from ${\mathbb{R}}$ to ${\mathbb{R}}$. Note that $(BC({\mathbb{R}}, {\mathbb{R}}), \|.\|_{\infty})$ is a Banach space, where the sup norm is $$\|f\|_{\infty} :=\underset{t\in {\mathbb{R}}}{sup} |f(t)|.$$
A function $u(.) \in BC({\mathbb{R}}, {\mathbb{R}})$ is said to be almost periodic (a.p.) on ${\mathbb{R}}$ if, for any $\epsilon > 0$, the set $$T(u, \epsilon) = \{\delta; |u(t + \delta) - u(t)| < \epsilon, \text{ for all }t \in {\mathbb{R}}\}$$ is relatively dense; that is, for any $\epsilon > 0$, it is possible to find a real number $l = l(\epsilon) > 0$ such that any interval of length $l(\epsilon)$ contains a number $\delta= \delta(\epsilon)$ with $$|u(t + \delta) - u(t)| <\epsilon, \text{ for all }t \in {\mathbb{R}}.$$
We denote by $AP({\mathbb{R}}, {\mathbb{R}})$ the set of the almost periodic functions from ${\mathbb{R}}$ to ${\mathbb{R}}$.
Let $u_i (.)$, $1 \leq i \leq m$, denote almost periodic functions and let $\epsilon > 0$ be an arbitrary real number. Then there exists a positive real number $L = L(\epsilon)> 0$ such that every interval of length $L$ contains at least one common $\epsilon$-almost period of the family of functions $u_i (.)$, $1 \leq i \leq m.$
Besides, the concept of pseudo almost periodicity (p.a.p) was introduced by Zhang [@D] in the early nineties. It is a natural generalization of the classical almost periodicity. Precisely, define the class of functions $PAP_0({\mathbb{R}}, {\mathbb{R}})$ as follows $$\bigg\{f \in BC({\mathbb{R}}, {\mathbb{R}}); \underset{T\rightarrow+\infty}{lim} \dfrac{1}{2T} \int_{-T}^T |f(t)|dt = 0\bigg\} .$$ A function $f \in BC({\mathbb{R}}, {\mathbb{R}})$ is called pseudo almost periodic if it can be expressed as $f = h + \phi$, where $h \in AP({\mathbb{R}}, {\mathbb{R}})$ and $\phi \in PAP_0({\mathbb{R}}, {\mathbb{R}})$. The collection of such functions will be denoted by $PAP({\mathbb{R}}, {\mathbb{R}})$. The functions h and $\phi$ in the above definition are, respectively, called the almost periodic component and the ergodic perturbation of the pseudo almost periodic function $f$. The decomposition given in definition above is unique. It should be mentioned that pseudo almost periodic functions possess many interesting properties; we shall need only a few of them and for the proofs we shall refer to [@D; @D1; @M3].
Observe that (PAP(${\mathbb{R}}$, ${\mathbb{R}}$), $\|.\|_{\infty}$) is a Banach space and $AP({\mathbb{R}}, {\mathbb{R}})$ is a proper subspace of $PAP({\mathbb{R}}, {\mathbb{R}})$, since the function $\psi(t) = \cos(2 t) +\sin(2 \sqrt{5}t) +e^{-t^2 |\sin (t)|}$ is a pseudo almost periodic function but not an almost periodic one.
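The ergodic perturbation in this example can be illustrated numerically: the mean $\frac{1}{2T}\int_{-T}^{T} e^{-t^2|\sin t|}\,dt$ shrinks as $T$ grows, as required for membership in $PAP_0({\mathbb{R}},{\mathbb{R}})$. The grid step and the values of $T$ in the sketch below are our own choices.

```python
import math

# Numerical illustration that phi(t) = exp(-t^2 |sin t|) belongs to
# PAP_0: its mean (1/2T) * integral of |phi| over [-T, T] shrinks as
# T grows. The grid step and the values of T are our own choices.

def ergodic_mean(T, dt=1e-3):
    n = round(T / dt)
    # phi is even, so integrate on [0, T] and use symmetry
    total = sum(math.exp(-(k * dt) ** 2 * abs(math.sin(k * dt))) * dt
                for k in range(n))
    return total / T             # = (1/2T) * 2 * integral over [0, T]

print(ergodic_mean(5.0), ergodic_mean(50.0))
```

The integrand equals $1$ at every multiple of $\pi$, but the spikes become so narrow that their total contribution stays bounded, which is why the mean still tends to $0$.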
[@D] If f,g $\in PAP({\mathbb{R}},{\mathbb{R}})$, then the following assertions hold:
1. f.g, f+g $\in PAP({\mathbb{R}}, {\mathbb{R}})$.
2. $\dfrac{f}{g} \in PAP({\mathbb{R}}, {\mathbb{R}})$, if $\underset{t\in {\mathbb{R}}}{inf}|g(t)|>0$.
[@D] Let $\Omega \subseteq {\mathbb{R}}$ and let K be any compact subset of $\Omega$. We define the class of functions\
$PAP_0(\Omega\times{\mathbb{R}}, {\mathbb{R}})$ as follows $$\bigg\{\psi\in C(\Omega\times {\mathbb{R}}; {\mathbb{R}}); \underset{T\rightarrow+\infty}{lim} \dfrac{1}{2T} \int_{-T}^T |\psi(s,t)|dt = 0\bigg\}$$ uniformly with respect to $s\in K$.
(Definition 2.12, [@E])
Let $\Omega \subseteq {\mathbb{R}}$. A continuous function f : ${\mathbb{R}}\times\Omega \longrightarrow {\mathbb{R}}$ is called pseudo almost periodic (p.a.p.) in $t$, uniformly with respect to $x \in \Omega$, if the two following conditions are satisfied:\
i) $\forall x \in \Omega, f(., x) \in PAP({\mathbb{R}},{\mathbb{R}}),$\
ii) for all compact K of $\Omega$, $\forall \epsilon> 0, \exists \delta > 0, \forall t \in \mathbb{R}, \forall x_1, x_2 \in K$,
$|x_1 - x_2| \leq \delta \Rightarrow |f(t, x_1) - f(t, x_2)| \leq \epsilon$.
Denote by $PAP_U(\Omega\times {\mathbb{R}}; {\mathbb{R}})$ the set of all such functions.
The model {#sec:2}
=========
In order to generalize and improve the above models, let us consider the following Mackey-Glass model with a non-linear harvesting term and several concentrated delays $$\begin{aligned}
x'(t)=-a(t)x(t)+\sum^{N}_{i=1} \dfrac{b_i(t)x^m(t-\tau_i(t))}{1 + x^n(t-\tau_i(t))}- H(t,x(t-\sigma(t)))\end{aligned}$$ where $t \in {\mathbb{R}}$ and
1. The function a : $\mathbb{R}\longrightarrow\mathbb{R^+}$ is pseudo almost periodic (p.a.p.) and $\underset{t\in \mathbb{R}}{inf} a(t) >0.$
2. For all 1$\leq i \leq N$, the functions $\tau_i,\sigma, b_i$ : $\mathbb{R}\longrightarrow\mathbb{R^+}$ are p.a.p.
3. The term H $\in PAP_U(\mathbb{R}\times\mathbb{R},{\mathbb{R}}^+$) satisfies the Lipschitz condition : $\exists L_H > 0$ such that $${\displaystyle \mid H(t,x)-H(t,y) \mid < L_H \mid x-y \mid,\quad \forall x,y,t \in \mathbb{R}}.$$
Throughout the rest of this paper, for every bounded function $f : {\mathbb{R}}\rightarrow {\mathbb{R}}$, we denote $$f^+ = \underset {t\in {\mathbb{R}}}{sup} f(t), f^- =\underset {t\in {\mathbb{R}}}{inf} f(t).$$ Set $r =\underset{t \in \mathbb{R}}{sup}\ \underset{1\leq i\leq N}{max} \bigg(\tau_i(t),\sigma(t)\bigg).$ Denote by $BC ([-r, 0] , {\mathbb{R}}^+)$ the set of bounded continuous functions from $[-r, 0]$ to ${\mathbb{R}}^+$. If $x(.)$ is defined on $[-r + t_0, \sigma[$ with $t_0, \sigma \in {\mathbb{R}}$, then we define $x_t \in C([-r, 0] , {\mathbb{R}})$ by $x_t(\theta) = x(t + \theta)$ for all $\theta \in [-r, 0]$. Notice that we restrict ourselves to ${\mathbb{R}}^+$-valued functions, since only non-negative solutions of (4) are biologically meaningful. So, let us consider the following initial condition $$x_{t_0} = \phi,\quad \phi \in BC ([-r, 0] , {\mathbb{R}}^+) \text{ and }\phi (0) > 0.$$ We write $x_t (t_0, \phi)$ for a solution of the admissible initial value problem (4) and (5). Also, let $[t_0, \eta(\phi)[$ be the maximal right-interval of existence of $x_t(t_0, \phi)$.
Main results {#sec:3}
============
As pointed out in the introduction, we shall give here sufficient conditions which ensure the existence and uniqueness of the pseudo almost periodic solution of (4). In order to prove this result, we will state the following lemmas. For simplicity, we denote $x_t(t_0, \phi)$ by $x(t)$ for all $t\in {\mathbb{R}}$.
A positive solution $x(.)$ of model (4)-(5) is bounded on $[t_0, \eta(\phi)[$, and $\eta(\phi)=+\infty$.
For each $t \in [t_0, \eta(\phi)[$, the solution verifies $$x(t)= {\displaystyle e^{- \int^t_{t_0} a(u) du} \phi(0) +\int^t_{t_0} e^{-\int^t_{s} a(u) du}\bigg[\sum^{N}_{i=1} \dfrac{b_i(s)x^m(s-\tau_i(s))}{1 + x^n(s-\tau_i(s))}- H(s,x(s-\sigma(s)))\bigg]ds}.$$ So, by $\underset{x\geq0}{sup}\dfrac{x^m}{1+x^n}\leq 1,\text{ }\forall 1<m\leq n$, we obtain $$\begin{array}{lll}
x(t)\leq {\displaystyle \phi(0) +\int^t_{t_0} e^{-a^-(t-s)}\sum^{N}_{i=1} b_i^+ds}&\leq {\displaystyle \phi(0) + \dfrac{1}{a^-}[1- e^{-a^-(t-t_0)}]\sum^{N}_{i=1} b_i^+}\\
&\leq \phi(0) + \dfrac{1}{a^-}\sum^{N}_{i=1} b_i^+< +\infty,
\end{array}$$ which proves that $x(.)$ is bounded. The second part of the conclusion follows from Theorem 2.3.1 in [@I]: we have $\eta(\phi)=+\infty$.
If $a^->\sum^N_{i=1}b_i^+,$ then each positive solution $x_t(t_0,\phi)$ of model (4)-(5) satisfies $$x(t) \underset{t \rightarrow +\infty}{\longrightarrow} 0.$$
We define the continuous function $$\begin{aligned}
G : [0,1]
&\longrightarrow& {\mathbb{R}}\\
y &\longmapsto& y-a^-+\sum^N_{i=1}b_i^+ e^{y r}.\end{aligned}$$ From the hypothesis, we obtain $G(0)<0$; then, by continuity, there exists $\lambda \in ]0,1]$ such that $$G(\lambda)<0. \qquad (C.1)$$ Let us consider the function $W(t)=x(t)e^{\lambda t}$. Calculating the left derivative of $W(.)$ and using the inequality $$\dfrac{x^m}{1+x^n} \leq x ,\quad \forall 1<m\leq n,$$we obtain
$$\begin{aligned}
D^-W(t) &=& \lambda x(t)e^{\lambda t}+x'(t)e^{\lambda t}\\
\\
&=&\lambda x(t) e^{\lambda t}+e^{\lambda t}[-a(t)x(t)+\sum_{i=1}^{N} b_i(t) \frac{x^m(t-\tau_i(t))}{1+x^n(t-\tau_i(t))}-H(t,x(t-\sigma(t))]\\
\\ &\leq& e^{\lambda t}((\lambda- a^-)x(t) + \sum_{i=1}^{N} b^+_i x(t-\tau_i(t))).
\end{aligned}$$
Let us prove that, with $M >\underset{t \in [t_0-r,\ t_0]}{sup}\ x(t)$, $$\begin{aligned}
W(t) &=& x(t)e^{\lambda t}< e^{\lambda t_0} M = Q, \forall t\geq t_0.
\end{aligned}$$ Suppose that there exists $t_1>t_0$ such that $$\begin{aligned}
W(t_1) &=& Q, W(t)<Q, \text{ for all } t_0-r\leq t< t_1.
\end{aligned}$$ Then $$\begin{aligned}
0\leq D^-W(t_1)&\leq&(\lambda- a^-)x(t_1) e^{\lambda t_1}+ \sum_{i=1}^{N} b^+_i x(t_1-\tau_i(t_1))e^{\lambda t_1} \\
\\ &=& (\lambda- a^-) Q+ \sum_{i=1}^{N} b^+_i x(t_1-\tau_i(t_1))e^{\lambda t_1} e^{\lambda \tau_i(t_1)} e^{-\lambda \tau_i(t_1)}\\
\\ &\leq&(\lambda- a^-) Q+ \sum_{i=1}^{N} b^+_i x(t_1-\tau_i(t_1)) e^{\lambda (t_1-\tau_i(t_1))} e^{\lambda r}\\
\\&\leq& [\lambda- a^-+ \sum_{i=1}^{N} b^+_i e^{\lambda r}] Q\\
\\&<& 0 \qquad (\text{by \textbf{(C.1)}})
\end{aligned}$$
which is a contradiction. Consequently, $x(t) e^{\lambda t}< Q$. Then, $x(t)< e^{-\lambda t}Q \underset{t \rightarrow +\infty}{\longrightarrow} 0$.
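Condition **(C.1)** is easy to check numerically for concrete data. In the sketch below, the sample values $a^-=2$, $b_1^+=b_2^+=0.5$ and $r=1$ are our own choice, not taken from the text.

```python
import math

# A numeric sketch of condition (C.1): with a^- = 2, b_1^+ = b_2^+ = 0.5
# and r = 1 (sample values of ours), G(0) = -a^- + sum b_i^+ = -1 < 0,
# and a grid scan locates lambda in ]0, 1] with G(lambda) < 0.

a_minus, b_plus, r = 2.0, [0.5, 0.5], 1.0

def G(y):
    return y - a_minus + sum(b_plus) * math.exp(y * r)

assert G(0.0) < 0                      # the hypothesis a^- > sum b_i^+
admissible = [y for y in (k / 100 for k in range(1, 101)) if G(y) < 0]
print(admissible[0], admissible[-1])   # smallest / largest admissible lambda
```

Any admissible $\lambda$ then gives the exponential decay rate $x(t)<e^{-\lambda t}Q$ obtained in the proof.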
Set $f_{n,m}(u) =\dfrac{u^m}{1+u^n}$; one can get:\
for $m<n$: $$\left \{
\begin{array}{r c l}
f_{n,m}'(u) = \dfrac{u^{m-1}(m-(n-m)u^n)}{(1 + u^n)^2}> 0,\forall u \in \left[ 0,\sqrt[n]{\dfrac{m}{n-m} }\right] \qquad (C.3)\\
\\f_{n,m}'(u) = \dfrac{u^{m-1}(m-(n- m)u^n)}{(1 + u^n)^2}<0,\forall u \in \left] \sqrt[n]{\dfrac{m}{ n-m} },+\infty \right[ \qquad (C.4),
\end{array}
\right.$$\
and for $m= n$: $$f_{n,m}'(u) = \dfrac{u^{m-1}m}{(1 + u^m)^2}> 0,\forall u \in [ 0,+\infty[.\qquad (C.5)$$
If $m<n$, one can choose $k \in \left]0, \sqrt[n]{\dfrac{m}{ n-m} }\right[ $ and, combining **(C.3)** and **(C.4)**, there exists a constant\
$\overset{\backsim}{ k}>\sqrt[n]{\dfrac{m}{ n-m} } $ such that
$f_{n,m}(k)= f_{n,m}(\overset{\backsim}{k}).$ (C.6)
Moreover, $$\underset{u\geq0}{sup}\dfrac{u^m}{1+u^n}\leq 1, \quad \forall 1<m\leq n. \qquad (C.7)$$\
$$\text{Let }\mathcal{C}^0=\{\psi \in BC([-r,0] , \mathbb{R}^+); k\leq \psi \leq M\}.$$
A positive solution $ x(.)$ of the differential equation is permanent if there exist $t^*\geq 0$ and constants $B > A > 0$ such that $$A \leq x(t) \leq B \quad \text{ for }t \geq t^*.$$
Suppose that there exist two positive constants $M$ and $k$ satisfying:
1. If $m<n$: $$0<k < \sqrt[n]{\dfrac{m}{ n-m} } < M \leq \overset{\backsim}{k}\quad (\text {$\overset{\backsim}{k}$ was given by \textbf{(C.6)}});$$ if $m=n$: $$0<k < M.$$
2. ${\displaystyle -a^- M+\sum^N_{i=1}b_i^+ - H^-}<0$
3. ${\displaystyle -a^+ k+\sum^N_{i=1}b_i^- \dfrac{k^m}{1+k^n} -H^+}> 0$
and $\phi \in \mathcal{C}^0$, then the solution $ x(.)$ of (4)-(5) is permanent and $\eta(\phi)=+\infty$.
Actually, we prove that $x(.)$ is bounded in $[ t_0,\eta(\phi)[ .$\
$\bullet$ First, we claim that
$$x(t) < M, \forall t \in [ t_0,\eta(\phi)[ .\qquad (i)$$
Contrarily, there exists $t_1 \in ] t_0,\eta(\phi)[ $ such that: $$\left \{
\begin{array}{llll}
x(t)<M, \forall t \in \left[t_0-r,t_1\right[\\
x(t_1) = M
\end{array}
\right.$$
Calculating the right derivative of $x(.)$ and by $\textbf{(H2)}$ and **(C.7)**, we obtain $$\begin{array}{ll}
0 \leq x'(t_1)&=-a(t_1) x(t_1)+ \sum^{N}_{i=1} \dfrac{b_i(t_1)x^m(t_1-\tau_i(t_1))}{1 + x^n(t_1-\tau_i(t_1))}- H(t_1,x(t_1-\sigma(t_1)))\\
\\&\leq -a(t_1) M +\sum^{N}_{i=1} b_i(t_1) -H^- \\
\\&\leq -a^- M +\sum^{N}_{i=1} b_i^+-H^-\\
\\& < 0,
\end{array}$$ which is a contradiction; hence (i) holds.\
\
$\bullet $ Next, we prove that
$$k < x(t),\ \forall t \in [ t_0,\eta(\phi)[ .\qquad (ii)$$
Otherwise, there exists $t_2 \in ] t_0,\eta(\phi)[ $ such that $$\left \{
\begin{array}{llll}
x(t)>k, \forall t \in \left[t_0-r,t_2\right[\\
x(t_2) = k
\end{array}
\right.$$
Calculating the right derivative of $x(\cdot)$ and combining with $\textbf{(H3)}$, **(C.3)** and **(C.5)**, we obtain $$\begin{array}{ll}
0 \geq x'(t_2)&=-a(t_2) x(t_2)+ \sum^{N}_{i=1} \dfrac{b_i(t_2)x^m(t_2-\tau_i(t_2))}{1 + x^n(t_2-\tau_i(t_2))}- H(t_2,x(t_2-\sigma(t_2))\\
\\& \geq -a(t_2) k + \sum^{N}_{i=1} \dfrac{b_i(t_2) k^m}{1 +k^n}-H^+\\
\\& \geq -a^+ k + \sum^{N}_{i=1} \dfrac{b_i^- k^m}{1 +k^n}-H^+\\
\\& > 0,\\
\end{array}$$ which is a contradiction, and consequently (ii) holds. From Theorem 2.3.1 in [@I], we have that $\eta(\phi)=+\infty$. The proof of Lemma 4.4 is now complete.
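To illustrate the permanence statement, the sketch below integrates a constant-coefficient instance of the model, $x'(t)=-a\,x(t)+b\,x^m(t-\tau)/(1+x^n(t-\tau))-h$, with a forward Euler scheme. All parameter values here are hypothetical choices satisfying conditions 1-3 of the lemma (with $m=n=2$), not data from the paper:

```python
# Forward Euler integration of a constant-coefficient hematopoiesis model
# with one delay, to visualise the permanence bounds k < x(t) < M.
# Hypothetical parameters: a = 0.38, b = 1, h = 0.005, m = n = 2, tau = 1,
# which satisfy -a*M + b - H_minus < 0 and -a*k + b*k^2/(1+k^2) - h > 0
# for k = 2 and M = 3.29; the history phi = 2.5 lies in [k, M].
def simulate(a=0.38, b=1.0, h=0.005, m=2, n=2, tau=1.0,
             phi=2.5, T=100.0, dt=0.001):
    lag = int(round(tau / dt))
    xs = [phi] * (lag + 1)                     # constant history on [-tau, 0]
    for _ in range(int(T / dt)):
        x, x_del = xs[-1], xs[-1 - lag]
        xs.append(x + dt * (-a * x + b * x_del**m / (1.0 + x_del**n) - h))
    return xs

xs = simulate()
assert all(2.0 < x < 3.29 for x in xs)         # trajectory stays in ]k, M[
```

With these values the trajectory decays monotonically toward an equilibrium near 2.14, never leaving $]k, M[$.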
$$\text{Let }\mathcal{B}=\{\psi \in PAP(\mathbb{R} , \mathbb{R}); k\leq \psi \leq M\}.$$
$\mathcal{B}$ is a closed subset of $PAP({\mathbb{R}},{\mathbb{R}})$.
Let $(\psi_n)_{n \in {\mathbb{N}}} \subset \mathcal{B}$ such that $\psi_n \longrightarrow \psi$. $ \text{Let us prove that } \psi \in \mathcal{B}.$\
\
Clearly, $\psi \in PAP({\mathbb{R}},{\mathbb{R}})$, and we have $$\begin{array}{lll} \psi_n \underset{n\longrightarrow +\infty}{\longrightarrow }\psi &\Leftrightarrow \forall \epsilon> 0, \exists n_0>0 \text{ such that } \mid \psi_n(t)-\psi(t)\mid \leq \epsilon, (\forall t \in {\mathbb{R}}, \forall n>n_0)\\&\Leftrightarrow \forall \epsilon> 0, \exists n_0>0 \text{ such that } -\epsilon \leq \psi_n(t)-\psi(t)\leq\epsilon,(\forall t \in {\mathbb{R}}, \forall n>n_0).\end{array}$$ Let $t \in {\mathbb{R}}$ and $n>n_0$; we then obtain $$-\epsilon+ k \leq \psi(t)=[\psi(t)-\psi_n(t) ]+\psi_n(t) \leq \epsilon +M.$$ Letting $\epsilon \longrightarrow 0$ gives $k \leq \psi(t) \leq M$; so, $\psi \in \mathcal{B}$.
If $f \in PAP_U({\mathbb{R}}\times {\mathbb{R}},{\mathbb{R}})$ and, for each bounded subset $B$ of ${\mathbb{R}}$, $f$ is bounded on ${\mathbb{R}}\times B$, then the Nemytskii operator $$N_f : PAP({\mathbb{R}},{\mathbb{R}}) \longrightarrow PAP({\mathbb{R}}, {\mathbb{R}}) \text { with } N_f(u)=f(.,u(.))$$ is well defined.
[@G] Let f,g $\in AP({\mathbb{R}},{\mathbb{R}})$. If $g^->0$ then $F \in AP({\mathbb{R}},{\mathbb{R}})$ where $$F(t)={\displaystyle \int^t_{-\infty}e^{-\int^t_s g(u)du} f(s) ds}, \quad t\in{\mathbb{R}}.$$
[@D] Let $F \in PAP_U(\mathbb{R}\times {\mathbb{R}},{\mathbb{R}})$ satisfy the Lipschitz condition: there exists $L_F >0$ such that $$| F(t,x)-F(t,y)| \leq L_F |x-y|,\quad \forall x,y\in {\mathbb{R}}\text{ and } t \in \mathbb{R}.$$
If h $\in PAP({\mathbb{R}},{\mathbb{R}})$, then the function $F(.,h(.)) \in PAP({\mathbb{R}},{\mathbb{R}})$.
If conditions $\textbf{(H1)-(H3)}$ and $$\textbf{(H4)}: {\displaystyle \underset{t\in \mathbb{R} }{sup}\bigg\{-a(t)+\sum^N_{i=1}b_i(t)\bigg[\dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg]M^{m-1}+L\bigg\}}<0$$ are fulfilled, then equation (4) has a unique p.a.p. solution $x(\cdot)$ in the region $\mathcal{B}$, given by $${\displaystyle x(t)=\int^t_{-\infty} e^{-\int^t_s a(u)du}\bigg[\sum^{N}_{i=1} \dfrac{b_i(s)x^m(s-\tau_i(s))}{1 + x^n(s-\tau_i(s))}- H(s,x(s-\sigma(s)))\bigg]ds}.$$
Step 1:\
Clearly, $\mathcal{B}$ is a bounded set. Now, let $\psi \in \mathcal{B}$ and $f(t,z)=\psi(t-z)$; since $\psi$ is continuous and the space PAP(${\mathbb{R}},{\mathbb{R}})$ is translation invariant, the function $f \in PAP_U({\mathbb{R}}\times {\mathbb{R}}, {\mathbb{R}}^+)$. Furthermore, $\psi$ is bounded, so $f$ is bounded on ${\mathbb{R}}\times B$. By Lemma 4, the Nemytskii operator $$\begin{aligned}
N_f : PAP({\mathbb{R}},{\mathbb{R}})&\longrightarrow & PAP({\mathbb{R}},{\mathbb{R}})
\\\tau_i &\longmapsto &f(.,\tau_i(.))\end{aligned}$$ is well defined for ${\displaystyle \tau_i \in PAP({\mathbb{R}},\mathbb{R})}$ such that $0 \leq i \leq N$. Consequently,$${\displaystyle \bigg[t\longmapsto\psi(t-\tau_i(t))\bigg]\in PAP({\mathbb{R}},\mathbb{R^+})}\text{ for all } i=1,...,N.$$
Since ${\displaystyle \underset{t\in {\mathbb{R}}}{inf} |1+\psi^n(t-\tau_i(t))|>0}$ and using Property 1 of p.a.p. functions, one has $${\displaystyle\bigg [ t \longmapsto \sum^{N}_{i=1} \dfrac{b_i(t)\psi^m(t-\tau_i(t))}{1 + \psi^n(t-\tau_i(t))} \bigg] \in PAP({\mathbb{R}},{\mathbb{R}})}.$$
Also, since the harvesting term satisfies the Lipschitz condition, Lemma 6 gives $${\displaystyle \bigg [G : t \longmapsto \sum^{N}_{i=1} \dfrac{b_i(t)\psi^m(t-\tau_i(t))}{1 + \psi^n(t-\tau_i(t))} -H(t,\psi(t-\sigma(t)))\bigg] \in PAP({\mathbb{R}},\mathbb{R})}.$$
Step 2: Let us define the operator $\Gamma$ by $$\Gamma(\psi)(t)=\int^t_{-\infty} e^{-\int^t_s a(u)du} G(s) ds$$ We shall prove that $\Gamma$ maps $\mathcal{B}$ into itself. First, since the functions G(.) and a(.) are p.a.p one can write $$G=G_1+ G_2 \text{ and }a=a_1+ a_2,$$where $G_1,a_1\in AP({\mathbb{R}},{\mathbb{R}})$ and $G_2,a_2 \in PAP_0(\mathbb{R}, {\mathbb{R}})$. So, one can deduce $$\begin{array}{ll}
\Gamma(\psi)(t)&{\displaystyle=\int^t_{-\infty} e^{-\int^t_s a_1(u)du} G_1(s)ds+\int^t_{-\infty} \bigg[ e^{-\int^t_s a(u)du} G(s)-e^{-\int^t_s a_1(u)du}G_1(s)\bigg]ds}\\
\\&{\displaystyle=\int^t_{-\infty} e^{-\int^t_s a_1(u)du} G_1(s)ds+\int^t_{-\infty} e^{-\int^t_s a_1(u)du} G_1(s)\bigg[e^{-\int^t_s a_2(u)du}-1\bigg]ds}\\
\\&+ \int^t_{-\infty}e^{-\int^t_s a(u)du}G_2(s)ds\\
\\&{\displaystyle=I(t)+II(t)+III(t). }\end{array}$$\
By Lemma 5, $I(\cdot) \in AP({\mathbb{R}},{\mathbb{R}})$.\
\
Now, we show that II(.) is ergodic. One has $$\begin{array}{lll}
II(t)&={\displaystyle\int^t_{-\infty} e^{-\int^t_s a_1(u)du} G_1(s)\bigg[e^{-\int^t_s a_2(u)du}-1\bigg]ds}\\
\\&={\displaystyle \int^{+\infty}_0 e^{-\int^t_{t-v} a_1(u)du} G_1(t-v)\bigg[e^{-\int^t_{t-v} a_2(u)du}-1\bigg]dv}\\
\\&={\displaystyle \int^{v_0}_0 +\int^{+\infty}_{v_0} e^{-\int^{v}_0a_1(t-s)ds} G_1(t-v)\bigg[e^{-\int^{v}_0 a_2(t-s)ds}-1\bigg]dv}\\
\\&=II_1(t)+II_2(t).
\end{array}$$
Since $a^-_1\geq a^-$ and for large enough $v_0$, we obtain $$\begin{array}{lll}
|II_2(t)|&\leq {\displaystyle \int^{+\infty}_{v_0} e^{-\int^{v}_0 a_1(t-s)ds} |G_1(t-v)|\bigg[|e^{-\int^{v}_0 a_2(t-s)ds}|+1\bigg]dv}\\
\\&\leq {\displaystyle \int^{+\infty}_{v_0} \|G_1\|_{\infty}\bigg[e^{-\int^{v}_0 a(t-s)ds}+ e^{-\int^{v}_0 a_1(t-s)ds}\bigg]dv}\\
\\&\leq {\displaystyle \int^{+\infty}_{v_0} 2 \|G_1\|_{\infty}e^{- a^-v}dv <\dfrac{\epsilon}{2}}.
\end{array}$$ So, $II_2 \in PAP_0({\mathbb{R}},{\mathbb{R}})$.\
It remains to show that $II_1(\cdot) \in PAP_0({\mathbb{R}},{\mathbb{R}})$.\
Firstly, we prove that the following function $$\mu(v,t)={\displaystyle \int^v_0 a_2(t-s)ds, \qquad (v\in [0,v_0], t\in {\mathbb{R}})}$$ is in $PAP_0([0,v_0] \times {\mathbb{R}}, {\mathbb{R}})$. Clearly $|\mu(v,t)|\leq {\displaystyle\int^v_0 |a_2(t-s)|ds}$, then it is obviously sufficient to prove that the function $${\displaystyle\int^v_0 |a_2(.-s)|ds \in PAP_0({\mathbb{R}},{\mathbb{R}}).}$$
We have $a_2(\cdot)\in PAP_0({\mathbb{R}},{\mathbb{R}})$; hence, for $\epsilon >0$ there exists $T_0 >0$ such that $${\displaystyle \dfrac{1}{2T} \int^T_{-T} |a_2(t-s)|dt \leq \dfrac{\epsilon}{v}, \qquad (T\geq T_0, s\in [0,v])}.$$ Since $[0,v]$ is bounded, Fubini's theorem gives, for $\epsilon >0$, a $T_0 >0$ such that $${\displaystyle \dfrac{1}{2T} \int^T_{-T} \int^v_0 |a_2(t-s)|ds dt\leq \epsilon, \qquad T\geq T_0}.$$ So, $${\displaystyle\int^v_0 |a_2(.-s)|ds \in PAP_0({\mathbb{R}},{\mathbb{R}})},$$ which implies the required result.\
Then, we obtain $$\begin{array}{ lll}
{\displaystyle\dfrac{1}{2T} \int^T_{-T}|II_1(t)|dt } & {\displaystyle\leq \dfrac{1}{2T} \int^T_{-T}\int^{v_0}_0 e^{-\int^{v}_0 a_1(t-s)ds} |G_1(t-v)||e^{-\int^{v}_0 a_2(t-s)ds}-1|dv dt }\\
\\& ={\displaystyle\dfrac{1}{2T} \int^T_{-T}\int^{v_0}_0 |G_1(t-v)||e^{-\int^{v}_0 a(t-s)ds}-e^{-\int^{v}_0 a_1(t-s)ds}|dv dt }.
\end{array}$$ By the mean value theorem, $\exists \eta \in ]0,1[$ such that $$\begin{aligned}
{\displaystyle\dfrac{1}{2T} \int^T_{-T}|II_1(t)|dt }\leq &&{\displaystyle \dfrac{1}{2T} \int^T_{-T}\int^{v_0}_0 |G_1(t-v)|e^{-[(1-\eta)\int^{v}_0 a(t-s)ds+ \eta \int^{v}_0 a_1(t-s)ds]}}\\
\\&&\times\bigg(\int^{v}_0|a_2(t-s)|ds\bigg) dv dt .\end{aligned}$$
Since the function $\mu \in PAP_0([0,v_0] \times {\mathbb{R}}, {\mathbb{R}})$ and by virtue of Fubini's theorem, for $\epsilon>0$, $\exists T_1>0$ such that $$\begin{array}{lll}
{\displaystyle\dfrac{1}{2T} \int^T_{-T}|II_1(t)|dt }&{\displaystyle\leq \int^{v_0}_0 \|G_1\|_{\infty} \dfrac{\epsilon}{ 2\|G_1\|_{\infty} v_0}}=\dfrac{\epsilon}{2}, \qquad (\forall T\geq T_1).
\end{array}$$\
So, $II_1\in PAP_0({\mathbb{R}},{\mathbb{R}})$.\
Finally, we study the ergodicity of III(.). We have
$$\begin{array}{ll}
{\displaystyle \dfrac{1}{2T}\int^T_{-T} |III(t)|dt}& {\displaystyle \leq \dfrac{1}{2T} \int^T_{-T} \int ^t _{-\infty} e^{-(t-s) a^-} \mid G_2(s) \mid ds dt}\\
\\&\leq III_1(T) + III_2(T),
\end{array}$$ where $$III_1(T)={\displaystyle \dfrac{1}{2T} \int^T_{-T} \int ^t _{-T} e^{-(t-s) a^-} \mid G_2(s) \mid ds dt}$$ and $$III_2(T)={\displaystyle \dfrac{1}{2T} \int^{T}_{-T} \int ^{-T} _{-\infty} e^{-(t-s) a^-} \mid G_2(s) \mid ds dt}.$$
Next, we prove that $$\underset{T\rightarrow +\infty}{lim} III_1(T)=\underset{T\rightarrow +\infty}{lim}III_2(T)=0.$$ By the Fubini’s theorem, we obtain $$\begin{array}{ll}
III_1(T)&{\displaystyle = \int ^{+\infty}_0 e^{- a^- u} \dfrac{1}{2T} \int^T_{-T} \mid G_2(t-u) \mid dt du}\\
\\&{\displaystyle= \int ^{+\infty}_0 e^{- a^- u} \dfrac{1}{2T} \int^{T-u}_{-T-u} \mid G_2(t) \mid dt du}\\
\\&{\displaystyle \leq \int ^{+\infty}_0 e^{- a^- u} \dfrac{1}{2T} \int^{T+u}_{-(T+u)} \mid G_2(t) \mid dt du.}
\end{array}$$
Now, since $G_2 \in PAP_0(\mathbb{R},\mathbb{R})$, then the function $\Psi_T$ defined by $${\displaystyle \Psi_T(u)=\dfrac{T+u}{T} \dfrac{1}{2(T+u)} \int^{T+u}_{-(T+u)} \mid G_2(t) \mid dt}$$
is bounded and satisfies $\underset{T\longrightarrow +\infty}{lim} \Psi_T(u)=0$. From the Lebesgue dominated convergence theorem, we obtain $$\underset{T\rightarrow +\infty}{lim}III_1(T)=0.$$
On the other hand, notice that $\|G_2\|_\infty<+\infty$ and, by setting $\xi=t-s$, we obtain $$\begin{array}{ll}
III_2(T)&{\displaystyle \leq \dfrac{\|G_2\|_{\infty} }{2T} \int^T_{-T} \int ^{+\infty}_{t+T} e^{- a^- \xi} d\xi dt}\\
\\&{\displaystyle =\dfrac{\|G_2\|_{\infty} }{2T(a^-)^2}\big(1-e^{-2 a^- T}\big)}\qquad \underset{T\longrightarrow +\infty }{\longrightarrow 0},
\end{array}$$ which implies the required result.\
\
Step 3:\
Let $$\begin{aligned}
\gamma : [0,1]&\longrightarrow& \mathbb{R}\\
u&\longmapsto& {\displaystyle \underset{t\in \mathbb{R} }{sup}\bigg\{-a(t)+\bigg[\sum^N_{i=1}b_i(t) \bigg(\dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg)M^{m-1}+L\bigg]e^u\bigg\}.}\end{aligned}$$ It is clear that $\gamma$ is continuous function on \[0,1\].\
From $\textbf{(H4)}$, $\gamma(0)<0$; so, by continuity, $\exists \zeta \in ]0,1]$ such that $$\gamma(\zeta)<0.\qquad (C.8)$$
Next, we claim that $ \Gamma(\psi)(t) \in [k,M]$ for all $t\in {\mathbb{R}}.$\
For $\psi \in \mathcal{B}$, we have $$\begin{array}{lll}
\bullet \text{ }\Gamma(\psi)(t)&\leq{\displaystyle \int ^t_{-\infty} e^{-(t-s) a^-} \left[ \sum^{N}_{i=1} b_i^+ -H^-\right]ds} \qquad(\text{By \textbf{(C.1)}})\\
\\&\leq{\displaystyle \int ^t_{-\infty} e^{-(t-s)a^-} a^- M ds} \qquad ({\text{By \textbf{(H2)}} )}\\
\\&=M.\\
\\ \bullet \text{ }\Gamma(\psi)(t)&\geq {\displaystyle \int^t_{-\infty} e^{-(t-s)a^+}\left[ \sum^{N}_{i=1} \dfrac{b_i^- k^m}{1+k^n} -H^+\right] ds} \quad (\text{By \textbf{(C.3)} and \textbf{(C.5)}})\\
\\ &\geq {\displaystyle \int ^t_{-\infty} e^{-(t-s)a^+} a^+ k ds} \qquad (\text{By \textbf{(H3)}} )\\
\\&=k.
\end{array}$$ Thus $\Gamma$ is a self-mapping from $\mathcal{B}$ into $\mathcal{B}$.\
\
$\ast$ $\Gamma$ is a contraction. Indeed, let $\varphi ,\psi \in \mathcal{B} $; we have $$\begin{aligned}
\Vert \Gamma(\varphi)-\Gamma(\psi)\Vert _\infty &=&\underset{t\in {\mathbb{R}}}{sup}|\Gamma(\varphi)(t)-\Gamma(\psi)(t)|\\
\\ &\leq &{\displaystyle \underset{t\in \mathbb{R}}{sup} \int^t_{-\infty} e^{-\int^t_s a(u)du} \sum^{N}_{i=1} b_i(s) \bigg| \dfrac{\varphi^m(s-\tau_i(s))}{1 + \varphi^n(s-\tau_i(s))}-\dfrac{\psi^m(s-\tau_i(s))}{1 + \psi^n(s-\tau_i(s))}\bigg| }\\
\\&&+\bigg|H(s,\varphi(s-\sigma(s))-H(s,\psi(s-\sigma(s))\bigg|ds.\end{aligned}$$ By the mean value theorem, one can obtain $$\begin{array}{lll}
\bigg|\dfrac{x^m}{1+x^n}- \dfrac{y^m}{1+y^n}\bigg|&=|g'(\theta)| |x-y|, \qquad \text{where } \bigg[g : t\in {\mathbb{R}}^+ \longmapsto \dfrac{t^m}{1+t^n}\bigg],\\ \\
&=\bigg|\dfrac{\theta^{m-1+n}(m-n)+m\theta^{m-1}}{(1 + \theta^n)^2}\bigg| |x-y|\\
\\&\leq \bigg[\dfrac{\theta^{m-1}(n-m)}{4 } +\dfrac{m\theta^{m-1}}{(1+\theta^n)^2}\bigg]|x-y|,\\
\end{array}$$ where $x,y \in [k,M]$, $\theta$ lies between $x$ and $y$.\
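The mean value bound above (which uses $u/(1+u)^2 \le 1/4$ with $u=\theta^n$) can be spot-checked numerically; a small sketch with sample exponents (hypothetical choices, not taken from the paper):

```python
# Spot check of |g'(t)| <= t^(m-1)(n-m)/4 + m t^(m-1)/(1+t^n)^2, where
# g(t) = t^m/(1+t^n); the first term uses u/(1+u)^2 <= 1/4 with u = t^n.
def g_prime(t, n, m):
    return (m * t**(m - 1) + (m - n) * t**(m + n - 1)) / (1.0 + t**n)**2

def bound(t, n, m):
    return t**(m - 1) * (n - m) / 4.0 + m * t**(m - 1) / (1.0 + t**n)**2

for n, m in [(2, 2), (3, 2), (5, 3)]:          # sample exponents with m <= n
    for i in range(1, 500):
        t = 0.01 * i
        assert abs(g_prime(t, n, m)) <= bound(t, n, m) + 1e-12
```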
Consequently, the following estimate holds $$\begin{array}{lll}
{\displaystyle \Vert \Gamma(\varphi)-\Gamma(\psi)\Vert_\infty}& \leq {\displaystyle \underset{t\in \mathbb{R}}{sup} \int^t_{-\infty} e^{-\int^t_s a(u)du} \bigg(\sum^{N}_{i=1} b_i(s) \bigg[ \dfrac{(n-m)M^{m-1}}{4}+\dfrac{m M^{m-1} }{(1+k^n)^2}\bigg]}\\
\\& \bigg| \varphi(s-\tau_i(s))-\psi(s-\tau_i(s))\bigg| +L \parallel \varphi-\psi \parallel_\infty\bigg) ds\\
\\& {\displaystyle \leq \underset{t\in \mathbb{R}}{sup} \int^t_{-\infty} e^{-\int^t_s a(u)du} \bigg(\sum^{N}_{i=1} b_i(s) \bigg[\dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg]M^{m-1}+L\bigg)}\\
\\& {\displaystyle \times \parallel \varphi-\psi \parallel _\infty ds}\\
\\&\leq {\displaystyle \parallel \varphi-\psi \parallel _\infty \underset{t \in \mathbb{R}}{sup} \int ^t_{-\infty} e^{-\int^t_s a(u)du}a(s) e^{-\zeta}ds} \qquad (\text{By \textbf{(C.8)}})\\
\\& \leq {\displaystyle e^{-\zeta} \parallel \varphi-\psi \parallel_\infty},\end{array}$$
which proves that $\Gamma$ is a contraction on ${\displaystyle \mathcal{B}}$. By the Banach fixed point theorem, the operator $\Gamma$ has a unique fixed point ${\displaystyle x^* (\cdot)\in \mathcal{B}}$, which corresponds to the unique p.a.p. solution of equation (4).
The stability of the p.a.p. solution {#sec:4}
=================================
[@I] Let $f : {\mathbb{R}}\longrightarrow {\mathbb{R}}$ be a continuous function, then the upper right derivative of $f$ is defined as $$D^+f(t)= \overline{\underset{h\rightarrow0^+}{lim}}\dfrac{f(t + h) - f(t)}{h}.$$
We say that a solution $x^*$ of Eq. (4) is a global attractor or globally asymptotically stable (GAS) if for any positive solution $x_t(t_0,\phi)$ $$\underset{t\rightarrow +\infty}{lim}|x^*(t)-x_t(t_0,\phi)|=0.$$
Under assumptions $\textbf{(H1)-(H4)}$, the positive pseudo almost periodic solution $x^*(\cdot)$ of equation (4) is a global attractor.
Firstly, set $x_t(t_0,\phi)$ for $\phi \in \mathcal{C}^0$ by $x(t)$ for all $t \in {\mathbb{R}}$. Let $$y(.)=x(.)-x^*(.).$$Then, $$\begin{aligned}
y'(t) &=& -a(t)[x(t)-x^*(t)]+\sum^{N}_{i=1} b_i(t)\bigg[\dfrac{x^m(t-\tau_i(t))}{1 + x^n(t-\tau_i(t))}-\dfrac{x^*{^m}(t-\tau_i(t))}{1 + x^*{^n}(t-\tau_i(t))}\bigg] \\
\\ &&-\bigg [H(t,x(t-\sigma(t)))-H(t,x^*(t-\sigma(t)))\bigg].
\end{aligned}$$ Let us define a continuous function $\Delta :\mathbb{R^+}\longrightarrow \mathbb{R}$ by $$\Delta(u)= u-a^-+\bigg[L_{H}+\sum_{i=1}^{N} b^+_i\bigg(\dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg)M^{m-1}\bigg]\exp(ur).$$
From $\textbf{(H4)}$, we have $\Delta (0)<0$, then there exists $\lambda\in \mathbb{R}^+$, such that$$\Delta(\lambda)<0 \qquad (C.11).$$
We consider the Lyapunov functional $V(t)=|y(t)|e^{\lambda t}$. Calculating the upper right derivative of $V(t)$, we obtain $$\begin{aligned}
D^+V(t) &\leq & \bigg[-a(t)|y(t)|+\sum_{i=1}^{N} b_i(t)\bigg|\frac{x^m(t-\tau_i(t))}{1+x^n(t-\tau_i(t))} -\frac{(x^*)^m(t-\tau_i(t))}{1+(x^*)^n(t-\tau_i(t))}\bigg|+ \bigg|H(t,x^*(t-\sigma(t)))\\
\\&&-H(t,x(t-\sigma(t)))\bigg|\bigg]e^{\lambda t}+\lambda|y(t)|e^{\lambda t}\\
\\&\leq & e^{\lambda t}\bigg((\lambda-a^-)|y(t)|+L_H|y(t-\sigma(t))|+\sum_{i=1}^{N} b^+_i \bigg|\frac{x^m(t-\tau_i(t))}{1+x^n(t-\tau_i(t))}-\frac{(x^*)^m(t-\tau_i(t))}{1+(x^*)^n(t-\tau_i(t))}\bigg|\bigg).
\end{aligned}$$ We claim that $$\begin{aligned}
V(t) &=& |y(t)|e^{\lambda t}< e^{\lambda t_0}(M+\max_{t<t_0}|x(t)-x^*(t)|)=M_1, \forall t\geq t_0.
\end{aligned}$$ Suppose that there exists $t_1>t_0$ such that $$\begin{aligned}
V(t_1) &=& M_1, V(t)<M_1, \forall \text{ } t_0-r\leq t< t_1.
\end{aligned}$$ Besides, $$\begin{aligned}
0\leq D^+V(t_1)&\leq& (\lambda-a^-)|y(t_1)|e^{\lambda t_1}+L_H|y(t_1-\sigma(t_1))|e^{\lambda (t_1-\sigma(t_1))}e^{\sigma(t_1) \lambda}\\
\\&&+\sum_{i=1}^{N} {b_i}^+ e^{\lambda t_1}\bigg|\frac{x^m(t_1-\tau_i(t_1))}{1+x^n(t_1-\tau_i(t_1))}-\frac{(x^*)^m(t_1-\tau_i(t_1))}{1+(x^*)^n(t_1-\tau_i(t_1))}\bigg|.
\end{aligned}$$
On the other hand, for all $x,x^* \in [k,M]$ we have
$$\bigg|\dfrac{x^m}{1+x^n}-\dfrac{(x^*)^m}{1+(x^*)^n}\bigg| \leq \bigg[\dfrac{\zeta^{m-1}(n-m)}{4 } +\dfrac{m\zeta^{m-1}}{(1+\zeta^n)^2}\bigg] |x-x^*|,$$
where $\zeta$ lies between $x$ and $x^*$, so that $\zeta \in [k,M]$.\
So, we obtain $$\begin{aligned}
\bigg |\dfrac{x^m(t-\tau_i(t))}{1+x^n(t-\tau_i(t))}-\frac{(x^*)^m(t-\tau_i(t))}{1+(x^*)^n(t-\tau_i(t))}\bigg | \leq \bigg[\dfrac{M^{m-1}(n-m)}{4 } +\dfrac{m M^{m-1}}{(1+k^n)^2}\bigg] |y(t-\tau_i(t))|.
\end{aligned}$$
Then, $$\begin{aligned}
0&\leq& D^+V(t_1)\\
\\&\leq& (\lambda-a^-)|y(t_1)|e^{\lambda t_1}+L_H|y(t_1-\sigma(t_1))|e^{\lambda (t_1-\sigma(t_1))}e^{\sigma(t_1)\lambda}\\
\\ &&+\sum_{i=1}^{N} b_i^+ e^{\lambda (t_1-\tau_i(t_1)) }e^{\tau_i(t_1) \lambda}|y(t_1-\tau_i(t_1))|\bigg[\dfrac{M^{m-1}(n-m)}{4 } +\dfrac{m M^{m-1}}{(1+k^n)^2}\bigg]\\
\\&\leq& \bigg(\lambda-a^-+L_H e^{r\lambda}+\sum_{i=1}^{N} b^+_i e^{r \lambda}\bigg[ \dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg]M^{m-1} \bigg)M_1.
\end{aligned}$$
However, by **(C.11)** $$\begin{aligned}
\lambda-a^-+L_H e^{r\lambda}+\sum_{i=1}^{N} b^+_i e^{r \lambda}\bigg [ \dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg]M^{m-1} <0,
\end{aligned}$$ which contradicts $0\leq D^+V(t_1)$. Consequently, $V(t)< M_1$ for all $t\geq t_0$; that is, $|y(t)|< M_1 e^{-\lambda t}$ for all $t> t_0$.
An example {#sec:5}
==========
In this section, we present an example to check the validity of the theoretical results obtained in the previous sections.\
\
First, we construct a function $\omega(t)$. For $n = 1, 2,\ldots$ and $0 \leq i < n$, set $$a_n=\dfrac{n^3-n}{3}$$ and consider the intervals $$I_n^i = [a_n + i, a_n + i + 1].$$ Choose the non-negative, continuous function $g$ on $[0,1]$ defined by $$g(s) =\dfrac{8}{\pi}\sqrt{s-s^2}.$$ Define the function $\omega$ on ${\mathbb{R}}$ by $$\omega(t)=
\left \{
\begin{array}{lll}
g[t-(a_n+i)], &\qquad t \in I_n^i,\\
0, &\qquad t\in {\mathbb{R}}^+ \setminus \cup \{I_n^i: n=1,2,\ldots,\ 0\leq i < n \}, \\
\omega(-t), &\qquad t<0.
\end{array}
\right.$$\
From the construction above, we know that $\omega \in PAP_0( {\mathbb{R}}, {\mathbb{R}})$, i.e., $\omega$ is ergodic. However, $\omega \notin C_0( {\mathbb{R}}, {\mathbb{R}})$.

Let us consider the case $n=m=2$ , $N=1$,\
\
$a(t)=0.38+ \dfrac{|\sin(t)+\sin (\pi t)|}{400} + \dfrac{ \pi\omega(t)}{800}$, $b_1(t) =1+\dfrac { \sin^2( t) + \sin^2(\sqrt{2}t)}{10}+ \dfrac{1}{100(1+t^2)},$\
\
$ \tau_1(t) = \cos^2(t) + \cos^2 (\sqrt{2}t) + 1 +e^{-t^2}, \quad H(t,x)=0.01|\sin(t)+\cos(\sqrt{3}t)| \dfrac{|x|}{1+x^2},\quad \text{and } \sigma(t)=|\sin(t)-\sin(\pi t)|.$\
\
Clearly, $a^+=0.39$, $a^-=0.38$, $b^+=1.21$, $b^-=1$, $H^+=5\times 10^{-3}$, $H^-=0$, $r=4$.\
\
It is not difficult to see that $H \in PAP_U({\mathbb{R}}\times {\mathbb{R}},{\mathbb{R}}^+)$ and satisfies the Lipschitz condition with $L=10^{-2}$.\
\
Let $k=2$ and $M=3.29$. We easily obtain:
1. $0<k=2<M=3.29$;
2. ${\displaystyle -a^- M+\sum^N_{i=1}b_i^+ - H^-}= -0.04 < 0$;
3. ${\displaystyle -a^+ k+\sum^N_{i=1}b_i^- \dfrac{k^m}{1+k^n} -H^+}= 0.015> 0$;
4. ${\displaystyle \underset{t\in \mathbb{R} }{sup}\bigg\{-a(t)+\sum^N_{i=1}b_i(t)\bigg[\dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg]M^{m-1}+L\bigg\}}$= -0.0515<0.
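These four numerical values can be reproduced directly from the extrema quoted above; a minimal check:

```python
# Verify conditions 1-4 of the example with n = m = 2, N = 1, k = 2, M = 3.29,
# using the extrema a-, a+, b-, b+, H-, H+ and L quoted in the text.
a_m, a_p = 0.38, 0.39
b_m, b_p = 1.0, 1.21
H_m, H_p = 0.0, 5e-3
L, k, M, n, m = 1e-2, 2.0, 3.29, 2, 2

c2 = -a_m * M + b_p - H_m                              # condition 2
c3 = -a_p * k + b_m * k**m / (1 + k**n) - H_p          # condition 3
c4 = -a_m + b_p * ((n - m) / 4 + m / (1 + k**n)**2) * M**(m - 1) + L

assert 0 < k < M
assert c2 < 0 and abs(c2 - (-0.0402)) < 1e-6           # text rounds to -0.04
assert c3 > 0 and abs(c3 - 0.015) < 1e-6
assert c4 < 0 and abs(c4 - (-0.0515)) < 1e-3           # exact value -0.051528
```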
Then, all the conditions of Theorems 1 and 2 are satisfied. Therefore, there exists a unique pseudo almost periodic solution $x^*$ in $\mathcal{B}=\{\phi \in PAP({\mathbb{R}},{\mathbb{R}}); k\leq \phi (t) \leq M,\text{ } \forall t\in {\mathbb{R}}\}$, which is a global attractor.\

Notice that, in view of the above example, the condition of Proposition 4.2 is necessary. Moreover, the results are confirmed by the numerical simulations in Fig. 2.
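In the stability proof, the decay exponent $\lambda$ only needs to satisfy $\Delta(\lambda)<0$; for this example it can be located numerically. A sketch in plain Python, using the extrema quoted in this section (with $L_H=10^{-2}$, $r=4$; since $n=m$, the $(n-m)/4$ term vanishes):

```python
# Find lambda > 0 with Delta(lambda) < 0, where (for this example, r = 4)
# Delta(u) = u - a^- + (L_H + b^+ * (m/(1+k^n)^2) * M^(m-1)) * exp(u*r).
import math

a_m, b_p, L_H, k, M, n, m, r = 0.38, 1.21, 1e-2, 2.0, 3.29, 2, 2, 4.0
coef = L_H + b_p * (m / (1 + k**n)**2) * M**(m - 1)

def delta(u):
    return u - a_m + coef * math.exp(u * r)

assert delta(0.0) < 0            # this is exactly condition 4 of the example
# Bisection on [0, 0.1]: keep the endpoint where Delta is still negative.
lo, hi = 0.0, 0.1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if delta(mid) < 0 else (lo, mid)
assert delta(lo) < 0 and 0 < lo < 0.1
```

The admissible exponent comes out near $\lambda \approx 0.02$, so the convergence $|y(t)| < M_1 e^{-\lambda t}$ is slow but exponential.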
Conclusion {#sec:6}
==========
In this paper, new conditions were given ensuring the existence and uniqueness of a positive pseudo almost periodic solution of the hematopoiesis model with mixed delays and a non-linear harvesting term (which is more realistic).\
Also, the global attractivity of the unique pseudo almost periodic solution of the considered model is demonstrated by means of a new and suitable Lyapunov function.\
\
To the best of our knowledge, this is the first paper considering such solutions for this generalized model.\
Notice that our approach can also be applied to the almost automorphic and pseudo almost automorphic solutions of the considered model.
[100]{} Liu B., New results on the positive almost periodic solutions for a model of hematopoiesis, Nonlinear Anal. Real World Appl. 17 (2014), 252-264. Zhang C., Almost Periodic Type Functions and Ergodicity. Kluwer Academic/Science Press: Beijing, 2003. Zhang C., Pseudo almost periodic solutions of some differential equations II, J. Math. Anal. Appl., 192 (1995), pp. 543-561. Braverman E., Saker S.H., Permanence, oscillation and attractivity of the discrete hematopoiesis model with variable coefficients, Nonlinear Anal. Theory Methods Appl. 67 (2007) 2955–2965. Chérif F., Analysis of Global Asymptotic Stability and Pseudo Almost Periodic Solution of a Class of Chaotic Neural Networks, Mathematical Modelling and Analysis, 18:4, 489-504. Chérif F., Pseudo almost periodic solution of Nicholson’s blowflies model with mixed delays, Applied Mathematical Modelling 39 (2015) 5152–5163. Long F. and Yang M.Q., “Positive periodic solutions of delayed Nicholson’s blowflies model with a linear harvesting term,” Electronic Journal of Qualitative Theory of Differential Equations, vol. 41, pp. 1–11, 2011. Liu G., Yan J., Zhang F., Existence and global attractivity of unique positive periodic solution for a model of hematopoiesis. J. Math. Anal. Appl. 334, 157-171 (2007). Zhou H., Wang W., Zhou Z.F., Positive almost periodic solution for a model of hematopoiesis with infinite time delays and a nonlinear harvesting term, Abstr. Appl. Anal. 2013 (2013) 146729. Zhou H., Wang J., and Zhou Z., “Positive almost periodic solution for impulsive Nicholson’s blowflies model with multiple linear harvesting terms,” Mathematical Methods in the Applied Sciences, vol. 36, no. 4, pp. 456–461, 2013. Ding H.S., Liu Q.L., Nieto J.J., Existence of positive almost periodic solutions to a class of hematopoiesis model. Appl. Math. Model. 40, 3289-3297 (2016). Ding H.S., N’Guérékata H.M., Nieto J.J., Weighted pseudo almost periodic solutions for a class of discrete hematopoiesis model, Rev. Mat. Complut. 
26 (2013)427–443. Blot J. B., Blot J., N’Guérékata G. M., Pennequin D. On C(n)- Almost Periodic Solutions to Some Nonautonomous Differential Equations in Banach Spaces, annales societatis mathematycae polonae, Series I: commentationes mathematicae roczniki polskiego towarzystwa mathematycznego, Seria I: prace mathematyczne XLVI(2)(2006), 263-273. Meng J., Global exponential stability of positive pseudo-almost-periodic solutions for a model of hematopoiesis, Abstract and Applied Analysis, 2013, Art. ID 463076. Hale JK, Verduyn Lunel SM. Introduction to Functional Differential Equations. Springer-Verlag: New York, 1993. Alzabut J.O., Nieto J.J., Stamov G.Tr., Existence and exponential stability of positive almost periodic solutions for a model of hematopoiesis, Bound. Value Probl. 1193 (2009) 429–436. Mackey M.C, Glass L., Oscillation and chaos in physiological control system, Science 197 (1977) 287-289. Amdouni M., Chérif F., The pseudo almost periodic solutions of the new class of Lotka-Volterra recurrent neural networks with mixed delays,Chaos, Solitons and Fractals 113 (2018) 79–88. Cieutat P., Fatajou S. et N’Guérékata G.M., Composition of pseudo almost periodic and pseudo almost automorphic functions and applications to evolution equations. Annales mathématiques Blaise Pascal, tome 6(1999), p. 1-11. Rihani S., Amor K., Chérif F., Pseudo almost periodic solutions for a Lasota-Wazewska model, Electronic Journal of Differential Equations, Vol. 2016 (2016), No. 62, pp. 1-17. Saker S.H., Oscillation and global attractivity in hematopoiesis model with periodic coefficients, Appl. Math. Comput. 142 (2003) 477–494. Diagana T., Pseudo Almost Periodic Functions in Banach Spaces, Nova Science, New York, 2007. Diagana T., Zhou H., Existence of positive almost periodic solutions to the hematopoiesis model. Applied Mathematics and Computation 274 (2016) 644–648. 
Chen X., Hui-Sheng, Ding, Positive Pseudo Almost Periodic Solutions for a Hemathopoies Model, Journal of Nonlinear Evolution Equations and Applications, Volume 2016, Number 2, pp. 25–36 (September 2016) Wang X., Zhang H., A new approach to the existence, nonexistence and uniqueness of positive almost periodic solution for a model of hematopoiesis, Nonlinear Anal. Real World Appl. 11 (1) (2010) 60–66. Wu X., Li J., Zhou H., A necessary and sufficient condition for the existence of positive periodic solutions of a model of hematopoiesis, Comput. Math. Appl. 54 (6) (2007) 840–849.
---
abstract: 'In the present work, we adopt a relativistic constituent quark model to describe the charmed-strange meson spectroscopy, in which $D_{s0}(2317)$ and $D_{s1}(2460)$ are considered as the $1^3P_0$ and $1P_1^\prime$ charmed-strange mesons, respectively. Using the wave functions obtained from the relativistic quark model, we further investigate the electric transitions between charmed-strange mesons. By comparing the results of different approximations, we find that the long-wavelength approximation is reasonable for charmed-strange meson radiative decays. The estimated partial widths all lie safely below the experimental upper limits. Moreover, we find that the branching ratios of $D_{s1}(2536) \to D_s^\ast \gamma/D_s \gamma$ are large enough to be detected, and they could be searched for in future experiments at Belle II and LHCb.'
author:
- 'Shao-Feng Chen'
- Jing Liu
- 'Hai-Qing Zhou'
- 'Dian-Yong Chen [^1]'
title: 'Electric transitions of the charmed-strange mesons in a relativistic quark model'
---
Introduction
============
Charmed-strange mesons are important members of the meson family. The ground $S$-wave states, $D_s$ and $D_s^\ast$, were first observed more than 40 years ago in the $e^+e^-$ annihilation process by the DASP Collaboration [@Brandelik:1977fg][^2]. Later, in $\bar \nu N$ collisions, a new state, $D_{s1}(2536)$, was observed in the $D_s^\ast \gamma$ invariant mass spectrum [@Asratian:1987rb], which could be a ground $P$-wave state. A second $P$-wave state, $D_{s2}^\ast(2573)$, was observed in the $D K$ and $D^\ast K$ modes in $B$ meson decay processes by the CLEO Collaboration [@Kubota:1994gn].
Nearly ten years after the observation of $D_{s2}^\ast(2573)$, the remaining two ground $P$-wave state candidates, $D_{s0}^\ast (2317)$ and $D_{s1}(2460)$, were discovered [@Aubert:2003fg; @Krokovny:2003zq]. The former was first observed in the $D_s \pi^0$ invariant mass spectrum by the BaBar Collaboration [@Aubert:2003fg], and the latter was reported in the $D_s^\ast \pi^0$ invariant mass spectrum by the Belle Collaboration [@Krokovny:2003zq]. These two states are particularly interesting since their observed masses are much lower than the quark model expectations [@Godfrey:1985xj] and several tens of MeV below the $DK$ and $D^\ast K$ thresholds, respectively. Thus, these two states have been considered as $DK$ and $D^\ast K$ molecular states due to these particular properties [@Xie:2010zza; @Zhang:2006ix; @Bicudo:2004dx; @Faessler:2007gv; @Faessler:2007us; @Cleven:2014oka; @Datta:2003re; @Xiao:2016hoa]. However, considering the coupled-channel effects and the fact that there are no additional states around the quark model predicted masses, the authors of Refs. [@Bracco:2005kt; @Lutz:2008zz; @Hwang:2004cd; @Liu:2006jx; @Lu:2006ry; @Liu:2013maa; @Wang:2006mf; @Fajfer:2015zma; @Song:2015nia] assigned these two states as $P$-wave charmed-strange mesons. In this way, the ground $P$-wave charmed-strange mesons are established.
In 2006, the BaBar Collaboration reported two new charmed-strange mesons in the $D K$ invariant mass spectrum [@Aubert:2006mh]: the narrower one is $D_{sJ}(2860)$ and the broader one is $D_{s1}^\ast(2700)$. Theoretical estimations indicate that $D_{s1}^\ast (2700)$ could be a good candidate for the $2^3S_1$ state [@Wang:2009as; @Colangelo:2007ds; @Zhang:2006yj; @Song:2015nia]. In 2014, the LHCb Collaboration analyzed the $\bar{D} K$ invariant mass spectrum of the $B_s^0 \to \bar{D} K^- \pi^+ $ process and found that the structure around 2860 MeV announced by the BaBar Collaboration consists of two particles, with spin 1 and spin 3 [@Aaij:2014xza], which were named $D_{s1}^\ast (2860)$ and $D_{s3}^\ast(2860)$. As indicated in Refs. [@Song:2014mha; @Wang:2014jua; @Godfrey:2014fga; @Ke:2014ega], these two states could be good candidates for the $D$-wave charmed-strange mesons $1^3D_1$ and $1^3D_3$, respectively. To date, the heaviest observed charmed-strange meson is $D_{sJ}(3040)$, which was discovered in the $D^\ast K$ invariant mass spectrum by the BaBar Collaboration [@Aubert:2009ah] and can be assigned as the $2P_1$ state, as indicated in Refs. [@Song:2015nia; @Godfrey:2015dva].
In Fig. \[Fig:Discovery\], we present the discovery history of the charmed-strange mesons, from which one finds that most of the excited charmed-strange mesons were observed during the years 2003-2014. Moreover, the discovery history shows that most of the charmed-strange mesons were first observed in bottom or bottom-strange meson decays. With the running of Belle II and LHCb, more excited charmed-strange mesons are expected to be discovered in bottom or bottom-strange meson decays, which will enrich the charmed-strange family.
Besides the observed resonance parameters, i.e., the mass and width, the decay behaviors of the observed states are also crucial for understanding their inner structures. In particular, the electromagnetic transitions can be well described by quantum electrodynamics at the quark level, unlike the non-perturbative strong interactions at the hadronic energy scale. Thus, the electromagnetic transitions could reflect the inner structure in a more comprehensive manner. On the experimental side, there are some measurements of the radiative transitions between charmed-strange mesons; the corresponding experimental information is collected in Table \[Tab:Exp\]. Thus, the investigation of the radiative decays of charmed-strange mesons is interesting and necessary.
| Initial | Final | Experiments [@Tanabashi:2018oca] |
|------------------|-------------------|---------------------------|
| $D_s^\ast$ | $D_s$ | $(93.5\pm 0.7)\%$ |
| $D_{s0}(2317)$ | $D_s$ | $<5\%$ |
| | $D_s^\ast$ | $<6\%$ |
| $D_{s1}(2460)$ | $D_s$ | $(18 \pm 4)\%$ |
| | $D_s^\ast$ | $<8\%$ |
| | $D_{s0}(2317)$ | $(3.7^{+5.0}_{-2.4})\%$ |
| $D_{s1}(2536)$ | $D_s^\ast$ | Possibly seen |
In the present work, the charmed-strange meson spectroscopy is described by a relativistic quark model [@Liu:2013maa], in which the masses of the charmed-strange mesons are well reproduced. With the wave functions estimated in the quark model, we evaluate the electric transitions between the charmed-strange mesons, which could not only further test the relativistic quark model by comparing the theoretical estimations with the experimental measurements, but also provide some useful predictions.
This work is organized as follows. In Section \[Sec:Spec\], we present a brief review of the relativistic quark model, with which the mass spectroscopy of the charmed-strange mesons is estimated. In Section \[Sec:Rad\], we present the formulae for the electric transitions between charmed-strange mesons, and the numerical results and discussions are presented in Section \[Sec:Num\]. A short summary is given in Section \[Sec:Sum\].
Review of the mass spectrum of charmed-strange mesons {#Sec:Spec}
=====================================================
Relativistic quark models are widely adopted to describe the mass spectra of hadrons, since QCD is non-perturbative at hadronic energy scales. Such a quark model was proposed by Godfrey and Isgur in 1985 to investigate meson spectra systematically [@Godfrey:1985xj]. In this model, the masses and wave functions of the mesons are determined by solving the relativistic Schrödinger equation, whose spin-independent Hamiltonian is $$\begin{aligned}
H_0 =\sqrt{p^2+m_1^2}+\sqrt{p^2+m_2^2}+V(r)\end{aligned}$$ where $r$ and $p$ are the relative coordinate and momentum of the quarks in the center-of-mass frame, respectively, and $m_1$ and $m_2$ are the masses of the quark and the antiquark. $V(r)$ is the effective spin-independent potential between the quark and the antiquark, which includes a Coulomb term and a linear confining term [@Godfrey:1985xj], $$\begin{aligned}
V(r)=-\frac{4 \alpha_{s}\left( r \right)}{3 r}+b r+c.\end{aligned}$$ The spin-dependent part $H'$ includes the spin-spin and spin-orbit interactions, $$\begin{aligned}
H'=H_{SS}+H_{SL},\end{aligned}$$ where the concrete forms of the spin-spin and spin-orbit interactions are $$\begin{aligned}
H_{SS}&=&f(r)\vec{s_{1}}\cdot\vec{s_{2}}+g(r)\left(\frac{3\vec{s_{1}}\cdot\vec{r}\vec{s_{2}}\cdot\vec{r}}{r^{2}}-\vec{s_{1}}\cdot\vec{s_{2}}\right) \nonumber\\
H_{SL}&=&h_{1}(r)\vec{s_{1}}\cdot\vec{L}+h_{2}(r)\vec{s_{2}}\cdot\vec{L} \end{aligned}$$ where the functions $f(r)$, $g(r)$, $h_{1}(r)$ and $h_{2}(r)$ can be found in Ref. [@Liu:2013maa]. With these spin-dependent terms, the $S$-$D$ mixing and the mixing between spin-singlet and spin-triplet states are included. Since this model reproduces the mass spectrum of the charmed-strange mesons well, we adopt the same model parameters as Ref. [@Liu:2013maa] in the present work to investigate the electric radiative decays of the charmed-strange mesons. Before estimating the radiative decays, we present the mass spectrum of the charmed-strange mesons in Table \[Tab:Mass\], where the theoretical estimations of Refs. [@Godfrey:2015dva; @Devlani:2011zz] and the experimental data [@Tanabashi:2018oca] are also listed for comparison. The estimated masses of the $1^3P_0$ and $1P_1$ states are 2317 and 2425 MeV, respectively, which are more consistent with the experimental measurements than those of other works [@Godfrey:2015dva; @Devlani:2011zz].
[c|p[1.0cm]{}< p[1.25cm]{}< p[1.25cm]{}< p[1.25cm]{}< p[2cm]{}<]{} & Present & Ref [@Godfrey:2015dva] & Ref [@Devlani:2011zz] & PDG [@Tanabashi:2018oca]\
& $1^1S_0$ & $1964$ & $1979$ & $1970$ & $1968.34\pm 0.07$\
& $1^3S_1$ & $2102$ & $2129$ & $2117$ & $2112.2\pm 0.4$\
& $2^1S_0$ & $2557$ & $ 2673$ & $2684$ &\
& $2^3S_1$ & $2680$ & $2732$ & $2723$ & $2708.3^{+4.0}_{-3.4}$\
& $3^1S_0$ & $2999$ &$ 3154 $ & $3158$ &\
& $3^3S_1$ & $3105$ & $3193$ & $3180$ &\
& $1^3P_0$ & $2317$ & $2484$ & $2444$ & $2317.8\pm 0.5$\
& $1 P_1$ & $2425$ & $2549$ & $2530$ & $2459.5\pm 0.6$\
& $1 P_1^\prime$ & $2510$ & $2556$ & $2540$ & $2535.11\pm 0.06$\
& $1^3P_2$ & $2548$ & $2592$ & $2566$ & $2569.1\pm 0.8$\
& $2^3P_0$ & $2700$ &$3005$ & $2947$\
& $2 P_1^\prime$ & $2876$ & $3018$ & $3019$\
& $2 P_1$ & $2965$ & $3038$ & $3023$ & $3044\pm 8^{+30}_{-5}$\
& $2^3P_2$ & $3019$ & $3048$ & $3048$\
& $1^3D_1$ & $2771$ & $2899$ & $2873$ & $2859\pm 27$\
& $1 D_2$ & $2800$ & $2900$ & $2816$\
& $1 D_2^\prime$ & $2826$ & $2926$ & $2896$\
& $1^3D_3$ & $2816$ & $2917$ & $2834$ & $2860.1\pm 7$\
& $2^3D_1$ & $3138$ & $3306$ & $3292$\
& $2 D_2$ & $3191$ & $3323$ & $3312$\
& $2 D_2^\prime$ & $3186$ & $3298$ & $3248$\
& $2^3D_3$ & $3214$ & $3311$ & $3263$\
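The scale of the levels in Table \[Tab:Mass\] can be illustrated with a minimal variational calculation for the spin-independent Hamiltonian $H_0$ above: a Gaussian trial wave function in the Coulomb-plus-linear potential. The quark masses and potential parameters below are assumed, roughly Godfrey-Isgur-like illustrative values, not the fitted set of Ref. [@Liu:2013maa]:

```python
import numpy as np

# Variational sketch of the spin-averaged 1S charm-strange mass from
#   H0 = sqrt(p^2 + m_c^2) + sqrt(p^2 + m_s^2) + V(r),
#   V(r) = -(4/3) alpha_s / r + b r + c,
# using a Gaussian trial state psi(r) ~ exp(-beta^2 r^2 / 2).
# All parameter values are assumed illustrative numbers (GeV units).
m_c, m_s = 1.628, 0.419             # quark masses
alpha_s, b, c = 0.50, 0.18, -0.253  # fixed coupling, string tension, constant

def energy(beta):
    """Variational energy <H0> at Gaussian width parameter beta."""
    p = np.linspace(1e-4, 20.0, 4000)       # uniform momentum grid
    w = p**2 * np.exp(-p**2 / beta**2)      # radial momentum density of the trial state
    w /= w.sum()
    kinetic = np.sum(w * (np.sqrt(p**2 + m_c**2) + np.sqrt(p**2 + m_s**2)))
    # Gaussian expectation values: <1/r> = 2 beta / sqrt(pi), <r> = 2 / (sqrt(pi) beta)
    potential = (-(4.0 / 3.0) * alpha_s * 2.0 * beta / np.sqrt(np.pi)
                 + b * 2.0 / (np.sqrt(np.pi) * beta) + c)
    return kinetic + potential

E_min = min(energy(beta) for beta in np.linspace(0.2, 1.5, 200))
print(f"variational 1S estimate: {E_min:.3f} GeV")
```

A grid scan over $\beta$ gives a crude upper bound in the 2 GeV neighborhood of the spin-averaged $1S$ level; the full model additionally smears the potential and runs $\alpha_s$, which is what brings the fitted levels to the values in Table \[Tab:Mass\].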
Electric transitions of the charmed strange mesons {#Sec:Rad}
==================================================
At the quark level, the quark-photon electromagnetic interaction can be written as $$\begin{aligned}
H_{e}=-\sum_{j}e_{j}\bar{\psi}_{j}\gamma_{\mu}\mathcal{A}^{\mu}(\textbf{k},\textbf{r})\psi_{j}
\label{Eq:He}\end{aligned}$$ where $\psi_{j}$ and $e_{j}$ represent the field and the charge of the $j$-th quark in the charmed-strange meson, respectively, and $\mathbf{k}$ is the three-momentum of the emitted photon. After some algebra, as shown in Appendix \[Sec:App\], the amplitude of the electromagnetic transition can be expressed as $$\begin{aligned}
\left\langle f\left| H_e \right|i \right\rangle &=& \left\langle f\left| \alpha \cdot \epsilon e^{i \mathbf{k}\cdot \mathbf{r}_j} \right| i\right\rangle\nonumber\\
&=& -i\omega \left\langle f\left| r_j \cdot \epsilon e^{i \mathbf{k}\cdot \mathbf{r}_j} (1-\alpha \cdot \hat{k})\right|i \right\rangle
\label{Eq:EM}\end{aligned}$$ where $|i\rangle$ and $|f\rangle$ are the wave functions of the initial and final states, respectively, and $\omega$ is the energy of the emitted photon.
In the present work, we mainly focus on the electric transition processes, and the helicity amplitude is $$\begin{aligned}
\mathcal{A}^{E}_{\lambda}=-i\sqrt{\frac{\omega}{2}}\langle f|\sum_{j}e_{j}\textbf{r}_{j}\cdot\boldsymbol{\epsilon}e^{-i \mathbf{k} \cdot \mathbf{r}_{j}}| i\rangle\end{aligned}$$ where the initial and final hadron wave functions are estimated with the relativistic quark model. In the estimation, we choose the photon momentum along the $z$ axis, i.e., $\textbf{k}=k\hat{\textbf{z}}$, and take the right-handed photon polarization vector $\boldsymbol{\epsilon}=-(1,i,0)/\sqrt{2}$. In this case, $e^{-i \mathbf{k} \cdot \mathbf{r}_{j}}$ can be expanded as $$\begin{aligned}
e^{-i \mathbf{k} \cdot \mathbf{r}_{j}}=\sum _{l}\sqrt{4\pi (2l+1)}(-i)^{l}j_{l}(kr_j)Y_{l0}(\Omega),\end{aligned}$$ and the helicity amplitude for angular momentum $l$ can be written as [@Deng:2016stx] $$\begin{aligned}
\mathcal{A}^{E}_{l,\lambda}=\sqrt{\frac{\omega}{2}}\langle f|\sum_{j}(-i)^{l}\sqrt{\frac{2\pi l(l+1)}{2l+1}}e_{j}j_{l+1}(kr_j)r_{j}Y_{l1}| i\rangle\\ \nonumber
+ \sqrt{\frac{\omega}{2}}\langle f|\sum_{j}(-i)^{l}\sqrt{\frac{2\pi l(l+1)}{2l+1}}e_{j}j_{l-1}(kr_j)r_{j}Y_{l1}| i\rangle,\end{aligned}$$ and then the decay width of the electric transition between $Q\bar{q}$ states can be estimated as $$\begin{aligned}
\Gamma(A\rightarrow B \gamma)&=&\sum_{k=0,2} \frac{4\alpha}{3}\omega^3 C_{fi}\delta_{SS'}\delta_{LL'\pm1}\nonumber\\
&&\Big|\Big\langle n'^{2S'+1}L'_{J'}\Big|\frac{e_q m_Q r}{m_q+m_Q}j_k \left(\frac{m_2 \omega r}{m_q+m_Q}\right) \nonumber\\
&&-\frac{e_{\bar{Q}}m_{q}}{m_{q}+m_{Q}}j_k \left(\frac{m_1\omega r}{m_{q}+m_{Q}}\right)\Big|n^{2S+1}L_{J}\Big\rangle\Big|^{2}
\label{Eq:ModeI}\end{aligned}$$ where $|n'^{2S'+1}L'_{J'} \rangle$ and $|n^{2S+1}L_{J} \rangle$ represent the final and initial states, respectively, and $C_{fi}$ is a coefficient related to the involved states, $$\begin{aligned}
C_{fi}= \mathrm{max}(L_A,L_B)(2J_B+1)\left\{
\begin{array}{ccc}
L_B & J_B & S\\
J_A & L_A & 1
\end{array}
\right\}\end{aligned}$$ Keeping only the lowest order of the electric transition, the terms related to $j_2(kr)$ can be ignored, and the electric transition width becomes $$\begin{aligned}
\Gamma(A\rightarrow B \gamma)&=& \frac{4\alpha}{3}\omega^3 C_{fi}\delta_{SS'}\delta_{LL'\pm1}\nonumber\\
&&\Big|\Big\langle n'^{2S'+1}L'_{J'}\Big|\frac{e_q m_Q r}{m_q+m_Q}j_0 \left(\frac{m_2 \omega r}{m_q+m_Q}\right) \nonumber\\
&&-\frac{e_{\bar{Q}}m_{q}}{m_{q}+m_{Q}}j_0 \left(\frac{m_1\omega r}{m_{q}+m_{Q}}\right)\Big|n^{2S+1}L_{J}\Big\rangle\Big|^{2}
\label{Eq:ModeII}\end{aligned}$$ In the literature, the zeroth-order spherical Bessel function is usually expanded as $j_0(x)=1+\mathcal{O}(x^2)$; keeping only the lowest order, one obtains the partial width $$\begin{aligned}
\Gamma(A\rightarrow B \gamma)=\frac{4\alpha}{3}e_{M}^{2}\omega^{3}C_{fi}\delta_{SS'}\delta_{LL'\pm1}|\langle n'^{2S'+1}L'_{J'}|r|n^{2S+1}L_{J}\rangle|^{2}\nonumber\\
\label{Eq:ModeIII}\end{aligned}$$ where $e_M=(e_{\bar{Q}} m_q-e_q m_{Q})/(m_q+m_Q)$. The above formula corresponds to the long-wavelength approximation, $e^{i \textbf{k}\cdot \textbf{r}} \sim 1$.
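The quality of the long-wavelength approximation can be gauged from the size of the Bessel-function argument. A quick numerical sketch, using assumed illustrative values for the photon energy, wave-function extent, and quark masses (not the fitted parameters of this work):

```python
import math

# Check of the long-wavelength approximation j0(x) ~ 1 used in Mode III.
# omega, r, and the quark masses below are assumed illustrative values,
# not the fitted parameters of the text.
def j0(x):
    """Zeroth-order spherical Bessel function sin(x)/x."""
    return math.sin(x) / x if x != 0.0 else 1.0

omega, r = 0.30, 2.5        # photon energy (GeV), typical extent (GeV^-1)
m_s, m_c = 0.45, 1.6        # light and heavy quark masses (GeV)
x = m_c * omega * r / (m_s + m_c)   # Bessel argument m_Q * omega * r / (m_q + m_Q)
print(f"x = {x:.3f}, j0(x) = {j0(x):.3f}")
```

Even for these relatively hard photons, $j_0$ deviates from unity by only a few percent at the amplitude level, indicating why the replacement $j_0 \to 1$ changes the widths only mildly for the transitions considered here.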
---------------------------- ----------------------------- --------- --------- ---------- ------------------------ ----------------------- ---------------------- --------------- --------------- ---------------
Mode I Mode II Mode III Ref. [@Radford:2009bs] Ref. [@Green:2016occ] Ref. [@Close:2005se]
$D_{s}(2^{1}S_{0})$ $D_{s1}(1P_{1})$ $0.07$ $0.07$ $0.07$ $0.01$ $0.05$ $3.35$ $3.3\pm 0.6$ $4.0 \pm 0.7$ $3.8 \pm 0.7$
$D_{s1}(1P_{1}^\prime)$ $0.18$ $0.18$ $0.18$ $0.57$ $4.6 \pm 1.2$ $4.3 \pm 1.1$ $4.3 \pm 1.1$
$D_{s}^{\ast}(2^{3}S_{1})$ $D_{s0}^{\ast}(1^{3}P_{0})$ $3.32$ $3.20$ $3.20$ $6.76$ $8.77$ $2.4 \pm 0.0$ $2.6 \pm 0.1$ $2.5 \pm 0.1$
$D_{s1}(1P_{1})$ $1.22$ $1.20$ $1.20$ $2.8$ $4.25$ $4.0 \pm 0.2$ $4.9 \pm 0.2$ $4.8 \pm 0.2$
$D_{s1}(1P_{1}^\prime)$ $0.30$ $0.30$ $0.30$ $0.24$ $0.41$ $2.1\pm 0.3$ $2.1 \pm 0.4$ $2.1 \pm 0.3$
$D_{s2}^{\ast}(1^{3}P_{2})$ $1.21$ $1.21$ $1.21$ $0.35$ $0.71$ $8.1 \pm 1.1$ $7.5 \pm 1.1$ $7.6 \pm 1.1$
$D_{s}(3^{1}S_{0})$ $D_{s1}(1P_{1})$ $1.58$ $1.60$ $1.60$
$D_{s1}(1P_{1}^\prime)$ $1.10$ $0.76$ $0.77$
$D_{s1}(2P_{1})$ $0.77$ $0.76$ $0.76$
$D_{s1}(2P_{1}^\prime)$ $0.05$ $0.05$ $0.05$
$D_{s}^{\ast}(3^{3}S_{1})$ $D_{s0}^{\ast}(1^{3}P_{0})$ $1.74$ $1.16$ $1.17$
$D_{s1}(1P_{1})$ $2.00$ $2.06$ $2.06$
$D_{s1}(1P_{1}^\prime)$ $0.39$ $0.28$ $0.28$
$D_{s2}^{\ast}(1^{3}P_{2})$ $0.99$ $0.65$ $0.66$
$D_{s0}^{\ast}(2^{3}P_{0})$ $14.06$ $13.04$ $13.05$
$D_{s1}(2P_{1})$ $3.91$ $3.80$ $3.80$
$D_{s1}(2P_{1}^\prime)$ $0.34$ $0.33$ $0.33$
$D_{s2}^{\ast}(2^{3}P_{2})$ $0.69$ $0.69$ $0.69$
---------------------------- ----------------------------- --------- --------- ---------- ------------------------ ----------------------- ---------------------- --------------- --------------- ---------------

  : Electric transition width for $S \to P\gamma $ processes, where $S$ and $P$ are $S-$ and $P-$wave charmed-strange mesons, respectively. \[Tab:S2P\]
----------------------------- ---------------------------- --------- --------- ---------- ------------------------ ----------------------- ----------------------- ------------------------ ---------------------- ---------------- ---------------------- ----------------------
Mode I Mode II Mode III Ref. [@Radford:2009bs] Ref. [@Green:2016occ] Ref. [@Korner:1992pz] Ref. [@Godfrey:2005ww] Ref. [@Close:2005se]
$D_{s0}^{\ast}(1^{3}P_{0})$ $D_{s}^{\ast}(1^{3}S_{1})$ $2.07$ $2.06$ $2.06$ $4.92$ $5.46$ $1.9$ $1.0$ $24.9\pm 1.9$ $14.5\pm 0.9$ $16.2 \pm 1.0$
$D_{s1}(1P_{1}^\prime)$ $D_{s}(1^{1}S_{0})$ $3.61$ $3.53$ $3.53$ $12.8$ $13.2$ $15.0$ $4.02$ $25.2 \pm 0.5$ $31.1 \pm 0.8$ $30.0 \pm 0.7$
$D_{s}^{\ast}(1^{3}S_{1})$ $4.79$ $4.74$ $4.74$ $15.5$ $17.4$ $5.6$ $4.41$ $14.6 \pm 0.2$ $22.8 \pm 1.2 $ $21.0 \pm 1.0$
$D_{s1}(1P_{1})$ $D_{s}(1^{1}S_{0})$ $18.85$ $18.18$ $18.18$ $54.5$ $61.2$ $1.6 \pm 2.3$ $6.2$ $4.53$ $17.2 \pm 0.7$ $10.3 \pm 0.6$ $11.4 \pm 0.6$
$D_{s}^{\ast}(1^{3}S_{1})$ $3.02$ $2.96$ $2.96$ $8.90$ $9.21$ $0.4 \pm 1.0$ $5.5$ $1.59$ $25.1 \pm 1.4$ $14.0 \pm 0.8 $ $15.8 \pm 0.9$
$D_{s2}^{\ast}(1^{3}P_{2})$ $D_{s}^{\ast}(1^{3}S_{1})$ $15.66$ $15.23$ $15.23$ $44.1$ $49.6$ $1.4 \pm 2.0$ $19.0$ $8.8$ $41.5 \pm 0.0$ $55.9^{+0.9}_{-0.6}$ $53.0^{+0.4}_{-0.5}$
$D_{s0}^{\ast}(2^{3}P_{0})$ $D_{s}^{\ast}(1^{3}S_{1})$ $0.03$ $0.03$ $0.03$
$D_{s}^{\ast}(2^{3}S_{1})$ $0.004$ $0.004$ $0.004$
$D_{s1}(2P_{1}^\prime)$ $D_{s}(1^{1}S_{0})$ $0.91$ $0.55$ $0.56$
$D_{s}^{\ast}(1^{3}S_{1})$ $2.45$ $1.78$ $1.79$
$D_{s}(2^{1}S_{0})$ $4.19$ $4.04$ $4.04$
$D_{s}^{\ast}(2^{3}S_{1})$ $3.15$ $3.11$ $3.11$
$D_{s1}(2P_{1})$ $D_{s}(1^{1}S_{0})$ $0.46$ $1.05$ $1.07$
$D_{s}^{\ast}(1^{3}S_{1})$ $0.05$ $0.13$ $0.13$
$D_{s}(2^{1}S_{0})$ $16.89$ $15.90$ $15.90$
$D_{s}^{\ast}(2^{3}S_{1})$ $2.45$ $2.38$ $2.38$
$D_{s2}^{\ast}(2^{3}P_{2})$ $D_{s}^{\ast}(1^{3}S_{1})$ $1.71$ $2.53$ $2.54$
$D_{s}^{\ast}(2^{3}S_{1})$ $12.89$ $12.23$ $12.23$
----------------------------- ---------------------------- --------- --------- ---------- ------------------------ ----------------------- ----------------------- ------------------------ ---------------------- ---------------- ---------------------- ----------------------

  : Electric transition width for $P \to S\gamma $ processes, where $P$ and $S$ are $P-$ and $S-$wave charmed-strange mesons, respectively. \[Tab:P2S\]
Numerical Results and Discussions {#Sec:Num}
=================================
As indicated in the last section, the partial widths of the electric transitions can be estimated with different approximations. Hereafter, we use Mode I, Mode II and Mode III to refer to the estimations with Eqs. (\[Eq:ModeI\]), (\[Eq:ModeII\]) and (\[Eq:ModeIII\]), respectively, and use them to check the reliability of the different approximations. With the wave functions estimated from the relativistic quark model and the formulas in the previous section, we obtain the partial widths of the electric transitions, which are listed in Tables \[Tab:S2P\]-\[Tab:D2P\].
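The coefficient $C_{fi}$ entering the width formulas can be evaluated with a standard Wigner $6j$ routine. A minimal sketch with SymPy, applied to the $D_s^\ast(2^3S_1) \to D_{s0}^\ast(1^3P_0)\gamma$ transition ($L_A=0$, $J_A=1$; $L_B=1$, $J_B=0$; $S=1$), evaluating $C_{fi}$ exactly as defined above:

```python
from sympy.physics.wigner import wigner_6j

def C_fi(L_A, J_A, L_B, J_B, S):
    # C_fi = max(L_A, L_B) * (2 J_B + 1) * {L_B J_B S; J_A L_A 1},
    # with the 6j symbol taken exactly as written in the text.
    return max(L_A, L_B) * (2 * J_B + 1) * wigner_6j(L_B, J_B, S, J_A, L_A, 1)

# D_s^*(2^3S_1) -> D_s0^*(1^3P_0) gamma: L_A = 0, J_A = 1, L_B = 1, J_B = 0, S = 1
print(C_fi(0, 1, 1, 0, 1))
```

SymPy returns the exact rational value of the symbol, which makes it easy to tabulate $C_{fi}$ for all the transitions considered below.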
In Table \[Tab:S2P\], we present the electric transitions for $S\to P\gamma$ processes, where $S$ and $P$ indicate the $S-$ and $P-$wave charmed-strange mesons, respectively. In addition, we also list the theoretical results from other groups [@Radford:2009bs; @Green:2016occ; @Goity:2000dk; @Close:2005se] for comparison. From the table, one finds that the estimations from the different approximations are almost the same, which indicates that the approximations from Mode I to Mode III are reliable and that the long-wavelength approximation is reasonable for the considered electric transitions of charmed-strange mesons. Most of our results are of the same order as those in Refs. [@Radford:2009bs; @Green:2016occ; @Goity:2000dk; @Close:2005se]. However, the partial widths of $D_s(2^1S_0) \to D_{s1}(1P_1) \gamma$ and $D_s(2^1S_0) \to D_{s1}(1P_1^\prime) \gamma$ reported in the literature differ significantly. Our estimation shows that the partial width of $D_s(2^1S_0) \to D_{s1}(1P_1) \gamma$ is 0.07 keV, which is of the same order as those in Refs. [@Radford:2009bs; @Green:2016occ], but the estimations in Refs. [@Goity:2000dk; @Close:2005se] are about two orders of magnitude larger. As for $D_s(2^1S_0) \to D_{s1}(1P_1^\prime) \gamma$, our estimation is of the same order as the one in Ref. [@Close:2005se], but much smaller than that in Ref. [@Goity:2000dk]. The estimations in the present work and in Refs. [@Radford:2009bs; @Green:2016occ; @Close:2005se] are all based on relativistic quark models. It should be noticed that the mass spectra estimated in Refs. [@Radford:2009bs; @Green:2016occ] are similar to the present one, with the masses of $D_{s0}(2317)$ and $D_{s1}(2460)$ well reproduced; thus the meson wave functions, and hence the electric transition widths, should be similar. In Ref. [@Goity:2000dk], the estimated mass spectrum differs considerably from the present one, and the estimated masses of $D_{s0}(2317)$ and $D_{s1}(2460)$ are far above the measured values, so the meson wave functions and electric transition widths differ substantially. As for Ref. [@Close:2005se], the estimations are based on the heavy-quark limit, which should be more reliable for bottom mesons.
As for the radiative decays of the $3S$ states, we find that $\Gamma(D_{s}^\ast(3^3S_1) \to D_{s0}^\ast(2^3P_0) \gamma)=(13 \sim 14)$ keV. The $D_{s}^\ast(3^3S_1)$ is far above the $DK$ threshold and dominantly decays into a charmed meson and a strange meson; its total width is estimated to be around $100$ MeV [@Song:2015nia]. With such a large width, the branching ratio of $D_{s}^\ast(3^3S_1) \to D_{s0}^\ast(2^3P_0) \gamma$ is of order $10^{-4}$.
In Table \[Tab:P2S\], we present our estimated widths for $P \to S\gamma$ processes. Our estimation indicates that the partial widths for $1P \to 1S \gamma$ vary from several keV to about 10 keV, which is consistent with the previous literature [@Radford:2009bs; @Green:2016occ; @Korner:1992pz; @Godfrey:2005ww; @Close:2005se; @Goity:2000dk]. The partial width of $D_{s0}(1^3P_0) \to D_s^\ast \gamma$ is estimated to be around 2 keV. The measured upper limits of $\Gamma_{D_{s0}(1^3P_0)}$ and $B(D_{s0}(1^3P_0) \to D_s^\ast \gamma)$ are $3.5$ MeV and $6\%$, respectively, so the upper limit on the partial width of $D_{s0}(1^3P_0) \to D_s^\ast \gamma$ is 210 keV, and our estimation is safely below this experimental bound.
As for $D_{s1}^\prime (2460)$, the widths of the $D_s \gamma$ and $D_s^\ast \gamma$ modes are 3.61 and 4.79 keV, respectively, both safely below the experimental upper limits. Moreover, we find that the partial width of the $D_s^\ast \gamma$ mode is a bit larger than that of $D_s \gamma$, which is similar to the findings of Refs. [@Radford:2009bs; @Green:2016occ; @Goity:2000dk; @Close:2005se] but different from the experimental measurements, $B(D_{s1}^\prime(2460) \to D_s \gamma)=(18 \pm 4)\%$ and $B(D_{s1}^\prime(2460) \to D_s^\ast \gamma)<8\%$. It should be noticed that the $D_{s1}^\prime(2460)$ state contains both $S=0$ (corresponding to the $^1P_1$ state) and $S=1$ (corresponding to the $^3P_1$ state) components, while in electric transitions the spins of the initial and final states must be the same; thus, the electric transitions involving the $D_{s1}(nP_1^\prime)$ and $D_{s1}(nP_1)$ states are sensitive to the spin singlet-triplet mixing.
As for the $D_{s1}(1P_1)$ state, our estimation indicates that the partial widths of $D_s \gamma$ and $D_s^\ast \gamma$ are 18.85 and 3.02 keV, respectively. The width of $D_{s1}(1P_1)$ is measured to be $(0.92\pm 0.05)$ MeV, so the branching ratios of $D_{s1}(1P_1) \to D_s \gamma $ and $D_s^\ast \gamma$ are about $2.0\%$ and $3.3 \times 10^{-3}$, which should be large enough to be detected. On the experimental side, there may already be some hint of the $D_{s1}(1P_1) \to D_s^\ast \gamma$ process. As for $D_{s2}(1^3P_2)$, we find that the partial width of $D_{s2}(1^3P_2) \to D_s^\ast \gamma$ can reach 15.66 keV, corresponding to a branching ratio of about $9 \times 10^{-4}$. As for the $2P$ states, we find that the partial widths of $D_{s1}(2P_1) \to D_{s}(2^1S_0) \gamma$ and $D_{s2}^\ast(2^3P_2) \to D_{s}^\ast(2^3S_1) \gamma$ are more than 10 keV. As shown in Ref. [@Song:2015nia], the total widths of $D_{s1}(2P_1)$ and $D_{s2}^\ast(2^3P_2)$ are estimated to be $285.3$ and $86.25$ MeV, respectively. Thus, the branching ratios of $D_{s1}(2P_1) \to D_{s}(2^1S_0) \gamma$ and $D_{s2}^\ast(2^3P_2) \to D_{s}^\ast(2^3S_1) \gamma$ are of order $10^{-5}$ and $10^{-4}$, respectively.
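The branching ratios quoted above follow from simple ratios of partial to total width. As a quick numerical check, with all widths converted to keV and the total width of $D_{s1}(1P_1)$ taken as 0.92 MeV from the text:

```python
# Branching ratios of D_s1(1P_1) from partial width / total width.
gamma_total = 0.92e3             # total width of D_s1(1P_1), keV
br_Ds = 18.85 / gamma_total      # D_s1(1P_1) -> D_s gamma
br_Ds_star = 3.02 / gamma_total  # D_s1(1P_1) -> D_s* gamma
print(f"B(D_s gamma) = {br_Ds:.1%}, B(D_s* gamma) = {br_Ds_star:.1e}")
```

The same arithmetic with the total widths of Ref. [@Song:2015nia] reproduces the order-of-magnitude estimates quoted for the $2P$ states.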
Our estimations for the $P \to D \gamma$ and $D\to P \gamma$ processes are listed in Tables \[Tab:P2D\] and \[Tab:D2P\]. For the $P \to D\gamma$ processes, the largest width is that of $D_{s2}^\ast (2^3P_2) \to D_{s3}(1^3D_3)\gamma$, which is 3.22 keV. For the $D \to P \gamma$ processes, the partial widths of the $D_s^\ast (1^3D_1) \to D_{s0}^\ast(1^3P_0) \gamma$, $D_{s3}(1^3D_3) \to D_{s2}^\ast(1^3P_2) \gamma$ and $D_{s2}(2D_2) \to D_{s1}(2P_1^\prime) \gamma $ processes are greater than 10 keV. These highly excited states are far above the $DK$ and $D^\ast K$ thresholds and dominantly decay into a charmed meson and a strange meson, so their total widths should be of order 100 MeV. Thus, the branching ratios of these radiative decays should be of order $10^{-4}$.
----------------------------- ---------------------------- -------- ------------------- ---------- -- --
Decay width (keV)
Mode I Mode II Mode III
$D_{s1}(2P_{1}^\prime)$ $D_{s}^{\ast}(1^{3}D_{1})$ $0.11$ $0.11$ $0.11$
$D_{s2}(1D_{2})$ $0.15$ $0.15$ $0.15$
$D_{s2}(1D_{2}^\prime)$ $0.04$ $0.04$ $0.04$
$D_{s1}(2P_{1})$ $D_{s}^{\ast}(1^{3}D_{1})$ $0.12$ $0.12$ $0.12$
$D_{s2}(1D_{2})$ $1.11$ $1.11$ $1.11$
$D_{s2}(1D_{2}^\prime)$ $0.45$ $0.44$ $0.44$
$D_{s2}^{\ast}(2^{3}P_{2})$ $D_{s}^{\ast}(1^{3}D_{1})$ $0.31$ $0.30$ $0.30$
$D_{s2}(1D_{2})$ $0.25$ $0.24$ $0.24$
$D_{s2}(1D_{2}^\prime)$ $0.13$ $0.13$ $0.13$
$D_{s3}(1^{3}D_{3})$ $3.22$ $3.15$ $3.15$
----------------------------- ---------------------------- -------- ------------------- ---------- -- --
: Electric transition width for $P \to D\gamma $ processes, where $P$ and $D$ are $P-$ and $D-$wave charmed-strange mesons, respectively. \[Tab:P2D\]
---------------------------- ----------------------------- --------- ------------------- ---------- -- --
Decay width (keV)
Mode I Mode II Mode III
$D_{s}^{\ast}(1^{3}D_{1})$ $D_{s0}^{\ast}(1^{3}P_{0})$ $21.26$ $20.49$ $20.49$
$D_{s1}(1P_{1}^\prime)$ $4.33$ $4.25$ $4.25$
$D_{s1}(1P_{1})$ $0.87$ $0.86$ $0.86$
$D_{s2}^{\ast}(1^{3}P_{2})$ $0.27$ $0.26$ $0.26$
$D_{s0}^{\ast}(2^{3}P_{0})$ $0.03$ $0.03$ $0.03$
$D_{s2}(1D_{2}^\prime)$ $D_{s1}(1P_{1}^\prime)$ $6.26$ $6.12$ $6.12$
$D_{s1}(1P_{1})$ $6.33$ $6.21$ $6.21$
$D_{s2}^{\ast}(1^{3}P_{2})$ $1.10$ $1.09$ $1.09$
$D_{s2}(1D_{2})$ $D_{s1}(1P_{1}^\prime)$ $8.59$ $8.37$ $8.37$
$D_{s1}(1P_{1})$ $7.10$ $6.95$ $6.95$
$D_{s2}^{\ast}(1^{3}P_{2})$ $1.72$ $1.69$ $1.69$
$D_{s3}(1^{3}D_{3})$ $D_{s2}^{\ast}(1^{3}P_{2})$ $11.77$ $11.56$ $11.56$
$D_{s}^{\ast}(2^{3}D_{1})$ $D_{s0}^{\ast}(1^{3}P_{0})$ $3.96$ $2.33$ $2.37$
$D_{s1}(1P_{1})$ $0.80$ $0.88$ $0.88$
$D_{s1}(1P_{1}^\prime)$ $0.71$ $0.54$ $0.54$
$D_{s2}^{\ast}(1^{3}P_{2})$ $0.32$ $0.23$ $0.23$
$D_{s0}^{\ast}(2^{3}P_{0})$ $32.75$ $30.25$ $30.26$
$D_{s1}(2P_{1}^\prime)$ $4.99$ $4.83$ $4.83$
$D_{s1}(2P_{1})$ $0.52$ $0.52$ $0.52$
$D_{s2}^{\ast}(2^{3}P_{2})$ $0.69$ $0.69$ $0.69$
$D_{s2}(2D_{2}^\prime)$ $D_{s1}(1P_{1}^\prime)$ $2.73$ $2.81$ $2.81$
$D_{s1}(1P_{1})$ $0.54$ $0.19$ $0.21$
$D_{s2}^{\ast}(1^{3}P_{2})$ $0.19$ $0.15$ $0.15$
$D_{s1}(2P_{1})$ $6.80$ $6.47$ $6.47$
$D_{s1}(2P_{1}^\prime)$ $6.56$ $6.40$ $6.40$
$D_{s2}^{\ast}(2^{3}P_{2})$ $0.12$ $0.12$ $0.12$
$D_{s2}(2D_{2})$ $D_{s1}(1P_{1}^\prime)$ $5.37$ $5.61$ $5.61$
$D_{s1}(1P_{1})$ $0.31$ $0.22$ $0.22$
$D_{s2}^{\ast}(1^{3}P_{2})$ $0.48$ $0.28$ $0.28$
$D_{s1}(2P_{1}^\prime)$ $11.93$ $11.37$ $11.37$
$D_{s1}(2P_{1})$ $1.98$ $1.94$ $1.94$
$D_{s2}^{\ast}(2^{3}P_{2})$ $1.31$ $1.29$ $1.29$
$D_{s3}(2^{3}D_{3})$ $D_{s2}^{\ast}(1^{3}P_{2})$ $0.002$ $0.07$ $0.09$
$D_{s2}^{\ast}(2^{3}P_{2})$ $8.52$ $8.34$ $8.34$
---------------------------- ----------------------------- --------- ------------------- ---------- -- --
: The same as Table \[Tab:P2D\] but for $D\to P \gamma$ process.\[Tab:D2P\]
Summary {#Sec:Sum}
=======
Radiative decays are among the important decay modes of the charmed-strange mesons, especially for the low-lying states. In the present work, we adopt a relativistic constituent quark model to describe the mass spectrum of the charmed-strange mesons, in which $D_{s0}(2317)$ and $D_{s1}(2460)$ are considered as the $1^3P_0$ and $1P_1^\prime$ charmed-strange mesons, respectively, while $D_{sJ}(3040)$ is assigned as the $D_{s1}(2P_1)$ state.
With the wave functions estimated in the relativistic quark model, we evaluate the electric transitions between the charmed-strange mesons. By comparing the transition widths obtained with different approximations, we find that the long-wavelength approximation is reasonable for most of the electric transitions between charmed-strange mesons. Our estimations indicate that the partial widths of $D_{s0}(1^3P_0) \to D_s^\ast \gamma$, $D_{s1}(1P_1) \to D_s^\ast \gamma$ and $D_{s1}(1P_1) \to D_s \gamma$ are all safely below the experimental upper limits. As for $D_{s1}(1P_1^\prime) \to D_s \gamma $ and $D_{s1}(1P_1^\prime) \to D_s^\ast \gamma $, we find that the branching ratios are large enough to be detected, so these processes could be searched for in future experiments at Belle II and LHCb. As for the $P\to D\gamma$ and $D\to P \gamma $ processes, the widths of some channels can reach about 10 keV, which may be tested by further experimental measurements.
Acknowledgement {#acknowledgement .unnumbered}
===============
This work is supported by the National Natural Science Foundation of China (NSFC) under Grants No. 11775050 and No. 11975075.
Electromagnetic transition operator {#Sec:App}
===================================
By replacing the quark field $\bar{\psi}$ with $\psi^\dagger$, one can use the matrix $\alpha$ instead of the $\gamma$ matrices in Eq. (\[Eq:He\]). Then the electromagnetic transition matrix element for a radiative decay process becomes $$\begin{aligned}
\mathcal{M}&=& \left \langle f \left| \sum_{j} e_j \alpha_j \cdot \epsilon e^{-i \mathbf{k}\cdot \mathbf{r}_j} \right| i \right\rangle\nonumber\\
\end{aligned}$$ Considering that the involved mesons are composite systems and that the relativistic Hamiltonian is $$\begin{aligned}
\hat{H}=\sum_j \left(\mathbf{\alpha}_j \cdot \mathbf{p}_j +\beta_j m_j \right) +\sum_{i,j} V\left({\mathbf{r}_i-\mathbf{r}_j}\right),\end{aligned}$$ we have the following identity, $$\begin{aligned}
\mathbf{\alpha}_j\equiv i \left[ \hat{H},\mathbf{r}_j\right].\end{aligned}$$ Then, the electromagnetic transition matrix can be expressed as, $$\begin{aligned}
\mathcal{M}&=& i \left\langle f\left| \left[\hat{H},\sum_j e_j \mathbf{r}_j\cdot \mathbf{\epsilon} e^{-i\mathbf{k} \cdot \mathbf{r}_j} \right]\right|i \right\rangle\nonumber\\
&+& i \left\langle f\left| \sum_j e_j \mathbf{r}_j\cdot \mathbf{\epsilon} \mathbf{\alpha}_j \cdot \mathbf{k} e^{-i\mathbf{k} \cdot \mathbf{r}_j} \right|i \right\rangle\nonumber \\
&=& -i(E_i-E_f-\omega_\gamma) \left \langle f\left|g_e\right|i\right \rangle -i\omega_\gamma \left \langle f\left|h_e\right|i\right \rangle, \label{Eq:App-M4}\end{aligned}$$ with $$\begin{aligned}
h_e&=&\sum_j e_j \mathbf{r_j} \cdot \mathbf{\epsilon} (1-\mathbf{\alpha}_j \cdot \hat{\mathbf{k}}) e^{-i\mathbf{k} \cdot \mathbf{r}_j},\nonumber\\ g_e&=&\sum_j e_j \mathbf{r_j} \cdot \mathbf{\epsilon} e^{-i\mathbf{k} \cdot \mathbf{r}_j}. \end{aligned}$$ $E_i$, $E_f$ and $\omega_\gamma$ in Eq. (\[Eq:App-M4\]) are the energies of the initial meson, the final meson and the emitted photon, respectively. Due to energy conservation, $E_i-E_f-\omega_\gamma = 0$, and thus one has $$\begin{aligned}
\mathcal{M}=-i\omega_\gamma \left \langle f\left|h_e\right|i\right \rangle\end{aligned}$$ Following the procedures used in Refs. [@Brodsky:1968ea; @Li:1997gd], one can get the non-relativistic expansion of $h_e$, which is, $$\begin{aligned}
h_e \simeq \sum_j \left[e_j \mathbf{r}_j \cdot \mathbf{\epsilon} -\frac{e_j}{2m_j} \mathbf{\sigma}_j \cdot \left( \mathbf{\epsilon \times \hat{\mathbf{k}}}\right)\right] e^{-i\mathbf{k} \cdot \mathbf{r}_j},\end{aligned}$$ where the first and second terms correspond to the electric and magnetic transitions, respectively.
[99]{}
R. Brandelik [*et al.*]{} \[DASP Collaboration\], Phys. Lett. [**70B**]{}, 132 (1977).

A. E. Asratian [*et al.*]{}, Z. Phys. C [**40**]{}, 483 (1988).

Y. Kubota [*et al.*]{} \[CLEO Collaboration\], Phys. Rev. Lett. [**72**]{}, 1972 (1994) \[hep-ph/9403325\].

B. Aubert [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. Lett. [**90**]{}, 242001 (2003) \[hep-ex/0304021\].

P. Krokovny [*et al.*]{} \[Belle Collaboration\], Phys. Rev. Lett. [**91**]{}, 262002 (2003) \[hep-ex/0308019\].

S. Godfrey and N. Isgur, Phys. Rev. D [**32**]{}, 189 (1985).

Z. X. Xie, G. Q. Feng and X. H. Guo, Phys. Rev. D [**81**]{}, 036014 (2010).

Y. J. Zhang, H. C. Chiang, P. N. Shen and B. S. Zou, Phys. Rev. D [**74**]{}, 014013 (2006) \[hep-ph/0604271\].

P. Bicudo, Nucl. Phys. A [**748**]{}, 537 (2005) \[hep-ph/0401106\].

A. Faessler, T. Gutsche, V. E. Lyubovitskij and Y. L. Ma, Phys. Rev. D [**76**]{}, 014005 (2007).

A. Faessler, T. Gutsche, V. E. Lyubovitskij and Y. L. Ma, Phys. Rev. D [**76**]{}, 114008 (2007).

M. Cleven, H. W. Grießhammer, F. K. Guo, C. Hanhart and U. G. Meißner, Eur. Phys. J. A [**50**]{}, no. 9, 149 (2014) \[arXiv:1405.2242 \[hep-ph\]\].

A. Datta and P. J. O'donnell, Phys. Lett. B [**572**]{}, 164 (2003).

C. J. Xiao, D. Y. Chen and Y. L. Ma, Phys. Rev. D [**93**]{}, no. 9, 094011 (2016) \[arXiv:1601.06399 \[hep-ph\]\].

M. E. Bracco, A. Lozea, R. D. Matheus, F. S. Navarra and M. Nielsen, Phys. Lett. B [**624**]{}, 217 (2005) \[hep-ph/0503137\].

M. F. M. Lutz and M. Soyeur, Prog. Part. Nucl. Phys. [**61**]{}, 155 (2008).

D. S. Hwang and D. W. Kim, Phys. Lett. B [**601**]{}, 137 (2004).

X. Liu, Y. M. Yu, S. M. Zhao and X. Q. Li, Eur. Phys. J. C [**47**]{}, 445 (2006) \[hep-ph/0601017\].

J. Lu, X. L. Chen, W. Z. Deng and S. L. Zhu, Phys. Rev. D [**73**]{}, 054012 (2006) \[hep-ph/0602167\].

J. B. Liu and M. Z. Yang, JHEP [**1407**]{}, 106 (2014) \[arXiv:1307.4636 \[hep-ph\]\].

Z. G. Wang, Phys. Rev. D [**75**]{}, 034013 (2007) \[hep-ph/0612225\].

S. Fajfer and A. P. Brdnik, Phys. Rev. D [**92**]{}, 074047 (2015) \[arXiv:1506.02716 \[hep-ph\]\].

Q. T. Song, D. Y. Chen, X. Liu and T. Matsuki, Phys. Rev. D [**91**]{}, 054031 (2015) \[arXiv:1501.03575 \[hep-ph\]\].

B. Aubert [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. Lett. [**97**]{}, 222001 (2006) \[hep-ex/0607082\].

G. L. Wang, J. M. Zhang and Z. H. Wang, Phys. Lett. B [**681**]{}, 326 (2009) \[arXiv:1001.2035 \[hep-ph\]\].

P. Colangelo, F. De Fazio, S. Nicotri and M. Rizzi, Phys. Rev. D [**77**]{}, 014012 (2008) \[arXiv:0710.3068 \[hep-ph\]\].

B. Zhang, X. Liu, W. Z. Deng and S. L. Zhu, Eur. Phys. J. C [**50**]{}, 617 (2007) \[hep-ph/0609013\].

R. Aaij [*et al.*]{} \[LHCb Collaboration\], Phys. Rev. Lett. [**113**]{}, 162001 (2014) \[arXiv:1407.7574 \[hep-ex\]\].

Q. T. Song, D. Y. Chen, X. Liu and T. Matsuki, Eur. Phys. J. C [**75**]{}, no. 1, 30 (2015) \[arXiv:1408.0471 \[hep-ph\]\].

Z. G. Wang, Eur. Phys. J. C [**75**]{}, no. 1, 25 (2015) \[arXiv:1408.6465 \[hep-ph\]\].

S. Godfrey and K. Moats, Phys. Rev. D [**90**]{}, no. 11, 117501 (2014) \[arXiv:1409.0874 \[hep-ph\]\].

H. W. Ke, J. H. Zhou and X. Q. Li, Eur. Phys. J. C [**75**]{}, no. 1, 28 (2015) \[arXiv:1411.0376 \[hep-ph\]\].

B. Aubert [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. D [**80**]{}, 092003 (2009) \[arXiv:0908.0806 \[hep-ex\]\].

S. Godfrey and K. Moats, Phys. Rev. D [**93**]{}, no. 3, 034035 (2016) \[arXiv:1510.08305 \[hep-ph\]\].

M. Tanabashi [*et al.*]{} \[Particle Data Group\], Phys. Rev. D [**98**]{}, no. 3, 030001 (2018).

N. Devlani and A. K. Rai, Phys. Rev. D [**84**]{}, 074030 (2011).

W. J. Deng, H. Liu, L. C. Gui and X. H. Zhong, Phys. Rev. D [**95**]{}, no. 3, 034026 (2017) \[arXiv:1608.00287 \[hep-ph\]\].

S. F. Radford, W. W. Repko and M. J. Saelim, Phys. Rev. D [**80**]{}, 034012 (2009) \[arXiv:0903.0551 \[hep-ph\]\].

N. Green, W. W. Repko and S. F. Radford, Nucl. Phys. A [**958**]{}, 71 (2017) \[arXiv:1605.06393 \[hep-ph\]\].

F. E. Close and E. S. Swanson, Phys. Rev. D [**72**]{}, 094004 (2005) \[hep-ph/0505206\].

J. L. Goity and W. Roberts, Phys. Rev. D [**64**]{}, 094007 (2001) \[hep-ph/0012314\].

S. Godfrey and R. Kokoski, Phys. Rev. D [**43**]{}, 1679 (1991).

S. Godfrey, Phys. Lett. B [**568**]{}, 254 (2003) \[hep-ph/0305122\].

J. G. Korner, D. Pirjol and K. Schilcher, Phys. Rev. D [**47**]{}, 3955 (1993) \[hep-ph/9212220\].

S. Godfrey, Phys. Rev. D [**72**]{}, 054029 (2005) \[hep-ph/0508078\].

S. J. Brodsky and J. R. Primack, Annals Phys. [**52**]{}, 315 (1969).

Z. p. Li, H. x. Ye and M. h. Lu, Phys. Rev. C [**56**]{}, 1099 (1997) \[nucl-th/9706010\].
[^1]: Corresponding Author
[^2]: In Ref. [@Brandelik:1977fg], these two states were named $F$ and $F^\ast$.
---
abstract: 'The energy spectrum of nucleons in high-density nuclear matter is investigated in the framework of relativistic meson-nucleon many-body theory, employing the $1/N$ expansion method. The coupling of the nucleon with the particle-hole excitations in the medium flattens the spectra in the vicinity of the Fermi surface. The effect grows logarithmically for increasing density and eventually leads to instability of the normal state. The validity of the mean-field theory at high density is criticized.'
author:
- |
Kazuhiro TANAKA[^1]\
\
[*Radiation Laboratory*]{}\
[*The Institute of Physical and Chemical Research (RIKEN)*]{}\
[*Hirosawa 2-1, Wako-shi, Saitama, 351-01 Japan*]{}
date:
title: 'Quasiparticle properties and the dynamics of high-density nuclear matter'
---
3.7ex
One universal character shared by most normal Fermi liquids is the enhancement of the effective mass around the Fermi surface [@ma]. Typical indications observed empirically are the density of single-particle levels around the “Fermi surface” in nuclei, and the logarithmic ($T^{3}\ln T$) correction to the specific heat at low temperature $T$ in normal liquid $^{3}$He. It is now well established that these phenomena due to the energy-dependence of the effective mass reflect the dynamics beyond the mean-field theory: The relevant mechanism is the coupling of the single-particle motion to other excitation modes, e.g., particle-hole (ph) excitations in the case of nuclear matter. As a result, the nucleon effective mass at normal density is close to the free mass $M$ or slightly larger at the Fermi surface while it is about the traditional value ($\sim 0.7 M$) away from the Fermi surface.
The effective mass at the Fermi surface determines the density of states at the Fermi surface. Therefore, the dynamical couplings to the single-particle motion could influence the observables significantly. In this light, the density-dependence, especially the high-density behavior, of the effective mass would be relevant to descriptions of high-density matter, needed to understand, e.g., stellar structure or evolution and high-energy heavy ion reactions.
The investigation of the high-density behavior of the effective mass requires a relativistic framework of Fermi liquids. In this paper we compute the quasiparticle spectra including higher-order many-body correlations employing a relativistic many-body theory: In order to exhibit clearly the points avoiding complicated aspects such as the compositeness of hadrons and the possibilities of new degrees of freedom at high density, we use the simplest relativistic model of nuclear matter, the “${{\sigma}}{{\omega}}$ model” [@se]: $${\cal L} = \overline{\psi} \left( i {{\mbox{$\!\not\!\partial$}}}
- g_{{{\omega}}} {{\mbox{$\!\not\!{{\omega}}$}}} - M + g_{{{\sigma}}} {{\sigma}}\right) \psi
+ \frac{1}{2} \left( \partial_{\mu} {{\sigma}}\right)^{2}
- \frac{1}{2} m_{{{\sigma}}}^{2} {{\sigma}}^{2} - \frac{1}{4}{{\omega}}_{\mu \nu}^{2}
+ \frac{1}{2} m_{{{\omega}}}^{2} {{\omega}}_{\mu}^{2},
\label{eq:la}$$ with ${{\omega}}_{\mu \nu} = \partial_{\mu}{{\omega}}_{\nu} - \partial_{\nu} {{\omega}}_{\mu}$. $\psi$, ${{\sigma}}$ and ${{\omega}}_{\mu}$ are the nucleon, ${{\sigma}}$- and ${{\omega}}$-meson fields.
Since this model is renormalizable, it allows a systematic investigation of the many-body correlations. One convenient scheme for this purpose is the $1/N$ expansion where $N$ is the extended number of nucleon species [@ta3]. The expansion can be obtained by applying the rescaling: $g_{{{\sigma}}} \rightarrow 1/\sqrt{N} g_{{{\sigma}}}$, $g_{{{\omega}}} \rightarrow 1/\sqrt{N} g_{{{\omega}}}$; ${{\sigma}}\rightarrow \sqrt{N} {{\sigma}}$, ${{\omega}}_{\mu} \rightarrow \sqrt{N} {{\omega}}_{\mu}$ in the lagrangian [(\[eq:la\])]{}. The energy density of nuclear matter up to the next-to-leading order is given by ${{\cal E}}= {{\cal E}}_{0} + {{\cal E}}_{\rm e} + {{\cal E}}_{\rm c}$ [@ta3; @ta4]. ${{\cal E}}_{0}$ denotes the Hartree term which is the leading order of $O(N)$, while ${{\cal E}}_{\rm e}$ and ${{\cal E}}_{\rm c}$ denote the exchange and the RPA-type correlation energy terms, both of which are of $O(1)$. The formulae for these terms can be found in ref. [@ta3]. Based on this framework, a successful description of the saturation property, the equation of state, and the Fermi-liquid properties of nuclear matter has been obtained [@ta3; @ta4; @he], improving the Hartree description. The essential ingredients are the many-body correlation effects due to ${{\cal E}}_{\rm c}$.
The quasiparticle energy corresponding to our energy density can be obtained by taking the functional derivative with respect to the Fermi distribution function $n({\mbox{\boldmath {$p$}}})$ following the Landau theory of Fermi liquids [@la1] and its relativistic extension [@ba3]: $$\epsilon ({\mbox{\boldmath {$p$}}}) = \frac{\delta {{\cal E}}}{\delta n({\mbox{\boldmath {$p$}}})}
= \epsilon_{0} ({\mbox{\boldmath {$p$}}}) + \epsilon_{\rm e}({\mbox{\boldmath {$p$}}})
+ \epsilon_{\rm c}({\mbox{\boldmath {$p$}}}),
\label{eq:qp2}$$ where the three terms derive from the corresponding three terms of our energy density. The familiar Hartree term $\epsilon_{0}$, which is of $O(1)$, contains the scalar and the vector mean-field potential contributions. The other contributions of $O(1/N)$ are given by [@ta4] $$\epsilon_{\rm x}({\mbox{\boldmath {$p$}}}) = \frac{{{M^{\ast}}}}{E^{\ast}(p)}
\overline{u}(p) {{\mit \Sigma}}_{\rm x}(p) u(p)
\;\;\; \;\;\; ({\rm x} = {\rm e}, \; {\rm c}),
\label{eq:e1}$$ where $E^{\ast}(p) = \sqrt{{\mbox{\boldmath {$p$}}}^{2} + {M^{\ast}}^{2}}$ and ${{M^{\ast}}}= M - g_{{{\sigma}}} \overline{{{\sigma}}}$ with $\overline{{{\sigma}}}$ the ground-state expectation value of the ${{\sigma}}$-field. $u(p)$ is the Hartree spinor normalized by $\overline{u}u = 1$. ${{\mit \Sigma}}_{\rm x}$ is the nucleon self-energy shown in fig. \[fig:fig2\]; ${\rm x} = {\rm e}$ and ${\rm c}$ correspond to the first (exchange) and the following (RPA) graphs, respectively.[^2]
The counter terms to renormalize the divergences of the self-energies are determined through eq. [(\[eq:qp2\])]{} consistently with the renormalization conditions for the energy density: We renormalize all vertex parts in free space at the scales $q^{2} = 0$ and ${{\mbox{$\!\not\!p$}}} = M$ for the meson and the nucleon legs, respectively. The Landau ghost singularity appearing in the one-loop meson propagators is removed by the method based on the Källén-Lehmann representation proposed by Redmond [@re] and extended to finite density in refs. [@ta1; @ta3; @ta4].
If the nucleon Hartree propagator connecting the external legs in the diagrams of fig. \[fig:fig2\] is separated into the “Feynman part” (F) and the “density-dependent part” (D) [@se], we obtain the conventional separation of ${{\mit \Sigma}}_{{{\rm x}}}$ into ${{\mit \Sigma}}_{{{\rm x}}{{\rm F}}}$ and ${{\mit \Sigma}}_{{{\rm x}}{{\rm D}}}$. For the present purpose, it is convenient to employ a new separation of the self-energy based on a Wick rotation (WR) of the loop integration in ${{\mit \Sigma}}_{{{\rm x}}{{\rm F}}}$ [@ta4; @bl]: $\epsilon_{{{\rm x}}}({\mbox{\boldmath {$p$}}}) = \epsilon_{{{\rm x}}{{\rm W}}}({\mbox{\boldmath {$p$}}})
+ \epsilon_{{{\rm x}}{{\rm P}}}({\mbox{\boldmath {$p$}}})$, where $\epsilon_{{{\rm x}}{{\rm W}}}$ involves the Wick rotated euclidian 4-integral, and $\epsilon_{{{\rm x}}{{\rm P}}}$ contains an on-shell intermediate nucleon line, arising from the nucleon pole due to the WR and the contribution of ${{\mit \Sigma}}_{{{\rm x}}{{\rm D}}}$. $\epsilon_{{{\rm x}}{{\rm P}}}$ can be expressed compactly: $$\epsilon_{{{\rm x}}{{\rm P}}} ({\mbox{\boldmath {$p$}}})= 4 \int^{{p_{{{\rm F}}}}}_{p}
\frac{{{\rm d}}q q^{2}}{(2 \pi)^{2}} \int^{1}_{-1}
{{\rm d}}\! \cos \theta \: f_{{{\rm x}}}({\mbox{\boldmath {$p$}}}, {\mbox{\boldmath {$q$}}}),
\label{eq:po}$$ with $\theta$ the angle between ${\mbox{\boldmath {$p$}}}$ and ${\mbox{\boldmath {$q$}}}$. $f_{{{\rm x}}}({\mbox{\boldmath {$p$}}}, {\mbox{\boldmath {$q$}}})$ are the exchange-type interactions between two quasiparticles with momenta ${\mbox{\boldmath {$p$}}}$, ${\mbox{\boldmath {$q$}}}$. $f_{\rm e}$ involves the noninteracting meson propagators, while $f_{\rm c}$ takes into account the modifications due to the one-loop meson self-energies.
The slope of the quasiparticle spectra at the Fermi surface defines the Fermi velocity: $${v_{{{\rm F}}}}= \left. \frac{\partial \epsilon({\mbox{\boldmath {$p$}}})}{\partial |{\mbox{\boldmath {$p$}}}|}
\right|_{|{\mbox{\boldmath {$p$}}}| = {p_{{{\rm F}}}}}
= \frac{{p_{{{\rm F}}}}}{E^{\ast}({p_{{{\rm F}}}})} + v_{{{\rm F}}\rm e} + v_{{{\rm F}}\rm c},
\label{eq:vf}$$ where the three terms derive from the corresponding terms of eq. [(\[eq:qp2\])]{}. In the nonrelativistic (NR) treatment, ${v_{{{\rm F}}}}\equiv {p_{{{\rm F}}}}/M_{\rm eff}$, with $M_{\rm eff}$ the effective mass at the Fermi surface. If, as we will show below, the enhancement of $M_{\rm eff}$ becomes progressively stronger with increasing density, $v_{\rm F}$ can eventually become a decreasing function of the density. In particular, $v_{\rm F} \rightarrow 0$ means that the spectra become flat around the Fermi surface.
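To make the Hartree (first) term of eq. [(\[eq:vf\])]{} concrete, the following sketch evaluates $p_{{{\rm F}}}/E^{\ast}(p_{{{\rm F}}})$ numerically. The effective mass used here, $M^{\ast} = 0.55\,M$, is purely illustrative; in the full theory $M^{\ast}$ is density-dependent and must be computed self-consistently.

```python
import math

M = 939.0        # free nucleon mass in MeV
HBARC = 197.327  # MeV fm, converts p_F from fm^-1 to MeV

def v_F_hartree(p_F_fm, M_star):
    """Hartree term of the Fermi velocity: p_F / E*(p_F)."""
    p = p_F_fm * HBARC
    return p / math.sqrt(p * p + M_star * M_star)

# Illustrative fixed effective mass (the self-consistent M* falls with density).
M_star = 0.55 * M
for p_F in (1.30, 4.07, 8.0):
    print(p_F, v_F_hartree(p_F, M_star))
```

The values rise monotonically toward the causal limit 1, qualitatively matching the dotted (Hartree) curve of fig. \[fig:fig4\].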
In fig. \[fig:fig4\] we show ${v_{{{\rm F}}}}$ as a function of ${p_{{{\rm F}}}}$. The parameters are taken from refs. [@ta3; @ta4] and are shown in table \[tab:tabn\]: The dotted line shows the Hartree approximation, i.e., the first term of eq. [(\[eq:vf\])]{} using the “Hartree” parameter set; it approaches the causal limit “1” at high density. In contrast, the solid line, which shows the full result of [(\[eq:vf\])]{} using the parameter set B, crosses ${v_{{{\rm F}}}}= 0$ at the critical ${p_{{{\rm F}}}}= p_{{{\rm F}}}^{{{\rm c}}}$, and ${v_{{{\rm F}}}}$ becomes negative for higher densities. The dashed line, including only one density-dependent bubble instead of the full RPA propagator,[^3] shows a behavior similar to the full case, but $p_{{{\rm F}}}^{{{\rm c}}}$ becomes smaller. We also show the full result of [(\[eq:vf\])]{} using the parameter set A by the dot-dashed line; the comparison with the solid line demonstrates the sensitivity of $p_{{{\rm F}}}^{{{\rm c}}}$ to the parameters (see eq. [(\[eq:d3\])]{} below). In fig. \[fig:fig5\] we show the quasiparticle energy of eq. [(\[eq:qp2\])]{} for the case of the solid line of fig. \[fig:fig4\]. For high densities, the spectra show the “anomaly” around the Fermi surface, i.e., they are flat or decreasing due to the inclusion of the higher-order self-energy terms.[^4] Thus, the effect grows with increasing density.
From fig. \[fig:fig5\](b), we see that this effect is due to $\epsilon_{{{\rm c}}{{\rm P}}}$, which is a decreasing function around the Fermi surface. (This is seen already at normal density, but for higher densities this effect becomes more pronounced and shows up in the total results.) The decrease of $\epsilon_{{{\rm c}}{{\rm P}}}$ around the Fermi surface could be expected from eq. [(\[eq:po\])]{} if the result of the angular average of $f_{{{\rm x}}} ({\mbox{\boldmath {$p$}}}, {\mbox{\boldmath {$q$}}})$ assumes a positive value (see also ref. [@bl] for the NR case).
How can we understand the origin of the growth of the anomaly due to $\epsilon_{{{\rm c}}{{\rm P}}}$? For this purpose it is useful to examine the leading high-density behavior of ${v_{{{\rm F}}}}$. For ${p_{{{\rm F}}}}\rightarrow \infty$ we set ${v_{{{\rm F}}}}\rightarrow 1 + \delta v_{{{\rm F}}{{\rm e}}}
+ \delta v_{{{\rm F}}{{\rm c}}} \equiv 1 + \delta v_{{{\rm F}}}$, where “1” is due to the first (Hartree) term of eq. [(\[eq:vf\])]{} while $\delta v_{{{\rm F}}{{\rm x}}}$ (${{\rm x}}= {{\rm e}}$, ${{\rm c}}$) denote the asymptotic limits of the following two ($O(1/N)$) terms $v_{{{\rm F}}{{\rm e}}}$, $v_{{{\rm F}}{{\rm c}}}$: $$\delta v_{{{\rm F}}{{\rm x}}} \approx
- \frac{p_{{{\rm F}}}^{2}}{\pi^{2}}
\int_{-1}^{1} {{\rm d}}\! \cos \theta \: \left.
f_{{{\rm x}}} ({\mbox{\boldmath {$p$}}}, {\mbox{\boldmath {$q$}}})
\right|_{p = q = {p_{{{\rm F}}}}}.
\label{eq:vfi}$$ The r.h.s. is the result obtained by substituting $\epsilon_{{{\rm x}}{{\rm P}}}$ of eq. [(\[eq:po\])]{} into eq. [(\[eq:vf\])]{}. “$\approx$” means that the contribution due to $\epsilon_{{{\rm x}}{{\rm W}}}$ is neglected because of the above discussion of fig. \[fig:fig5\]. (In fact, it can be shown analytically that $\epsilon_{{{\rm x}}{{\rm W}}}$ does not alter our conclusions [@ta5].)
It is convenient to separate the integrand of eq. [(\[eq:vfi\])]{} according to the types of the possible meson modes in the medium [@ta4]: $$\left. f_{{{\rm x}}}({\mbox{\boldmath {$p$}}}, {\mbox{\boldmath {$q$}}}) \right|_{p = q = {p_{{{\rm F}}}}}
= - \frac{1}{8 p_{{{\rm F}}}^{2}}
\sum_{{{\rm I}}} A_{{{\rm x}}{{\rm I}}}(\cos \theta)
h_{{{\rm I}}}(\cos \theta),
\label{eq:fs}$$ where ${{\rm I}}= {{\sigma}}, {{\rm M}}, {{\rm L}}$ and ${{\rm T}}$, corresponding to the ${{\sigma}}$-, mixed, longitudinal and transverse contributions. $h_{{{\rm I}}}$ is a dimensionless function accounting for the scalar and the vector vertex structures. $A_{{{\rm e}}{{\rm I}}}$ are the exchange contributions due to the noninteracting ${{\sigma}}$- and ${{\omega}}$-meson propagators while $A_{{{\rm c}}{{\rm I}}}$ give the modifications of $A_{{{\rm e}}{{\rm I}}}$ due to the RPA correlations.
To extract the leading asymptotic behavior of [(\[eq:fs\])]{}, we set ${{M^{\ast}}}= 0$; because there is no singularity for ${{M^{\ast}}}= 0$ and from the dimension counting, any contributions omitted here will be suppressed for ${p_{{{\rm F}}}}\rightarrow \infty$ compared to the retained ones. In this limit it can be easily seen that the contributions due to the M-mode are suppressed compared to the others.
In this case, the relevant contributions (${{\rm I}}= {{\sigma}}$, ${{\rm L}}$ and ${{\rm T}}$) have the form: $$a^{n}_{{{\rm I}}} = \frac{g_{{{\rm I}}}^{2}}{2}
\frac{1}{t + \kappa_{{{\rm I}}}^{2}}
\left(\frac{\hat{{{\mit \Pi}}}_{{{\rm I}}}(t)}{t + \kappa_{{{\rm I}}}^{2}}
\right)^{n},
\label{eq:yu}$$ where $t = 1- \cos \theta$, $g_{{{\rm L}}} = g_{{{\rm T}}} \equiv g_{{{\omega}}}$, $m_{{{\rm L}}}= m_{{{\rm T}}} \equiv m_{{{\omega}}}$, $\kappa_{{{\rm I}}}^{2} = m_{{{\rm I}}}^{2}/2 p_{{{\rm F}}}^{2}$, and $\hat{{{\mit \Pi}}}_{{{\rm I}}}(t) = \left. {{\mit \Pi}}_{{{\rm I}}}(p-q)
\right|_{p = q = {p_{{{\rm F}}}}}/2 p_{{{\rm F}}}^{2}$ with ${{\mit \Pi}}_{{{\rm I}}}$ the 1-loop self-energies for those modes.[^5] $a_{{{\rm I}}}^{n}$ give the contributions involving $n$ bubbles for the ${{\rm I}}$-mode, such that $A_{{{\rm e}}{{\rm I}}}= a_{{{\rm I}}}^{0}$ and $A_{{{\rm c}}{{\rm I}}} = \sum_{n \ge 1} (-1)^{n - 1} a_{{{\rm I}}}^{n}$.
If the integrand of eq. [(\[eq:vfi\])]{} given by eqs. [(\[eq:fs\])]{}, [(\[eq:yu\])]{} were finite everywhere in the integration region, the result would coincide with naive dimension counting, which leads to $\delta v_{{{\rm F}}{{\rm x}}} \rightarrow {\rm const}$ as $p_{\rm F} \rightarrow \infty$. For ${p_{{{\rm F}}}}\rightarrow \infty$, however, $\kappa_{{{\rm I}}}^{2} \rightarrow 0$ and therefore the integral diverges at $\theta = 0$. This divergence implies that $\delta v_{{{\rm F}}{{\rm x}}}$ could contain positive powers or logarithms of ${p_{{{\rm F}}}}$. (It can be shown [@ta5] that there is no other singularity.) $\kappa_{{{\rm I}}}^{2} \rightarrow 0$ corresponds formally to zero meson masses; therefore, these singularities for $\theta \simeq 0$, which now prove to give the dominant contributions for ${p_{{{\rm F}}}}\rightarrow \infty$, correspond to the infrared singularity in the massless limit.
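The infrared logarithm can be checked directly on the $n = 0$ term of eq. [(\[eq:yu\])]{}: keeping only the propagator factor, the angular integral is $\int_0^2 {\rm d}t/(t+\kappa^{2}) = \ln(1 + 2/\kappa^{2})$, which grows like $\ln(p_{{{\rm F}}}^{2}/m^{2})$ as $\kappa^{2} \rightarrow 0$. A minimal numerical check, with all couplings and slowly varying factors stripped off (so the numbers are schematic):

```python
import math

def angular_integral(kappa2, n_steps=200000):
    """Midpoint-rule integral of 1/(t + kappa^2) over t = 1 - cos(theta) in [0, 2]."""
    h = 2.0 / n_steps
    return sum(h / ((i + 0.5) * h + kappa2) for i in range(n_steps))

# As kappa^2 = m^2 / (2 p_F^2) shrinks, the integral grows only logarithmically.
for kappa2 in (1e-1, 1e-2, 1e-3):
    print(kappa2, angular_integral(kappa2), math.log(1.0 + 2.0 / kappa2))
```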
By substituting the analytic formulae of $h_{{{\rm I}}}$ [@ta4] and ${{\mit \Pi}}_{{{\rm I}}}$ [@ta1; @ku] into eqs. [(\[eq:vfi\])]{}-[(\[eq:yu\])]{} and by extracting the singularities for $\theta \simeq 0$, we can obtain $\delta v_{{{\rm F}}{{\rm x}}}$ up to logarithmic accuracy. For ${{\rm x}}= {{\rm e}}$, we obtain $$\begin{aligned}
\delta v_{{{\rm F}}{{\rm e}}} &=& \left(g_{{{\sigma}}}^{2}\times O(1)\right)
+ \left(\frac{1}{2}\left(\frac{g_{{{\omega}}}}{2 \pi}\right)^{2}
\left\{\ln\frac{p_{{{\rm F}}}^{2}}{m_{{{\omega}}}^{2}} + O(1) \right\}\right)
\nonumber \\
&+& \left(- \frac{1}{2} \left(\frac{g_{{{\omega}}}}{2 \pi}\right)^{2}
\left\{\ln\frac{p_{{{\rm F}}}^{2}}{m_{{{\omega}}}^{2}} + O(1) \right\}\right).
\label{eq:d1}\end{aligned}$$ The three terms show the contributions due to the relevant three modes ${{\sigma}}$, ${{\rm L}}$ and ${{\rm T}}$, respectively. Though both the contributions due to the L- and T-modes grow logarithmically, reflecting the logarithmic infrared divergence at $\theta \rightarrow 0$, they cancel out, leading to $\delta v_{{{\rm F}}{{\rm e}}} = {\rm const}$. The contribution to $\delta v_{{{\rm F}}{{\rm c}}}$ due to one bubble insertion, i.e., due to the $n = 1$ term of eq. [(\[eq:yu\])]{}, can be obtained similarly: $$\begin{aligned}
\delta v_{{{\rm F}}1 {\rm b}} &=&
\left(g_{{{\sigma}}}^{4} \times O(\ln p_{{{\rm F}}}^{2})\right)
+ \left(
- \left( \frac{g_{{{\omega}}}}{2 \pi}\right)^{4}
\left\{ \frac{4 p_{{{\rm F}}}^{2}}{m_{{{\omega}}}^{2}} - \frac{1}{3}
\ln \frac{p_{{{\rm F}}}^{2}}{M^{2}}
\ln\frac{p_{{{\rm F}}}^{2}}{m_{{{\omega}}}^{2}}+ O(\ln p_{{{\rm F}}}^{2})
\right\}\right)
\nonumber \\
&+& \left(- \left(\frac{g_{{{\omega}}}}{2 \pi}\right)^{4}
\left\{ \frac{1}{3} \ln \frac{p_{{{\rm F}}}^{2}}{M^{2}}
\ln \frac{p_{{{\rm F}}}^{2}}{m_{{{\omega}}}^{2}} + O(\ln p_{{{\rm F}}}^{2})
\right\}
\right).
\label{eq:d2}\end{aligned}$$ The coupling to the ph excitations makes the infrared divergences stronger;[^6] the most dominant contribution is due to the linearly divergent integral due to the L-mode, and the result grows like $p_{{{\rm F}}}^{2}$. However, it can be easily seen that the insertion of more bubbles causes stronger infrared divergences, leading to terms of higher powers in ${p_{{{\rm F}}}}$.
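The strengthening of the divergence with each bubble insertion can be seen schematically from eq. [(\[eq:yu\])]{}: treating $\hat{{{\mit \Pi}}}_{{{\rm I}}}(t)$ as roughly constant near $t = 0$, the $n$-bubble term behaves like $\int {\rm d}t/(t+\kappa^{2})^{n+1} \sim \kappa^{-2n} \propto p_{{{\rm F}}}^{2n}$, i.e., one extra power of $p_{{{\rm F}}}^{2}$ per bubble. A schematic numerical check (constant numerator, all prefactors dropped):

```python
def ir_integral(n, kappa2, steps=200000):
    """Midpoint-rule integral of 1/(t + kappa^2)^(n+1) over t in [0, 2]."""
    h = 2.0 / steps
    return sum(h / ((i + 0.5) * h + kappa2) ** (n + 1) for i in range(steps))

# Halving kappa^2 (i.e. doubling p_F^2) scales the n-bubble term by about 2^n.
for n in (1, 2, 3):
    print(n, ir_integral(n, 5e-3) / ir_integral(n, 1e-2))
```

The ratios come out near $2$, $4$ and $8$, reproducing the power counting behind eq. [(\[eq:d2\])]{}.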
This situation is reminiscent of the high-density NR electron gas [@ge]. This suggests that we have to sum up all the bubble graphs, forming the complete RPA series. Therefore, [*the infrared structure of $v_{{{\rm F}}}$ due to the coupling of the ph excitations naturally leads to our $1/N$ expansion scheme.*]{} By summing up all the bubbles, we obtain for the total $\delta v_{{{\rm F}}}$: $$\begin{aligned}
\delta v_{{{\rm F}}} &=& \left(g_{{{\sigma}}}(p_{{{\rm F}}})^{2}\times O(1)\right)
+ \left(g_{{{\omega}}}(p_{{{\rm F}}})^{2} \times O(1) \right)
\nonumber \\
&+& \left(- \frac{1}{2} \left(\frac{g_{{{\omega}}}(p_{{{\rm F}}})}{2 \pi}\right)^{2}
\left\{\ln\frac{p_{{{\rm F}}}^{2}}{m_{{{\omega}}}^{2}} + O(1) \right\}\right).
\label{eq:d3}\end{aligned}$$ The resummation of the ring graphs renders the infrared divergence mild, leaving only the logarithmically growing term due to the ${{\rm T}}$-mode (compare the dashed and the solid curves of fig. \[fig:fig4\]). Thus the decrease of $v_{{{\rm F}}}$, which led to the inversion of the spectrum of fig. \[fig:fig5\], is due to the logarithmic correction [(\[eq:d3\])]{} due to the self-energies beyond the mean-field theory.
The resummation also replaces the original coupling constants $g_{{{\rm I}}}$ renormalized at the meson momenta $q^{2} = 0$ by the running coupling constants $g_{{{\rm I}}}({p_{{{\rm F}}}})$ at the scale ${p_{{{\rm F}}}}$ [@ta5]: Before applying Redmond’s method to avoid the Landau ghost [@re], they are related by the one-loop formulae: for ${p_{{{\rm F}}}}\gg M$, $g_{{{\omega}}}({p_{{{\rm F}}}})^{2} \simeq g_{{{\omega}}}^{2} \left/
\left(1 - \frac{g_{{{\omega}}}^{2}}
{6 \pi^{2}} \ln\frac{p_{{{\rm F}}}^{2}}{M^{2}} \right) \right.,$ and similarly for $g_{{{\sigma}}}({p_{{{\rm F}}}})$. $g_{{{\rm I}}}({p_{{{\rm F}}}})$ of eq. [(\[eq:d3\])]{} should be understood as the result of Redmond’s method; they grow for increasing ${p_{{{\rm F}}}}$ toward finite values for the bare coupling constants [@re]. Thus the growth of the last term of eq. [(\[eq:d3\])]{} is enhanced by the growth of $g_{{{\omega}}}(p_{{{\rm F}}})$.
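The one-loop formula quoted above is easy to tabulate. The sketch below does so using the coupling of parameter set B ($g_{{{\omega}}}^{2}/4\pi = 4.22$, table \[tab:tabn\]), purely to indicate the scale; note that this is the running *before* Redmond's method is applied, so the formula develops a Landau pole (near $p_{{{\rm F}}} \sim 8\,{\rm fm}^{-1}$ for these illustrative inputs) that the Redmond-improved coupling avoids.

```python
import math

HBARC = 197.327  # MeV fm
M = 939.0        # free nucleon mass in MeV

def g_omega2_running(p_F_fm, g2=4.22 * 4 * math.pi):
    """One-loop running omega coupling at scale p_F (pre-Redmond, valid for p_F >> M)."""
    p = p_F_fm * HBARC
    denom = 1.0 - g2 / (6.0 * math.pi ** 2) * math.log(p * p / (M * M))
    return g2 / denom

for p_F in (5.0, 6.0, 7.0):
    print(p_F, g_omega2_running(p_F))
```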
Our result can be summarized by the behavior of the screening mass in the medium. The infrared behavior of the ${{\rm I}}$-mode contribution is governed by the screening mass (squared) $m_{{{\rm sc}}{{\rm I}}}^{2}$, which is obtained from the self-energy ${{\mit \Pi}}_{{{\rm I}}}(q)$ by letting $q^{0} \rightarrow 0$ first followed by ${\mbox{\boldmath {$q$}}}\rightarrow 0$ (see eqs. [(\[eq:vfi\])]{}-[(\[eq:yu\])]{}). The Lorentz invariance guarantees $m_{{{\rm sc}}{{\rm L}}}^{2} = m_{{{\rm sc}}{{\rm T}}}^{2}$, which would lead to the cancellation between the ${{\rm L}}$- and ${{\rm T}}$-mode contributions (see eq. [(\[eq:d1\])]{}). However, the coupling of the ph excitations breaks the invariance: The ${{\rm L}}$-mode acquires a nonzero Debye screening mass, which is $m_{{{\rm sc}}{{\rm L}}}^{2} = 2 g_{{{\omega}}}^{2} p_{{{\rm F}}}^{2}/\pi^{2}$ for ${p_{{{\rm F}}}}\rightarrow \infty$, but $m_{{{\rm sc}}{{\rm T}}}^{2} = 0$ due to baryon current conservation [@be]. As a result, the cancellation between the two modes becomes incomplete, leaving the result of eq. [(\[eq:d3\])]{}.
Now we discuss physical interpretations of our results.
Above the critical $p_{{{\rm F}}}^{{{\rm c}}}$ where $v_{\rm F} = 0$, the normal state is unstable. In our calculation of the ground-state energy density and the quasiparticle energy we assumed the normal state [@la1]. If $v_{{{\rm F}}} < 0$, however, this assumption is not valid. This suggests a stability condition, ${v_{{{\rm F}}}}> 0$, for a normal Fermi liquid, in addition to the well-known conditions for the dimensionless Landau-Migdal parameters [@la1]. If the ground state is not normal, one should re-compute the energy density self-consistently, allowing for new configurations such as the so-called “Fermi-gap” state: If one literally accepts the results of fig. \[fig:fig5\], this indicates a first-order phase transition at ${p_{{{\rm F}}}}= p_{{{\rm F}}}^{{{\rm c}}}$ from the normal state to the Fermi-gap state.[^7] In contrast to the Fermi sphere of the normal state, in the Fermi-gap state the nucleons occupy the momentum states within an inner sphere and an outer shell. In our model, $p_{{{\rm F}}}^{{{\rm c}}} = 4.07\,$fm$^{-1}$, which would imply a critical density for neutron matter of $\varrho^{{{\rm c}}} {\stackrel{\displaystyle{>}}{\sim}}15{\varrho}_{0}$.
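The quoted number follows from the free Fermi-gas relation for neutron matter (spin degeneracy 2), $\varrho = p_{{{\rm F}}}^{3}/3\pi^{2}$. A quick check, taking $\varrho_{0} \approx 0.15\,{\rm fm}^{-3}$ for the saturation density (our assumed value; the text does not specify it):

```python
import math

p_F_c = 4.07   # critical Fermi momentum in fm^-1 (from the text)
rho_0 = 0.15   # assumed nuclear saturation density in fm^-3

# Neutron-matter density (spin degeneracy 2): rho = p_F^3 / (3 pi^2)
rho_c = p_F_c ** 3 / (3.0 * math.pi ** 2)
print(rho_c, rho_c / rho_0)
```

The result, roughly $2.3\,{\rm fm}^{-3}$, i.e. about $15\,\varrho_{0}$, is consistent with the estimate quoted above.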
We also note another possible scenario: Near the critical point $v_{\rm F} = 0$, the terms of higher order than we considered here, which would give contributions $\sim ( g_{{{\omega}}}(p_{{{\rm F}}})^{2} \ln p_{{{\rm F}}}^{2} )^{n}$ ($n \ge 2$), also could be important. It would be a very hard job to compute these contributions. Intuitively, one could regard the result of eq. [(\[eq:d3\])]{} as the leading approximation of ${v_{{{\rm F}}}}\approx 1/
\left(1 + \left( g_{{{\omega}}}({p_{{{\rm F}}}})/2 \pi\right)^{2}/2
\times \ln \left(p_{{{\rm F}}}^{2}/m_{{{\omega}}}^{2}\right) \right)$. In this case ${v_{{{\rm F}}}}= 0$ would not occur for any finite ${p_{{{\rm F}}}}$; ${v_{{{\rm F}}}}\rightarrow 0$ asymptotically.
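The resummed estimate is easy to visualize numerically. The sketch below evaluates ${v_{{{\rm F}}}}\approx 1/\left(1 + \left(g_{{{\omega}}}/2\pi\right)^{2}/2 \times \ln(p_{{{\rm F}}}^{2}/m_{{{\omega}}}^{2})\right)$ with a *constant* coupling taken from parameter set B, an extra simplification (in the text $g_{{{\omega}}}(p_{{{\rm F}}})$ runs): $v_{{{\rm F}}}$ decreases monotonically but stays positive for any finite $p_{{{\rm F}}}$.

```python
import math

HBARC = 197.327   # MeV fm
m_omega = 783.0   # MeV

def v_F_resummed(p_F_fm, g_omega2=4.22 * 4 * math.pi):
    """Second-scenario estimate of v_F; constant coupling is an extra simplification."""
    p = p_F_fm * HBARC
    log = math.log(p * p / (m_omega * m_omega))
    return 1.0 / (1.0 + 0.5 * (g_omega2 / (4.0 * math.pi ** 2)) * log)

for p_F in (5.0, 50.0, 5000.0):
    print(p_F, v_F_resummed(p_F))
```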
In connection with these scenarios, we stress that the (formal) high-density limit based on the lagrangian [(\[eq:la\])]{} does not correspond to the mean-field theory, contrary to the usual claim [@se]: For the first scenario this is obvious; the second one implies an infinite density of states at the Fermi surface for ${p_{{{\rm F}}}}\rightarrow \infty$, which can never be reached in the mean-field theory. The usual claim tacitly assumes that the higher-order corrections are free of infrared singularities, which reflect large long-wavelength fluctuations of the fields. This assumption leads to the validity of naive dimension counting, and therefore to the dominance of the mean-field contributions, because they contain the largest number of independent fermion loops in each order of the interaction. As discussed above, the effects which modify this naive argument are due to the strong infrared behavior induced in the high-density medium.
This conclusion for the high-density behavior seems to be inevitable for relativistic descriptions: The relevant infrared behavior is due to the ph excitation in the vector channel (see eqs. [(\[eq:d1\])]{}-[(\[eq:d3\])]{}).[^8] However, the vector-boson degrees of freedom are indispensable for keeping matter from collapsing at high density, since they supply the repulsion. We also note that the conclusion might remain valid even if the compositeness of hadrons were taken into account: even if vertex form factors were introduced, they could not affect results dominated by the contribution from the infrared region ($q^{2} = 0$).
Though we have shown the possibility of the Fermi-gap state, the transition density $\varrho^{{{\rm c}}}$ appears to lie above the critical densities of more familiar new phases such as the quark-matter phase, the pion-condensed phase, etc. However, our result was obtained within a simple model, and $\varrho^{{{\rm c}}}$ is very sensitive even to small corrections, which might come from higher orders in $1/N$ or from other hadronic degrees of freedom such as $\pi$, $\rho$, etc. After the inclusion of these effects, the competition or coexistence of the Fermi-gap state with more familiar new phases would be an interesting aspect of high-density hadronic matter.
In conclusion, we have investigated dynamical effects in high-density nuclear matter due to higher-order many-body correlations. The anomaly in the fermion spectra, corresponding to the well-known effective-mass enhancement in NR Fermi liquids, grows with increasing density. The effect is due to the strong infrared behavior caused by the coupling of the ph excitations to the single-particle motion. The picture of high-density matter that emerges from our investigation is much more complicated than the mean-field description. Though a more realistic treatment would be required to establish the relevance to phenomenology, the universal character of this phenomenon already raises various interesting questions for high-density matter, such as the possibility of the Fermi-gap state, the thermodynamic properties and transport phenomena in this phase, and the extension to gauge theories. These points will be reported in future publications [@ta5].
The author expresses his thanks to Dr. W. Bentz for helpful discussions. He is also grateful to Prof. K. Yazaki and the members of the nuclear theory group at RIKEN for valuable comments. This work was performed under the auspices of the Special Researchers' Basic Science Program of RIKEN.
[99]{}
C. Mahaux, P. F. Bortignon, R. A. Broglia and C. H. Dasso, Phys. Reports [**120**]{} (1985) 1, and references therein.
B. D. Serot and J. D. Walecka, Adv. in Nucl. Phys. [**16**]{}, ed. J. W. Negele and E. Vogt (Plenum, 1986) p. 1.
K. Tanaka and W. Bentz, Nucl. Phys. [**A540**]{} (1992) 383.
K. Tanaka, W. Bentz and A. Arima, Nucl. Phys. [**A555**]{} (1993) 151.
P. J. Redmond, [ Phys. Rev. ]{} [**112**]{} (1958) 1404;\
N. N. Bogoliubov, A. A. Logunov and D. V. Shirkov, Sov. Phys. JETP (Engl. Trasl.) [**37**]{} (1960) 574.
K. Tanaka, W. Bentz, A. Arima and F. Beck, [ Nucl. Phys. ]{} [**A528**]{} (1991) 676.
W. Bentz, L. G. Liu and A. Arima, Ann. of Phys. (N.Y.) [**188**]{} (1988) 61.
G. Hejc, H. Baier and W. Bentz, Phys. Lett. B, in press.
L. D. Landau, Sov. Phys. JETP [**3**]{} (1956) 920; [**5**]{} (1957) 101; [**8**]{} (1959) 70;\
A. B. Migdal, Theory of finite Fermi systems and applications to atomic nuclei (Wiley, New York, 1967).
G. Baym and S. A. Chin, Nucl. Phys. [**A262**]{} (1976) 527.
J. P. Blaizot and B. L. Friman, Nucl. Phys. [**A372**]{} (1981) 69.
H. Kurasawa and T. Suzuki, Nucl. Phys. [**A445**]{} (1985) 685.
M. Gell-Mann and K. A. Brueckner, Phys. Rev. [**106**]{} (1957) 364.
C. J. Horowitz and B. D. Serot, [ Phys. Lett. B ]{} [**109**]{} (1982) 341.
K. Tanaka, in preparation.
Figure captions {#figure-captions .unnumbered}
===============
1. [Feynman graphs of the nucleon self-energy of $O(1/N)$. The dashed lines in the figure denote the noninteracting ${{\sigma}}$- and ${{\omega}}$-meson propagators; the solid lines denote the nucleon Hartree propagators on the background meson fields and the Fermi sea.]{} \[fig:fig2\]
2. [The Fermi velocity as a function of the Fermi momentum. For explanation of the curves, see text.]{} \[fig:fig4\]
3. [The real parts of the quasiparticle energy (eq. [(\[eq:qp2\])]{}) are shown as functions of momentum by the solid lines. (a), (b) and (c) are for the cases $p_{{{\rm F}}}=1.30$, $4.07$ and $4.50\,{\rm fm}^{-1}$, which respectively correspond to ${v_{{{\rm F}}}}> 0$, $= 0$ and $<0$. The dot-dashed line shows the Hartree contribution contained in the full result. The inset in (b) shows the $O(1/N)$ contributions as functions of momentum. The dot-dashed line shows $\epsilon_{{{\rm e}}}$ of eq. [(\[eq:qp2\])]{}, while the solid line shows $\epsilon_{{{\rm c}}}$. The dotted and dashed lines are the separated contributions $\epsilon_{{{\rm c}}{{\rm W}}}$ and $\epsilon_{{{\rm c}}{{\rm P}}}$ contained in the solid line.]{} \[fig:fig5\]
Table captions {#table-captions .unnumbered}
==============
1. [The parameters used for numerical computation.]{} \[tab:tabn\]
Table 1 {#table-1 .unnumbered}
=======
| Parameter set | $g_{\sigma}^{2}/4\pi$ | $g_{\omega}^{2}/4\pi$ | $m_{\sigma}$ \[MeV\] | $m_{\omega}$ \[MeV\] |
|---------------|-----------------------|-----------------------|----------------------|----------------------|
| Hartree       | 6.23                  | 8.18                  | 550                  | 783                  |
| $1/N$ A       | 2.24                  | 3.65                  | 550                  | 783                  |
| $1/N$ B       | 2.60                  | 4.22                  | 650                  | 783                  |
[^1]: Special Researcher, Basic Science Program
[^2]: For convenience of the following discussion, we employ a slightly different grouping of the next-to-leading terms compared to the preceding works [@ta3; @ta4].
[^3]: The Feynman bubbles are summed up in this result.
[^4]: The latter case with the negative ${v_{{{\rm F}}}}$ does not correspond to the normal state and therefore, strictly speaking, the Landau theory is not applicable. This point will be discussed later.
[^5]: In this work ${{\mit \Pi}}_{{{\rm L}}}$ denotes the minus of “${{\mit \Pi}}_{{{\rm L}}}$” in refs. [@ta3; @ta4].
[^6]: “h” includes the negative-energy states.
[^7]: The possibility of the Fermi-gap state for high-density nuclear matter was pointed out in ref. [@ho] in the Hartree-Fock approximation of the lagrangian [(\[eq:la\])]{}. However, as stressed above, the mechanism in our case is beyond the Hartree-Fock approximation (recall $\delta v_{{{\rm F}}{{\rm e}}} = {\rm const}$).
[^8]: In view of this, the extension of our approach to gauge theories would be appealing, and is under investigation for cases of cold QED and QCD plasma with finite fermion density [@ta5].
---
author:
- |
Roshan Foadi, Shrihari Gopalakrishna, Carl Schmidt\
Department of Physics and Astronomy, Michigan State University\
East Lansing, MI 48824, USA\
E-mail: , ,
title: Effects of Fermion Localization in Higgsless Theories and Electroweak Constraints
---
Introduction {#sec:Intro}
============
In the Standard Model (SM), the Higgs sector is responsible for electroweak symmetry breaking. The exchange of a virtual Higgs boson perturbatively unitarizes the longitudinal gauge-boson scattering amplitude. Without a physical Higgs boson, the theory would break down around the TeV scale. Higgsless theories have been proposed [@Csaki:2003dt] as alternatives to the SM, in which electroweak symmetry breaking is due to boundary conditions on gauge fields that propagate in five dimensions: the usual four Minkowskian dimensions plus an additional fifth spatial dimension. As is the usual practice, we refer to the extra-dimensional interval as the “bulk” and its four-dimensional endpoints as “branes”. In Higgsless theories, even though a physical scalar Higgs boson is not present, it has been shown [@SekharChivukula:2001hz] that the onset of unitarity violation can be delayed by new contributions from the Kaluza-Klein (KK) excitations of the gauge bosons. In our previous work [@FGS] we used deconstruction [@Arkani-Hamed:2001ca] to obtain a Higgsless theory-space model with a $U(1)\times [SU(2)]^N\times SU(2)_{N+1}$ gauge structure. We found that perturbative unitarity violation could be delayed satisfactorily if the heavy vector-boson states come in below about the TeV scale. The continuum limit of this theory-space model was a five-dimensional $SU(2)$ gauge theory with boundary conditions that break the theory to $U(1)$ on one of the branes, and with gauge kinetic terms localized on both branes.
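The "TeV scale" quoted above is the standard estimate from the growth of the $J=0$ partial wave of longitudinal gauge-boson scattering in the absence of a Higgs boson. A back-of-the-envelope version (the exact coefficient depends on channel and normalization conventions; this sketch uses the common single-channel choice $|a_0| = s/16\pi v^2 \le 1/2$):

```python
import math

v = 246.0  # GeV, electroweak vacuum expectation value

# |a0| = s / (16 pi v^2) <= 1/2  implies  sqrt(s) <= sqrt(8 pi) v
sqrt_s_max = math.sqrt(8.0 * math.pi) * v
print(sqrt_s_max)  # in GeV: roughly 1.2 TeV
```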
The issue of whether Higgsless theories are compatible with precision electroweak constraints is being actively investigated. In ref. [@Barbieri:2003pr] it was shown that Higgsless theories have trouble satisfying precision electroweak constraints, even if brane-localized gauge kinetic terms are included. In our previous work [@FGS], we showed that in our model, with standard model fermions confined to the branes, the contributions to electroweak observables could be described in terms of the oblique $S,~T~{\rm and}~U$ parameters [@PT]. We found that, owing to an approximate custodial symmetry, $T$ (and $U$) was compatible with the data, but $S$ was too large if the KK states had masses low enough to maintain perturbative unitarity. Possibilities for reducing $S$ in Higgsless theories have been found [@Csaki:2003zu], but only at the expense of producing a negative value of $T$. In Ref. [@Chivukula:2004pk] it is claimed that it is not possible to set both the $S$ and $T$ parameters simultaneously to zero, even if the bulk gauge coupling is made position-dependent.
It is important to note that all of these conclusions about electroweak constraints apply specifically to Higgsless theories with light fermions bound to the branes. In this article we shall explore how these conclusions change when the light fermions are allowed to have some extension into the bulk[^1]. We shall use the continuum theory from Ref. [@FGS] as our model, although the basic results should be applicable to any Higgsless theory. In Section \[sec:model1\] we begin by describing the gauge sector, along with a recapitulation of the results from Ref. [@FGS] with brane-localized fermions. We then extend this theory to incorporate fermions with some finite extension into the bulk. In Section \[sec:constraints\] we show that in this Higgsless model, which contains bulk fermions as well as fermion brane kinetic terms, it will be possible to make all of the $S,~T~{\rm and}~U$ parameters small enough to agree with the data. This will be the main result of this article. Finally, in Section \[sec:conclusions\], we will offer our conclusions and comment on some remaining issues to be tackled.
Higgsless Theory with Fermions {#sec:model1}
==============================
Gauge Sector {#subsec:gauge}
------------
As our toy model, we will consider the continuum limit of the theory of ref. [@FGS], which is arguably one of the simplest models of Higgsless electroweak symmetry breaking. This model is an $SU(2)$ gauge theory defined on an interval in the fifth dimension, $0\leq y\leq\pi R$, where the boundary conditions break the gauge symmetry down to $U(1)$ at one end of the interval. The five dimensional action is[^2] $$\begin{aligned}
{\cal S}&=& \int_0^{\pi R}dy\int d^4x
\left[
-{1\over4(\pi R)\hat{g}_5^2}W^{a\,MN}W^a_{MN}
-\delta(y){1\over4g^2}W^{a\,\mu\nu}W^a_{\mu\nu}\right.\nonumber\\
&&\qquad\qquad\qquad\quad\left.-\delta(\pi R-y)
{1\over4g^{\prime2}}W^{3\,\mu\nu}W^3_{\mu\nu}\right]\ ,
\label{eq:5daction}\end{aligned}$$ where, in this equation, the indices $M,N$ run over all five dimensions, and we impose the Dirichlet boundary condition, $W^a_\mu=0$, at $y=\pi R$ for $a\ne3$. The boundary kinetic energy term at $y=0$ is defined by interpreting the $\delta$-function as $\delta(y-\epsilon)$ with $\epsilon\rightarrow0^+$ and the fields having Neumann boundary conditions, $dW^a_\mu/dy=0$, at $y=0$. The $\delta$-function and the field $W^3_\mu$ at $y=\pi R$ should be interpreted similarly. Note that in the limit of small $g$ and $g^\prime$ the theory looks like an $SU(2)$ gauge theory and a $U(1)$ gauge theory, living at the left and right ends of the fifth-dimensional interval, respectively. It is the bulk fields which connect the $SU(2)$ and the $U(1)$ theories, and transmit the breaking of the gauge groups down to a single $U(1)_{EM}$.
The five-dimensional gauge fields can be expanded in a tower of four-dimensional Kaluza-Klein (KK) states: $$\begin{aligned}
W^{\pm\mu}(x,y)&=& \sum_{n=0}^\infty f_n(y)W_n^{\pm\mu}(x)\nonumber\\
W^{3\mu}(x,y)&=& eA^\mu(x)+\sum_{n=0}^\infty g_n(y)Z_n^{\mu}(x)\ ,
\label{eq:gaugeKK}\end{aligned}$$ where $W_n^{\pm\mu}$ has mass $m_{W_n}$, and $Z_n^{\mu}$ has mass $m_{Z_n}$. The lowest states of the tower, $W_0^{\pm\mu}$ and $Z_0^\mu$, are identified as the standard model $W^\pm$ and $Z$ bosons, respectively. Solving for the masses perturbatively in $\lambda\equiv g/\hat{g}_5$ and $\lambda'\equiv g'/\hat{g}_5$ we obtain $$\begin{aligned}
m_W^2\equiv m_{W_0}^2 & = & {\lambda^2\over (\pi R)^2}\left(1-{\lambda^2\over 3}
+{\cal O}(\lambda^4)\right) \nonumber \\
m_Z^2\equiv m_{Z_0}^2 & = & {\lambda^2+\lambda'^2\over (\pi R)^2}\left(1-{\lambda^2+\lambda'^2\over 3}
+{\lambda^2\lambda'^2\over\lambda^2+\lambda'^2}+{\cal O}(\lambda^4)\right)
\label{eq:smmass}\end{aligned}$$ for the standard model gauge bosons, and $$\begin{aligned}
m_{W_n}^2&\approx&m_{Z_n}^2 \ = \ \left({n\over R}\right)^2\left(1
+{\cal O}(\lambda^2)\right)
\label{eq:gaugemass}\end{aligned}$$ for the heavy gauge bosons.
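These perturbative mass formulas can be evaluated numerically. The short sketch below uses purely illustrative parameter values ($R^{-1}=1$ TeV and $\lambda\sim\lambda'\sim10^{-1}$, as discussed later in the text; they are assumptions of the example, not a fit to data) to confirm the hierarchy between the standard model bosons and the KK tower:

```python
import numpy as np

# Illustrative (assumed) parameters: R^{-1} = 1 TeV, lambda ~ lambda' ~ 0.1,
# roughly the sizes discussed later in the text.  Masses in GeV.
Rinv = 1000.0            # 1/R in GeV
lam, lamp = 0.10, 0.055  # lambda = g/g5hat, lambda' = g'/g5hat (assumed)

piR = np.pi / Rinv       # pi R in GeV^-1

# Masses through the first perturbative correction, Eq. (smmass)
mW2 = (lam**2 / piR**2) * (1 - lam**2 / 3)
mZ2 = ((lam**2 + lamp**2) / piR**2) * (
    1 - (lam**2 + lamp**2) / 3 + lam**2 * lamp**2 / (lam**2 + lamp**2))
mW, mZ = np.sqrt(mW2), np.sqrt(mZ2)

# Heavy KK gauge bosons, Eq. (gaugemass): m_n ~ n/R up to O(lambda^2)
mKK = np.array([n * Rinv for n in (1, 2, 3)])

print(f"mW = {mW:.1f} GeV, mZ = {mZ:.1f} GeV")
print("KK tower:", mKK, "GeV")

# Leading-order weak mixing: mW/mZ -> lambda/sqrt(lambda^2 + lambda'^2)
c_lo = lam / np.sqrt(lam**2 + lamp**2)
assert abs(mW / mZ - c_lo) < 0.01      # corrections are O(lambda^2)
assert mW < mZ < mKK[0]                # light SM bosons far below the KK tower
```

The light-boson masses are suppressed relative to the KK scale $1/R$ by the small ratio $\lambda$, while the leading-order mass ratio reproduces the usual weak mixing angle.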
Fermion Sector Model I: Brane-localized Fermions {#subsec:fermion1}
------------------------------------------------
For clarity we first consider the fermion sector of ref. [@FGS], where the fermions are restricted to the branes. The continuum limit of the fermion action can be written $$\begin{aligned}
{\cal S}^{(I)}&=& \int_0^{\pi R}dy\int d^4x
\left[
\delta(y)i\bar{\psi}_L\sla{D}\psi_L+\delta(\pi R-y)i\bar{\psi}_R\sla{D}\psi_R
\right]\ ,
\label{eq:fermionaction1}\end{aligned}$$ where the covariant derivatives on the left- and right-handed fields are $$\begin{aligned}
\sla{D}\psi_L&=& \left(\sla{\partial}-iT^a\sla{W}^a(y)-iY_L\sla{W}^3(\pi R)\right)\psi_L\nonumber\\
\sla{D}\psi_R&=& \left(\sla{\partial}-iY_R\sla{W}^3(y)\right)\psi_R\ .
\label{eq:derivs1}\end{aligned}$$ Note that the left-handed field $\psi_L$ lives at $y=0$ but couples also to the gauge field $W^3$ at $y=\pi R$. This non-locality may seem unnatural from the standpoint of an extra-dimensional theory; however, it is perfectly well-defined if we consider this from the standpoint of a continuum theory-space model. In the theory-space interpretation $W^a(0)$ and $W^3(\pi R)$ are just the gauge fields for independent $SU(2)$ and $U(1)$ gauge groups, and $y$ is a (continuous) label for the independent gauge groups.
Unfortunately, this way of incorporating fermions has some unpleasant features. Giving the fermions mass requires introducing a nonlocal mass term involving a Wilson line running between the two branes. More damaging, this fermion action produces electroweak radiative corrections that are too large, invalidating the theory. Therefore, we now consider an alternative fermion action.
Fermion Sector Model II: Fermions with Finite Extension into the Bulk {#sec:fermions2}
---------------------------------------------------------------------
Drawing on the analogy of the gauge action (\[eq:5daction\]), which has $SU(2)$ and $U(1)$ kinetic terms peaked at the two ends of the interval and connected through the bulk kinetic term, we consider a theory with left-handed and right-handed fermion kinetic terms peaked at the two ends of the interval and connected through a bulk fermion kinetic term[^3]: $$\begin{aligned}
{\cal S}^{(II)} = \int_0^{\pi R}dy && \int d^4x
\left[{1\over\pi R}\left({i\over2}\bar{\psi}\Gamma^M D_M\psi + h.c.
-M\bar{\psi}\psi\right)\right.\nonumber\\
&&\left.
+\delta(y){1\over t_L^2}i\bar{\psi}_L\sla{D}\psi_L
+\delta(\pi R-y)\left({1\over t_{\nu_R}^2}i\bar{\nu}_R\sla{D}\nu_R
+{1\over t_{e_R}^2}i\bar{e}_R\sla{D}e_R\right)
\right]\ ,
\label{eq:fermionaction2}\end{aligned}$$ The five-dimensional Dirac matrices are defined in terms of the four-dimensional ones by $\Gamma^M = (\gamma^\mu,-i\gamma^5)$. The five-dimensional fermion is equivalent to a four-dimensional Dirac fermion, $\psi=(\psi_L,\psi_R)$, where $\psi_L$ and $\psi_R$ are $SU(2)$ doublets, $\psi_L=(\nu_L,e_L)$ and $\psi_R=(\nu_R,e_R)$. The boundary kinetic energy term at $y=0$ is defined by interpreting the $\delta$-function as $\delta(y-\epsilon)$ for $\epsilon\rightarrow0^+$ with the boundary condition $\psi_R=0$ at $y=0$. Similarly, the boundary term at $y=\pi R$ is defined by interpreting the $\delta$-function as $\delta(\pi R-y+\epsilon)$ with the boundary condition $\psi_L=0$ at $y=\pi R$. The general treatment of possible fermion boundary conditions can be found in Ref. [@Csaki:2003sh].
The covariant derivative in Eq. (\[eq:fermionaction2\]) is $$D_M\psi = \left(\partial_M - iT^aW_M^a(y)-iY_LW_{M}^3(\pi R)\right)\psi,
\label{eq:bulkderiv}$$ where $Y_L$ is the $\psi_L$ hypercharge. At the interval boundaries the four-dimensional part of the covariant derivative (\[eq:bulkderiv\]) becomes: $$\begin{aligned}
(\sla{D}\psi_L)_{y=0} & = & \left(\sla{\partial}-iT^a\sla{W}^a(0)-iY_L\sla{W}^3(\pi R)\right)\psi_L \nonumber \\
(\sla{D}\psi_R)_{y=\pi R} & = & \left(\sla{\partial}-iT^3\sla{W}^3(\pi R)-iY_L\sla{W}^3(\pi R)\right)\psi_R \nonumber \\
& = & \left(\sla{\partial}-iY_R\sla{W}^3(\pi R)\right)\psi_R \ ,
\label{eq:derivends}\end{aligned}$$ where the $\psi_R$ hypercharge, $Y_R$, is related to $Y_L$ by $Y_R=T^3+Y_L$, as in the SM. Note that $Y_R$ is a 2$\times$2 diagonal matrix, with the $\nu_R$ hypercharge on the upper left, and the $e_R$ hypercharge on the lower right. Therefore, at $y=\pi R$ the covariant derivative term, $\bar{\psi}_R\sla{D}\psi_R$, splits into two separately gauge invariant terms, $\bar{\nu}_R\sla{D}\nu_R$ and $\bar{e}_R\sla{D}e_R$, as in Eq.(\[eq:fermionaction2\]). Note also that in the limit of small $t_L$, $t_{\nu_R}$, and $t_{e_R}$ the action ${\cal S}^{(II)}$ describes massless left-handed fermions gauged under an $SU(2)\times U(1)$ group living on the left end of the fifth-dimensional interval, and massless right-handed fermions gauged under a $U(1)$ living on the right end of the interval, exactly as in model I. It is the presence of the bulk fields which allow these light states to communicate with each other, supplying the analog of the Yukawa coupling of the SM, and giving mass to the fermions.
Letting $\chi$ denote either the five-dimensional electron or neutrino field, we can expand in a tower of four-dimensional KK states: $$\begin{aligned}
\chi_L(x,y)&=& \sum_{n=0}^\infty \alpha_n(y)\chi_{nL}(x)\nonumber\\
\chi_R(x,y)&=& \sum_{n=0}^\infty \beta_n(y)\chi_{nR}(x) \ .
\label{eq:fermionKK}\end{aligned}$$ The four dimensional fields $\chi_{nL}$ and $\chi_{nR}$ are the left-handed and right-handed components, respectively, of a mass-$m_n$ Dirac fermion, $(\chi_{nL},\chi_{nR})$. The heavy fermions, labeled by $n>0$ are just standard bulk fermions with masses determined by the bulk mass and the boundary conditions, and only slightly perturbed by the brane terms. The light fermions, however, are dominated by the brane terms, with only small extension into the bulk set by $t_L$ and $t_{\chi_R}$, which allows a light mass to exist. Solving perturbatively in $t_L$ and $t_{\chi_R}$ we find a light Dirac state with mass $$m_{\chi_0} = {t_L t_{\chi_R}e^{-(M\pi R)}\over\pi R}\left(1+ {\cal O}(t^2)
\right) \ .
\label{eq:mass2}$$
The fermion sector of model II can accommodate both leptons and quarks, as well as multiple generations and generational mixing. In this work we shall assume that $t_L$ and the bulk mass $M$ are universal for all fermions, so that the masses and mixing are determined by the $t_{\chi_R}$. Assuming $t_L$ to be of the same size as $\lambda\equiv g/\hat{g}_5$ and $M$ not too large, this implies that the $t_{\chi_R}$ are very small, except for the third generation. For example, with $R^{-1}\sim1$ TeV and $t_L\sim \lambda\sim10^{-1}$, we find that $t_{\chi_R}$ ranges from $10^{-11}$ for the lightest neutrino to $10^{-2}$ for the charm quark. We shall postpone the discussion of the details of the fermion masses and mixings, as well as problems associated with the third generation, to a followup paper [@followup]. For the remainder of this article we assume that $t_{\chi_R}$ is negligible and examine the effects of a universal and small, but non-negligible, value of $t_L$.
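This counting can be sketched by inverting Eq. (\[eq:mass2\]) for $t_{\chi_R}$ at the stated parameter point. The fermion masses below are rough illustrative inputs (assumptions of the sketch, not values taken from the text), so only the orders of magnitude are meaningful:

```python
import numpy as np

# Invert Eq. (mass2): m = t_L * t_R * exp(-M pi R) / (pi R)
#   =>  t_R = m * (pi R) * exp(M pi R) / t_L
# Illustrative (assumed) inputs: R^{-1} = 1 TeV, t_L = 0.1, bulk mass M = 0,
# and rough fermion masses in GeV.
Rinv, tL, Mhat = 1000.0, 0.10, 0.0
piR = np.pi / Rinv

masses = {                    # assumed illustrative values, GeV
    "nu (lightest)": 5e-11,   # ~0.05 eV
    "electron":      5.11e-4,
    "charm":         1.27,
    "top":           173.0,
}

tR = {name: m * piR * np.exp(Mhat) / tL for name, m in masses.items()}
for name, t in tR.items():
    print(f"{name:14s}  t_R ~ {t:.1e}")

# All t_R are tiny except for the third generation; the top quark would need
# t_R > 1, illustrating the top-quark problem deferred to the followup paper.
assert all(t < 1e-1 for n, t in tR.items() if n != "top")
assert tR["top"] > 1
```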
Electroweak Constraints {#sec:constraints}
=======================
In Ref. [@FGS] we showed that the electroweak precision constraints in model I with brane-localized fermions could be parametrized to order $\lambda^2$ fully in terms of the oblique parameters $S$, $T$, and $U$ [@PT]. This was possible because the couplings between the light fermions and the new heavy vector bosons are suppressed by a factor $\lambda$ relative to the couplings to the standard model $W$ and $Z$. As a result, the contributions of the heavy vector bosons to four-fermion operators at zero-momentum transfer are suppressed by a factor of $\lambda^4$ relative to the standard model $W$ and $Z$ contributions, and can be neglected. (There is a relative factor of $\lambda^2$ from the couplings and an additional factor of $\lambda^2$ due to the heavy vector boson masses in the boson propagator.) To order $\lambda^2$, we need only consider the couplings to the standard model $W$ and $Z$ for electroweak precision measurements. Any additional universal parameters, such as those considered in Ref. [@Barbieri:2004qk], can be neglected.
This result also applies to model II with light fermion extension into the bulk, if we make the two simple assumptions that we introduced earlier. Firstly, if we assume that $t_L\approx\lambda$, then the coupling between light fermions and the heavy vector bosons remains suppressed by a factor of $\lambda$, although the coefficient multiplying this factor is changed. (Effectively, the enhanced overlap of the bulk fermion wavefunctions with the heavy vector boson wavefunctions is compensated by the small probability for the light fermions to leak into the bulk, producing a change in this coefficient of order $t_L^2/\lambda^2\sim1$.) Secondly, if we assume that $t_{\chi_R}$ is negligible, then there are no additional right-handed charged or neutral currents beyond those that occur in the standard model. The couplings of the light fermions and the standard model $W$ and $Z$ can be described by the following interaction lagrangians, $$\begin{aligned}
{\cal L}_{CC}&=& {g^{CC}\over\sqrt{2}}W^{+\mu}\,\bar{\nu}\gamma_\mu P_Le\ +\ {\rm h.c.}\nonumber\\
{\cal L}_{NC}&=& Z^{\mu}\,\bar{\psi}
\left[g^{NC}_{3}T^3\gamma_\mu P_L
+g^{NC}_{Q}Q\gamma_\mu
\right]\psi\ +\ {\rm h.c.}\ .
\label{eq:CCLag}\end{aligned}$$
With these assumptions, we can now parametrize the influence of the new physics on the couplings in terms of $S$, $T$, and $U$. We shall take $\alpha$, $m_W$, and $m_Z$ as the fundamental input observables, since their relation to the parameters in the lagrangian is independent of how the fermions are incorporated into the theory. This is trivially seen to be true for $m_W$ and $m_Z$, while the flatness of the photon wave function imposes that $$\begin{aligned}
e^2&=& \left({1\over \hat{g}_5^2}+{1\over g^2}+{1\over g^{\prime2}}\right)^{-1}\ ,
\label{eq:em1}\end{aligned}$$ independent of fermion model. Following ref. [@burgess], we find the deviations in the relations between the universal $W$ and $Z$ couplings to be $$\begin{aligned}
g^{CC}&=& {e\over s}\left[1+{\alpha S\over4s^2}-
{c^2\alpha T\over2s^2}-{(c^2-s^2)\alpha U\over8s^4}\right]\nonumber\\
g^{NC}_{3}&=& {e\over sc}\left[1+{\alpha S\over4s^2}-
{(c^2-s^2)\alpha T\over2s^2}-{(c^2-s^2)\alpha U\over8s^4}\right]
\nonumber\\
g^{NC}_{Q}&=& -{es\over c}\left[1+{\alpha T\over2s^2}+{\alpha U\over8s^4}\right]
\ .
\label{eq:STUdef}\end{aligned}$$ In these expressions we have defined $c\equiv m_W/m_Z$ and $s\equiv(1-c^2)^{1/2}$. Note that our choice of definition for $\sin^2{\theta_W}$ is different from that used in Ref. [@burgess].
In model I with brane-localized fermions the couplings are determined by the values of the light boson wavefunctions at the boundaries $$\begin{aligned}
g^{CC(I)}&=& f_0(0)\nonumber\\
g^{NC(I)}_{3}&=&g_0(0)-g_0(\pi R)\nonumber\\
g^{NC(I)}_{Q}&=& g_0(\pi R)\ .
\label{eq:couplingsWF1}\end{aligned}$$ Solving perturbatively for these normalized wave functions, we find $$\begin{aligned}
g^{CC(I)}&=& {e\over s}\left[1+\lambda^2/6 +{\cal O}(\lambda^4)\right]\nonumber\\
g^{NC(I)}_{3}&=& {e\over sc}\left[1+\lambda^2/6 +{\cal O}(\lambda^4)\right]\nonumber\\
g^{NC(I)}_{Q}&=& -{es\over c}\left[1+{\cal O}(\lambda^4)\right]
\ .
\label{eq:couplings1}\end{aligned}$$ Thus, we obtain for this theory $$\begin{aligned}
\alpha S&=& 2s^2\lambda^2/3\nonumber\\
\alpha T&=&0\nonumber\\
\alpha U&=&0\ ,
\label{eq:STU1}\end{aligned}$$ as previously found in Ref. [@FGS].
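Extracting Eq. (\[eq:STU1\]) from Eqs. (\[eq:STUdef\]) and (\[eq:couplings1\]) amounts to solving a $3\times3$ linear system for $(\alpha S,\,\alpha T,\,\alpha U)$, which can be verified numerically; the value of $\lambda$ below is an arbitrary illustrative choice:

```python
import numpy as np

# Treat Eq. (STUdef) as a linear system for (alpha S, alpha T, alpha U),
# with left-hand sides given by the model-I deviations of Eq. (couplings1).
lam = 0.1                      # lambda = g/g5hat (assumed illustrative value)
c = 80.4 / 91.19               # c = mW/mZ
s = np.sqrt(1 - c**2)

# Fractional deviations of g^CC, g^NC_3, g^NC_Q from e/s, e/(sc), -es/c:
rhs = np.array([lam**2 / 6, lam**2 / 6, 0.0])

M = np.array([
    [1/(4*s**2), -c**2/(2*s**2),          -(c**2 - s**2)/(8*s**4)],
    [1/(4*s**2), -(c**2 - s**2)/(2*s**2), -(c**2 - s**2)/(8*s**4)],
    [0.0,         1/(2*s**2),              1/(8*s**4)],
])

aS, aT, aU = np.linalg.solve(M, rhs)
print(f"alpha*S = {aS:.3e}, alpha*T = {aT:.1e}, alpha*U = {aU:.1e}")

# Eq. (STU1): alpha*S = 2 s^2 lambda^2 / 3,  T = U = 0
assert abs(aS - 2 * s**2 * lam**2 / 3) < 1e-12
assert abs(aT) < 1e-12 and abs(aU) < 1e-12
```

The equality of the $g^{CC}$ and $g^{NC}_3$ deviations forces $\alpha T=0$, the unshifted $g^{NC}_Q$ then forces $\alpha U=0$, and the common shift fixes $\alpha S$.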
We now consider the couplings in fermion model II with finite extension into the bulk. They can be expressed as $$\begin{aligned}
g^{CC(II)}&=&g^{CC(I)} \int_0^{\pi R} dy\left[{1\over\pi R}+{1\over t_L^2}\delta(y)\right]
\left({f_0(y)\over f_0(0)}\right)\alpha(y)^2\nonumber\\
g^{NC(II)}_{3}&=&g^{NC(I)}_{3}\int_0^{\pi R} dy\left[{1\over\pi R}+{1\over t_L^2}\delta(y)\right]
\left({g_0(y)-g_0(\pi R)\over g_0(0)-g_0(\pi R)}\right)\alpha(y)^2\nonumber\\
g^{NC(II)}_{Q}&=& g^{NC(I)}_{Q}\ ,
\label{eq:couplings2p}\end{aligned}$$ where we have factored out the values from model I for clarity. (Note that we have assumed that all of the left-handed fermion wave functions are identical, which is valid in the limit of negligible $t_{\chi_R}$.) The change in the $g^{CC}$ and $g^{NC}_3$ couplings between model I and model II is, in fact, a suppression factor. This is seen from the fact that the fermion wave functions are normalized by $$\begin{aligned}
1\ =\ \int_0^{\pi R} dy\left[{1\over\pi R}+{1\over t_L^2}\delta(y)\right]
\alpha(y)^2\ ,
\label{eq:fermnorm}\end{aligned}$$ while the factors in parentheses are positive and less than one: $$0\le\qquad {f_0(y)\over f_0(0)} \approx {g_0(y)-g_0(\pi R)\over g_0(0)-g_0(\pi R)}
\approx 1-{y\over\pi R}\qquad \le1\ .$$ The suppression factors for $g^{CC}$ and $g^{NC}_3$ are identical to leading order in $\lambda^2$.
Evaluating the integrals, we obtain $$\begin{aligned}
g^{CC(II)}&=&g^{CC(I)} (1-At_L^2)\nonumber\\
g^{NC(II)}_{3}&=&g^{NC(I)}_{3}(1-At_L^2)\nonumber\\
g^{NC(II)}_{Q}&=& g^{NC(I)}_{Q}\ ,
\label{eq:couplings2}\end{aligned}$$ where $$\begin{aligned}
A&=&e^{-\hat{M}}{\sinh\hat{M}\over\hat{M}}
-{1\over2\hat{M}}\left(1-e^{-\hat{M}}{\sinh\hat{M}\over\hat{M}}\right)
\ ,
\label{eq:CCsm}\end{aligned}$$ and $\hat{M}=M\pi R$ is the scaled bulk mass. In the limit $\hat{M}\rightarrow0$ we find $A\rightarrow1/2$ . We now see that allowing the fermions to extend into the bulk, as in model II, can be used to cancel the effects of $S$ in electroweak measurements. Comparing Eq. (\[eq:couplings2\]) with Eq. (\[eq:couplings1\]), we see that $S$ can effectively be set to zero (while retaining $T=U=0$) by the choice $$t_L^2\ =\ {\lambda^2\over6A}\ .$$
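As a cross-check, the coefficient $A$ can be reproduced by evaluating the overlap integrals of Eqs. (\[eq:couplings2p\]) and (\[eq:fermnorm\]) numerically. The sketch below assumes a left-handed zero-mode bulk profile $\alpha(y)\propto e^{-My}$ (this profile is an assumption of the example; it is not written out in the text), and also confirms the $\hat{M}\rightarrow0$ limit and the cancellation condition for $t_L$:

```python
import numpy as np

def A_closed(Mhat):
    """Closed-form A of Eq. (CCsm)."""
    B = np.exp(-Mhat) * np.sinh(Mhat) / Mhat
    return B - (1 - B) / (2 * Mhat)

def A_numeric(Mhat, tL=1e-3, npts=200001):
    """Suppression coefficient from the overlap integral of Eq. (couplings2p),
    assuming a left-handed zero-mode profile alpha(y) ~ exp(-M y)."""
    u = np.linspace(0.0, 1.0, npts)          # u = y / (pi R)
    prof2 = np.exp(-2 * Mhat * u)            # alpha(y)^2 up to normalization
    trap = lambda f: np.sum((f[1:] + f[:-1]) / 2) * (u[1] - u[0])
    norm = 1 / tL**2 + trap(prof2)           # brane + bulk pieces, Eq. (fermnorm)
    overlap = 1 / tL**2 + trap(prof2 * (1 - u))
    suppression = 1 - overlap / norm         # = A * tL^2 + O(tL^4)
    return suppression / tL**2

for Mhat in (0.5, 1.0, 2.0):
    assert abs(A_closed(Mhat) - A_numeric(Mhat)) < 1e-4

assert abs(A_closed(1e-6) - 0.5) < 1e-3      # Mhat -> 0 limit: A -> 1/2

# Cancellation condition t_L^2 = lambda^2 / (6 A):
lam, Mhat = 0.1, 1.0
tL = lam / np.sqrt(6 * A_closed(Mhat))
print(f"A({Mhat}) = {A_closed(Mhat):.4f}  =>  t_L = {tL:.3f}")
```

With this profile the required $t_L$ indeed comes out of the same size as $\lambda$, consistent with the assumption $t_L\approx\lambda$ made in the previous section.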
Conclusions {#sec:conclusions}
===========
In models of Higgsless electroweak symmetry breaking it is straightforward to incorporate a custodial symmetry, which naturally ensures that $T=0$. However, if the standard model fermions are localized to the branes, the contribution to $S$ is typically sizable. In the analysis of Ref. [@FGS] and reproduced as model I in this paper, the contribution to $S$ is proportional to $\lambda^2$, where $\lambda$ is the ratio of the brane to bulk gauge couplings. The quantity $\lambda^2$ is fixed to be of order $10^{-2}$ by the fact that the standard model gauge bosons have the right mass, while the first KK vector boson must have a mass below about 1 TeV in order to preserve unitarity up to some reasonable scale.
In this paper we have extended the analysis of Ref. [@FGS] to include the effects of light fermion extension into the bulk with large fermion brane kinetic terms in model II (with coefficient $1/t_L^{2}$), in direct analogy to the gauge sector. This has no effect on the custodial symmetry, thus keeping $T=0$, but it does produce a new contribution to $S$ which is proportional to $t_L^2$ and of opposite sign to the previous contribution proportional to $\lambda^2$. Therefore, it is possible to obtain a cancellation between these contributions. Of course, to obtain $S=0$ would appear to require fine tuning of two independent parameters, $\lambda^2$ and $t_L^2$. At the moment, we do not have a precise mechanism to produce this tuning naturally; however, we do note that the quantities play suggestively analogous roles in the two different sectors of the theory. That is, the quantity $\lambda^2$ is a measure of the extent to which the gauge fields leak into the bulk away from the brane at $y=0$, while the quantity $t_L^2$ is a measure of the extent to which the left-handed fermions leak into the bulk away from the brane at $y=0$. This offers hope that these parameters might be correlated and cancel naturally in some future model. In any event, we offer this as a proof of principle of a Higgsless model with both $S$ and $T$ set to zero.
In model II the light fermion masses-squared are suppressed with respect to the natural size of $(\pi R)^{-2}$ by the amount that the left-handed and right-handed fermions can leak away from the branes at $y=0$ and $y=\pi R$ respectively. That is, they are suppressed by the factor $t_L^2t_{\chi_R}^2$. Given that $t_L$ is constrained to keep $S$ fixed at zero, the masses are determined by $t_{\chi_R}$. We have found that all the SM-fermion masses can be obtained, while keeping $t_L$ universal, except for the case of the top quark. The corresponding values of $t_{\chi_R}$ are negligible in the calculation of electroweak observables. The details of the mass spectrum, as well as a discussion of the top-quark problem will be postponed to a followup paper [@followup].
Finally, we want to stress the generality of this approach. For simplicity, we have considered a model from continuum theory space. However, this picture can be generalized to any 5-dimensional model, with a flat or warped background. For example, one can apply it to the model of Ref. [@Barbieri:2003pr], by noting that their model can be obtained as a generalization of our continuum theory space model. If we start with our model and “fold it in half” by identifying the points $y$ and $\pi R-y$, we can treat the $SU(2)$ symmetries for $y<\pi R/2$ and $y>\pi R/2$ as independent gauge groups, $SU(2)_L$ and $SU(2)_R$ (connected by an appropriate boundary condition at the reflection point $y=\pi R/2$). The $U(1)$ symmetry at $y=\pi R$ in our model can then be extended to range over $0\le y\le\pi R/2$ to obtain the model of Ref. [@Barbieri:2003pr]. This same “folding” procedure can also be applied to the fermions in our model II, producing two sets of bulk fermions, both with boundary kinetic terms at $y=0$, and connected by an appropriate boundary condition at $y=\pi R/2$. Given the generality of this approach to satisfying the electroweak precision constraints, further investigations in this direction are worthwhile, with particular attention to the constraints imposed by top-quark phenomenology.
While we were completing this manuscript, a paper [@Cacciapaglia:2004rb] appeared on the arXiv that discusses localization of fermions as a way to address electroweak constraints in the context of a Higgsless model in warped space. The general conclusions in that paper are similar to ours, although the details are quite different. They also include some discussion of flavor issues and difficulties with the third generation in the warped-space model.
.2 cm
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was supported by the US National Science Foundation under grants PHY-0070443 and PHY-0244789. We would like to thank Sekhar Chivukula for useful discussions.
[99]{} C. Csaki, C. Grojean, H. Murayama, L. Pilo and J. Terning, Phys. Rev. D [**69**]{}, 055006 (2004) \[arXiv:hep-ph/0305237\]. R. Sekhar Chivukula, D. A. Dicus and H. J. He, Phys. Lett. B [**525**]{}, 175 (2002) \[arXiv:hep-ph/0111016\];\
R. S. Chivukula, D. A. Dicus, H. J. He and S. Nandi, Phys. Lett. B [**562**]{}, 109 (2003) \[arXiv:hep-ph/0302263\]. R. Foadi, S. Gopalakrishna and C. Schmidt, JHEP [**0403**]{}, 042 (2004) \[arXiv:hep-ph/0312324\]. N. Arkani-Hamed, A. G. Cohen and H. Georgi, Phys. Rev. Lett. [**86**]{}, 4757 (2001) \[arXiv:hep-th/0104005\];\
C. T. Hill, S. Pokorski and J. Wang, Phys. Rev. D [**64**]{}, 105005 (2001) \[arXiv:hep-th/0104035\]. R. Barbieri, A. Pomarol and R. Rattazzi, Phys. Lett. B [**591**]{}, 141 (2004) \[arXiv:hep-ph/0310285\]. M. E. Peskin and T. Takeuchi, Phys. Rev. D [**46**]{}, 381 (1992).
C. Csaki, C. Grojean, L. Pilo and J. Terning, Phys. Rev. Lett. [**92**]{}, 101802 (2004) \[arXiv:hep-ph/0308038\];\
G. Cacciapaglia, C. Csaki, C. Grojean and J. Terning, arXiv:hep-ph/0401160. R. S. Chivukula, E. H. Simmons, H. J. He, M. Kurachi and M. Tanabashi, arXiv:hep-ph/0406077. R. Foadi, S. Gopalakrishna and C. Schmidt, In preparation. C. Csaki, C. Grojean, J. Hubisz, Y. Shirman and J. Terning, Phys. Rev. D [**70**]{}, 015012 (2004) \[arXiv:hep-ph/0310355\]. R. Barbieri, A. Pomarol, R. Rattazzi and A. Strumia, arXiv:hep-ph/0405040;\
R. S. Chivukula, E. H. Simmons, H. J. He, M. Kurachi and M. Tanabashi, arXiv:hep-ph/0408262. C. P. Burgess, S. Godfrey, H. Konig, D. London and I. Maksymyk, Phys. Rev. D [**49**]{}, 6115 (1994) \[arXiv:hep-ph/9312291\]. G. Cacciapaglia, C. Csaki, C. Grojean and J. Terning, arXiv:hep-ph/0409126.
[^1]: The idea of fermion de-localization as a potential mechanism to ease constraints from electroweak precision measurements was mentioned, but not pursued, in Ref. [@Barbieri:2003pr].
[^2]: Note that we have taken $y\leftrightarrow \pi R -y$ with respect to the action in ref. [@FGS]. We have also scaled out a factor $\pi R$ in the first term in order to make $\hat{g}_5$ dimensionless.
[^3]: Note that we have scaled out a factor $\pi R$ in the bulk integral, in order for the parameters $t_L$, $t_{\nu_R}$, and $t_{e_R}$ to be dimensionless.
MZ-TH/04-05\
[**Goodness-of-fit tests in many dimensions**]{} [André van Hameren]{}
[*Institut für Physik, Johannes-Gutenberg-Universität,\
Staudinger Weg 7, D-55099 Mainz, Germany*]{}
[andrevh@thep.physik.uni-mainz.de]{}
[**Abstract**]{}\
Introduction
============
Goodness-of-fit ([GOF]{}) tests are designed to test the hypothesis that a sample of data is distributed following a given probability density function ([PDF]{}). The sample could, for example, consist of results of a repeated experiment, and the [PDF]{} could represent the theoretical prediction for the distribution of these results. The test consists of the evaluation of a function of the data, the [GOF]{} [*statistic*]{}, and the qualification of this result using the probability distribution of all possible results when the hypothesis is true, the [*test-distribution*]{} ([TD]{}). Despite the consensus that [GOF]{} tests are crucial for the validation of models in the scientific process, their success is mainly restricted to one-dimensional cases, that is, to situations in which the data-points have only one degree of freedom. The quest for [GOF]{} tests useful in situations where the number ${\mathit{dim}}$ of dimensions is larger than one still continues [@AslanZech2002; @AslanZech2002-2; @Raja2003].
In the following, we will see that the difficulty with [GOF]{} tests in many dimensions is to keep them [*distribution-free*]{}, that is, to construct them such that the [TD]{} is independent of the [PDF]{}.[^1] We will, however, also see how [GOF]{} tests can be constructed such that the asymptotic [TD]{}, in the limit of an infinite sample size, has a Gaussian limit for ${\mathit{dim}}\to\infty$ for any [PDF]{}, so that it only depends on the expectation value and the variance of the [GOF]{} statistic in this limit. Finally, we will encounter an explicit example for which the asymptotic [TD]{} depends, for any [PDF]{}, essentially only on the expectation value and the variance of the statistic for any ${\mathit{dim}}>1$.
The structure of goodness-of-fit tests
======================================
A [GOF]{} statistic is a function ${T}_{{N}}$ of the data sample ${\{{\omega}_i\}_{i=1}^{{N}}}$ constructed such that, under the hypothesis that the data are distributed in a space ${\Omega}$ following the theoretical [PDF]{}${P}$, there is a number ${t_{\infty}}$ such that $$\lim_{{N}\to\infty}{T}_{{N}}({\{{\omega}_i\}_{i=1}^{{N}}})
= {t_{\infty}}\;\;.$$ The initial, naïve, trust in its usefulness stems from the idea that, for a sample of finite size, the value of ${T}_{{N}}$ should be close to ${t_{\infty}}$ if the data are distributed following ${P}$, and that the value of ${T}_{{N}}$ is probably not so close to ${t_{\infty}}$ if the data are not distributed following ${P}$. This idea immediately leads to the question what is “close”, which can be answered by the [*test-distribution*]{} ([TD]{}) $${\mathcal{P}}_{{N}}(t)
= \int {\delta}(\,t-{T}_{{N}}({\{{\omega}_i\}_{i=1}^{{N}}})\,)
\,\prod_{i=1}^{{N}}{P}({\omega}_{i})d{\omega}_{i}
\;\;,
\label{testdistribution}$$ where each integration variable ${\omega}_{i}$ runs over the whole space ${\Omega}$, and ${\delta}$ denotes the Dirac distribution. ${\mathcal{P}}_{{N}}$ gives the probability distribution of the value of ${T}_{{N}}$ under the hypothesis that the data are indeed distributed following ${P}$. If it is very low at the value of ${T}_{{N}}$ for the empirical data, then the hypothesis that these data are distributed following ${P}$ has to be rejected; it would under that hypothesis be very improbable to get such a value. In fact, knowledge of the value of the number ${t_{\infty}}$ is not necessary. One only needs to know where the bulk of the [TD]{} is.
The evaluation of ${T}_{{N}}({\{{\omega}_i\}_{i=1}^{{N}}})$ and the qualification of this result with ${\mathcal{P}}_{{N}}$ constitute a [GOF]{} test. Notice that the [TD]{} is also necessary to qualify ${T}_{{N}}$ itself: it should consist of a peak around the expectation value[^2] $${E}({T}_{{N}}) = \int{T}_{{N}}({\{{\omega}_i\}_{i=1}^{{N}}})
\,\prod_{i=1}^{{N}}{P}({\omega}_{i})d{\omega}_{i}
\;\;.$$ If, for example, ${\mathcal{P}}_{{N}}$ is almost flat, then the test is useless, since any data sample will lead to a value of ${T}_{{N}}$ that is equally probable, and the test is not capable of distinguishing them.
Difficulty in many dimensions
-----------------------------
The difficulty with the construction of [GOF]{} tests for ${\mathit{dim}}>1$ is that it is in general very hard to calculate ${\mathcal{P}}_{{N}}$. There is a way to avoid this, by using the distribution from the case that ${P}$ is constant. One then needs a mapping ${\varphi}$ of the data-points such that the determinant of the Jacobian matrix of this mapping is equal to ${P}$: $$\left|\det\frac{\partial X_{k}({\varphi}({\omega}))}{\partial X_{l}({\omega})}\right|
={P}({\omega})
\;\;,$$ where $X_{k}({\omega})$ is the $k$-th coordinate of data-point ${\omega}$. Under the hypothesis that the original data are distributed following ${P}$, the mapped data are distributed following the uniform distribution. For ${\mathit{dim}}=1$, this mapping is simply given by the integrated [PDF]{}, or probability [*distribution*]{} function $${\varphi}({\omega}) = \int_{-\infty}^{{\omega}}{P}({\omega}')\,d{\omega}'
\;\;,$$ since $$\begin{aligned}
\int{\delta}(\,t-{T}_{{N}}({\{{\varphi}({\omega}_i)\}_{i=1}^{{N}}})\,)
\,\prod_{i=1}^{{N}}{P}({\omega}_{i})d{\omega}_{i}
&=&\int{\delta}(\,t-{T}_{{N}}({\{{\varphi}_i\}_{i=1}^{{N}}})\,)
\,\prod_{i=1}^{{N}}d{\varphi}_{i}
\nonumber\\
&=&{\mathcal{P}}^{\mathrm{uniform}}_{{N}}(t)
\;\;,
\nonumber\end{aligned}$$ where each integration variable ${\varphi}_i$ runs from $0$ to $1$. ${\mathcal{P}}^{\mathrm{uniform}}_{{N}}$ is, for popular tests, known in the limit ${N}\to\infty$. This asymptotic distribution ${\mathcal{P}}^{\mathrm{uniform}}_{\infty}$ is assumed not to be too different from ${\mathcal{P}}^{\mathrm{uniform}}_{{N}}$. Tests for which this method can be applied are called [*distribution-free*]{}.
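For ${\mathit{dim}}=1$ this mapping method can be sketched in a few lines; the exponential [PDF]{} and the Kolmogorov-Smirnov statistic below are illustrative choices, not prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D example of the mapping method: data hypothesized to follow the
# exponential PDF P(w) = exp(-w) on [0, inf).  The integrated PDF
# phi(w) = 1 - exp(-w) maps the data to [0,1]; under the hypothesis the
# mapped points are uniform, so a statistic of the phi_i is distribution-free.
N = 2000
data = rng.exponential(size=N)          # sample actually drawn from P
phi = 1.0 - np.exp(-data)               # probability integral transform

# Kolmogorov-Smirnov statistic against the uniform distribution
u = np.sort(phi)
i = np.arange(1, N + 1)
D = max(np.max(i / N - u), np.max(u - (i - 1) / N))
print(f"KS statistic D_N = {D:.4f}  (typical size ~ 1/sqrt(N) = {N**-0.5:.4f})")

assert 0.0 <= D <= 1.0
assert D < 0.1                          # data consistent with the hypothesis
```

Since the mapped points are uniform under the hypothesis, the statistic can be qualified with ${\mathcal{P}}^{\mathrm{uniform}}_{{N}}$ regardless of which [PDF]{} ${P}$ was hypothesized.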
Crude solution
--------------
For ${\mathit{dim}}>1$, finding the mapping mentioned before is in general even more difficult than finding ${\mathcal{P}}_{{N}}$. At least an estimate of ${\mathcal{P}}_{{N}}$ can be found using a straightforward Monte Carlo technique: one just has to generate ‘theoretical data samples’ whose data-points are distributed following ${P}$, and make a histogram of the values of ${T}_{{N}}$ for these samples. Depending on how accessible the analytic structure of ${P}$ is, several techniques exist for generating the theoretical samples. In the worst case, when ${P}$ is just given as a ‘black box’, the Metropolis-Hastings method can be used, possibly with its efficiency improved by techniques such as those suggested in [@Abraham1999]. Notice that one does not need extremely many samples, since one is, for this purpose, interested in the bulk of the distribution, not in the tails.
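A minimal sketch of this Monte Carlo estimate of the [TD]{}; the [PDF]{} (a 2-D Gaussian) and the toy statistic are both assumptions of the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo estimate of the test-distribution P_N(t): generate
# 'theoretical data samples' from P (here a 2-D standard Gaussian, an
# illustrative stand-in for a general PDF) and histogram the statistic T_N.
# As a toy statistic we take the mean squared radius of the sample.
N, dim, n_mc = 100, 2, 5000
T = lambda sample: np.mean(np.sum(sample**2, axis=1))

t_theory = np.array([T(rng.standard_normal((N, dim))) for _ in range(n_mc)])

# Qualify an 'empirical' sample (here also drawn from P) by a two-sided
# tail probability under the estimated test-distribution.
t_obs = T(rng.standard_normal((N, dim)))
p = np.mean(t_theory >= t_obs)
p_two_sided = 2 * min(p, 1 - p)
print(f"t_obs = {t_obs:.3f}, two-sided p-value ~ {p_two_sided:.3f}")

assert 0.0 <= p_two_sided <= 1.0
assert abs(np.mean(t_theory) - dim) < 0.1   # E(T_N) = dim for this statistic
```

Only the bulk of `t_theory` matters for this qualification, which is why a modest number of Monte Carlo samples suffices.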
Even with modern computer power, however, this Monte Carlo method can become very time consuming, especially for large ${N}$ and large ${\mathit{dim}}$. In the next section, we will see how practical [GOF]{} statistics for ${\mathit{dim}}>1$ can be constructed for which the asymptotic [TD]{} can be obtained in a more efficient way.
Construction of goodness-of-fit statistics in many dimensions
=============================================================
Several [GOF]{} statistics for the uniform distribution in many dimensions exist. They are called [*discrepancies*]{} [@Tichy1997] and intensively studied in the field of Quasi Monte Carlo integration [@Niederreiter1992; @QuasiMonteCarlo], for which one uses [*low-discrepancy sequences*]{} of multi-dimensional integration-points. These sequences give a faster convergence than expected from the common theory of Monte Carlo integration, because they are distributed ‘more uniformly’ than uniformly distributed random sequences; they give a [GOF]{} that is ‘unacceptably good’. When ${\mathit{dim}}=1$, discrepancies can be used directly as [GOF]{} tests for general [PDF]{}s using the ‘mapping method’ mentioned before, and indeed, the [*Kolmogorov-Smirnov*]{} statistic is equivalent to the [*$^*$-discrepancy*]{}, and the [*Cramér-von Mises*]{} statistic is equivalent to the [*$L_{2}^{*}$-discrepancy*]{}.
In the following, we will have a look at the structure of discrepancies, and we will see how they can be deformed into [GOF]{} statistics for general [PDF]{}s.
The structure of discrepancies
------------------------------
Discrepancies anticipate the fact that, if a sequence ${\{{\omega}_{i}\}_{i=1}^{{N}}}$ is uniformly distributed in a space ${\Omega}$ and ${N}$ becomes large, then the average of an integrable function over the sequence should converge to the integral over ${\Omega}$ of the function: $${\langle{f}\rangle_{{N}}} \to {\langle{f}\rangle}
\quad\textrm{for ${N}\to\infty$}
\;\;,$$ where $${\langle{f}\rangle_{{N}}} = \frac{1}{{N}}\sum_{i=1}^{{N}}{f}({\omega}_{i})
\quad\textrm{and}\quad
{\langle{f}\rangle} = \int_{{\Omega}}{f}({\omega})d{\omega}\;\;.$$ Thus a class of functions ${\mathcal{H}}$ and a measure ${\mu}$ on ${\mathcal{H}}$ are chosen, and the discrepancy is defined as $${D}_{{N}} = \bigg(\;
\int_{{\mathcal{H}}}|{\langle{f}\rangle_{{N}}} - {\langle{f}\rangle}|^{r}\,{\mu}(d{f})
\;\bigg)^{1/{r}}
\;\;.
\label{defDisc}$$ So it is the integration error measured in a class of functions. For example, ${\mathcal{H}}$ could consist of indicator functions of a family ${\mathcal{S}}$ of subsets of ${\Omega}$ with ${\mu}$ such that for ${r}\rightarrow\infty$ $${D}_{{N}} = \sup_{{S}\in{\mathcal{S}}}
\bigg|\frac{1}{{N}}\sum_{{\omega}_{i}\in{S}}1
- \int_{{S}}d{\omega}\bigg|
\;\;.$$ In this case, the discrepancy is the maximum error made, if the volume of each subset is estimated using ${\{{\omega}_{i}\}_{i=1}^{{N}}}$. Especially interesting are the [*quadratic*]{} discrepancies [@vanHameren2001], for which ${r}=2$, so that they are completely determined by the two-point function of ${\mu}$: $${D}_{{N}} = \bigg(\;
\frac{1}{{N}^2}\sum_{i,j=1}^{{N}}{b}({\omega}_{i},{\omega}_{j})
\;\bigg)^{1/2}
\;\;,$$ with $${b}({\omega}_{1},{\omega}_{2})
= {c}({\omega}_{1},{\omega}_{2})
-\int_{{\Omega}}[{c}({\omega}_{1},{\omega})+{c}({\omega}_{2},{\omega})]\,d{\omega}+\int_{{\Omega}}\!\int_{{\Omega}}
{c}({\omega},\eta)\,d{\omega}d\eta
,$$ where $${c}({\omega}_1,{\omega}_2)
= \int_{{\mathcal{H}}}{f}({\omega}_1){f}({\omega}_2)\,{\mu}(d{f})
\;\;.$$ So the discrepancy is the sum of the correlations of all pairs of data-points, measured with correlation function ${b}$. If the measure ${\mu}$ itself is completely determined by its two-point function, it is called [*Gaussian*]{}.
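For a concrete instance of such a pair sum, take for ${\mathcal{H}}$ the indicator functions of intervals $[0,t)$ with $t$ uniform on $[0,1]$, which gives ${c}({\omega}_1,{\omega}_2)=1-\max({\omega}_1,{\omega}_2)$ and, after the subtractions, the correlation function of the Cramér-von Mises statistic. The closed forms for the integrals of ${c}$ below are worked out by hand from this choice, and under the uniform hypothesis ${E}({N}{D}_{{N}}^{2})=1/6$, which the sketch checks by Monte Carlo:

```python
import random

def b(w1, w2):
    # b = c minus its single and double integrals, with c(w1, w2) = 1 - max(w1, w2);
    # here int_0^1 c(a, w) dw = (1 - a^2)/2 and the double integral is 1/3.
    return (1.0 - max(w1, w2)
            - 0.5 * (1.0 - w1 ** 2) - 0.5 * (1.0 - w2 ** 2)
            + 1.0 / 3.0)

def disc_sq(sample):
    # squared quadratic discrepancy: pair sum of b over the data-points
    n = len(sample)
    return sum(b(x, y) for x in sample for y in sample) / n ** 2

random.seed(2)
N, trials = 20, 500
# under the uniform hypothesis E[N * D_N^2] = 1/6 (Cramer-von Mises)
est = sum(N * disc_sq([random.random() for _ in range(N)])
          for _ in range(trials)) / trials
```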
From discrepancies to [GOF]{} statistics
----------------------------------------
Discrepancies are usually constructed in order to test the uniformity of sequences in a ${\mathit{dim}}$-dimensional hyper-cube $[0,1)^{{\mathit{dim}}}$. We are interested in more general cases, in which we want to test whether a sample ${\{{\omega}_{i}\}_{i=1}^{{N}}}$ of data-points is distributed in a space ${\Omega}$ following a given [PDF]{} ${P}$. We will assume that there exists an invertible mapping ${\varphi}$ which maps these data-points onto points ${\varphi}({\omega}_{i})\in[0,1)^{{\mathit{dim}}}$ and for which the determinant ${J}$ of the Jacobian matrix is known. The hypothesis dictates that the mapped points are distributed in the hyper-cube following $({P}\circ{\varphi}^{-1})/({J}\circ{\varphi}^{-1})$. We will denote this [PDF]{} by ${P}$ itself from now on, the sample of mapped data-points by ${\{{\omega}_{i}\}_{i=1}^{{N}}}$ and the hyper-cube by ${\Omega}$.
We want to use the idea, introduced before, to analyse a data sample ${\{{\omega}_{i}\}_{i=1}^{{N}}}$ by looking at ${\langle{f}\rangle_{{N}}}$ for different functions ${f}$. We will just have to keep in mind that, if ${\{{\omega}_{i}\}_{i=1}^{{N}}}$ is distributed following ${P}$, then $${\langle{f}\rangle_{{N}}} \to {\langle{f}{P}\rangle}
\quad\textrm{for ${N}\to\infty$}
\;\;,$$ where ‘${f}{P}$’ denotes point-wise multiplication. This and the definition of the discrepancies lead us to define the statistic $${T}_{{N}} = \bigg(\;
\int_{{\mathcal{H}}}| {\langle{f}{Q}\rangle_{{N}}}
- {\langle{f}{Q}{P}\rangle}|^{r}\,{\mu}(d{f})
\;\bigg)^{1/{r}}
\;\;,$$ where we inserted the function ${Q}$ for flexibility. It could be absorbed in the definition of ${\mu}$, but we prefer this formulation, in which we can stick to known examples for ${\mu}$. We will see later on that the ideal choice for ${Q}$ is $${Q}= 1/\sqrt{{P}}
\;\;.$$ We want to focus on the quadratic discrepancies for which ${\mu}$ is Gaussian from now on. Like in [@vanHameren2001], we shall define the statistic itself as an average case complexity, and not as a square-root of an average: $$\begin{aligned}
{T}_{{N}}
&=& {N}\int_{{\mathcal{H}}}| {\langle{f}{Q}\rangle_{{N}}}
- {\langle{f}{Q}{P}\rangle}|^2
\,{\mu}(d{f})
\label{defgof}\\
&=& \frac{1}{{N}}\sum_{i,j=1}^{{N}}{Q}{c}({\omega}_{i},{\omega}_{j})
- 2\int_{{\Omega}}\bigg(\;\sum_{i=1}^{{N}}
{Q}{c}({\omega}_{i},{\omega})\;\bigg){P}({\omega})d{\omega}\nonumber\\ &&\hspace{130pt}
+ {N}\int_{{\Omega}}\!\int_{{\Omega}}
{Q}{c}({\omega},\eta)\,{P}({\omega}){P}(\eta)d{\omega}d\eta
\;\;,
\label{defgof1}\end{aligned}$$ where $${Q}{c}({\omega}_{1},{\omega}_{2})
= {Q}({\omega}_{1}){Q}({\omega}_{2})\,{c}({\omega}_{1},{\omega}_{2})
\;\;.$$ The reason for the extra factor ${N}$ becomes clear when we calculate the expectation value of ${T}_{{N}}$. Assuming that the data-points are distributed independently following ${P}$, it is given by $${E}({T}_{{N}})
= \int_{{\Omega}}{Q}{c}({\omega},{\omega})\,{P}({\omega})d{\omega}- \int_{{\Omega}}\!\int_{{\Omega}}
{Q}{c}({\omega}_1,{\omega}_2)\,{P}({\omega}_1){P}({\omega}_2)d{\omega}_1d{\omega}_2
\;\;.$$ So it is independent of ${N}$ and the statistic is not biased. In order to write down the variance, we shorten the notation such that the expectation value can be written as $${E}({T}_{{N}})
= {\langle{Q}{c}_{1,1}{P}_{1}\rangle}
- {\langle{Q}{c}_{1,2}{P}_{1}{P}_{2}\rangle}
\;\;,
\label{expectationvalue}$$ and the variance is given by $$\begin{aligned}
{V}({T}_{{N}})
= \bigg(1-\frac{2}{{N}}\bigg)\bigg(
&&{\langle\,( {Q}{c}_{1,2}{Q}{c}_{1,2}
+{Q}{c}_{1,1}{Q}{c}_{1,2}) {P}_{1}{P}_{2}\,\rangle}
\nonumber\\
\quad&-&{\langle\,( {Q}{c}_{1,1}{Q}{c}_{2,3}
+4{Q}{c}_{1,2}{Q}{c}_{2,3})
{P}_{1}{P}_{2}{P}_{3}\,\rangle}
\nonumber\\
\quad&+& 3{\langle\,{Q}{c}_{1,2}{Q}{c}_{3,4}
{P}_{1}{P}_{2}{P}_{3}{P}_{4}\,\rangle}
\hspace{70pt}\bigg)
\;\;.
\label{variance}\end{aligned}$$
Notice that the formulation with Gaussian measures on the function class corresponds to a natural interpretation of the average of a square: given a sequence ${({u}_{n})_{n=1}^{{M}}}$ of functions, a sequence ${({\sigma}_{n}^2)_{n=1}^{{M}}}$ of positive weights and a linear operation ${L}$, we have $$\sum_{n=1}^{{M}}{\sigma}_{n}^2{L}({u}_{n})^2
= \int{L}\bigg(\,\sum_{n=1}^{{M}}x_{n}{u}_{n}\,\bigg)^2
\,\exp\bigg(-\sum_{n=1}^{{M}}\frac{x_{n}^2}{2{\sigma}_{n}^2}
\bigg)\,\prod_{n=1}^{{M}}\frac{dx_i}{\sqrt{2\pi{\sigma}_{n}^2}}
\;\;,$$ where the $x_{n}$-integrals run from $-\infty$ to $\infty$. So the square averaged over the sequence ${({u}_{n})_{n=1}^{{M}}}$ and weighted with ${({\sigma}_{n}^2)_{n=1}^{{M}}}$ is equal to the square averaged over the class of functions that can be written as linear combination of ${({u}_{n})_{n=1}^{{M}}}$ measured with Gaussian weights with widths ${({\sigma}_{n})_{n=1}^{{M}}}$. In the formulation of the statistic in terms of the two-point function this means that ${({u}_{n},{\sigma}_{n}^2)_{n=1}^{{M}}}$ gives its spectral decomposition: $${c}({\omega}_{1},{\omega}_{2})
= \sum_{n=1}^{{M}}{\sigma}_{n}^2{u}_{n}({\omega}_{1}){u}_{n}({\omega}_{2})
\;\;.
\label{spectral}$$ The sequence ${({u}_{n})_{n=1}^{{M}}}$ usually consists of an orthonormal basis, and several examples of decompositions ${({u}_{n},{\sigma}_{n}^2)_{n=1}^{{M}}}$ can be found in [@vanHameren2001], including cases with ${M}=\infty$. One can also find the famous $\chi^2$-statistic interpreted in this way there, with ${({u}_{n})_{n=1}^{{M}}}$ a set of indicator functions of non-overlapping subsets of ${\Omega}$, and ${\sigma}_{n}^2=1/{\langle{u}_{n}\rangle}$.
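This $\chi^2$ interpretation is easy to check numerically: with indicator functions ${u}_{n}$ of $B$ equal bins on $[0,1)$, weights ${\sigma}_{n}^2=1/{\langle{u}_{n}\rangle}=B$, ${P}=1$ and the bin means subtracted, the two-point function collapses to ${c}({\omega}_1,{\omega}_2)=B\,\delta_{\mathrm{bin}({\omega}_1),\mathrm{bin}({\omega}_2)}-1$, whose pair sum divided by ${N}$ is the classical binned statistic. A sketch verifying the identity (bin count and sample are arbitrary):

```python
import random

def chi2_classical(sample, B):
    # classical binned chi^2 against the uniform distribution on [0,1)
    n = len(sample)
    counts = [0] * B
    for x in sample:
        counts[min(int(x * B), B - 1)] += 1
    return sum((c - n / B) ** 2 / (n / B) for c in counts)

def chi2_pairsum(sample, B):
    # quadratic-statistic form: (1/N) sum_{i,j} c(w_i, w_j)
    # with c(w1, w2) = B * delta_{bin(w1), bin(w2)} - 1
    n = len(sample)
    bins = [min(int(x * B), B - 1) for x in sample]
    return sum(B * (b1 == b2) - 1.0 for b1 in bins for b2 in bins) / n

random.seed(4)
sample = [random.random() for _ in range(200)]
```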
A closer look at formula (\[defgof1\]) for the [GOF]{} statistic reveals that it is highly impractical for the estimation of the [TD]{} with the Monte Carlo method, firstly because it is quadratic in the number of data-points and secondly because a ${\mathit{dim}}$-dimensional integral has to be calculated for each data sample.[^3] One such integral evaluation can be performed within an acceptable time-scale using Monte Carlo integration techniques, by generating integration-points ${\omega}$ distributed following ${P}$ and calculating the average of $\sum_{i=1}^{{N}}{Q}{c}({\omega}_{i},{\omega})$. In order to make an estimate of the [TD]{} with a histogram, however, one would have to calculate on the order of a thousand such integrals.
Fortunately, the precise definition of the statistic, or more explicitly the spectral decomposition of the two-point function, can be chosen such that the asymptotic [TD]{} ${\mathcal{P}}_{\infty}$ becomes Gaussian for ${\mathit{dim}}\to\infty$, as we will see in the next section. This indicates that, for large ${\mathit{dim}}$, ${\mathcal{P}}_{\infty}$ only depends on the expectation value and the variance of the statistic. In [section \[examples\]]{}, we will see an explicit example for which ${P}$ influences ${\mathcal{P}}_{\infty}$ essentially only through the expectation value and the variance for any ${\mathit{dim}}>1$, even before ${\mathcal{P}}_{\infty}$ looks like a Gaussian. So instead of thousands of ${\mathit{dim}}$-dimensional integrals for a histogram, one only has to calculate a ${\mathit{dim}}$-, two $2{\mathit{dim}}$-, a $3{\mathit{dim}}$- and a $4{\mathit{dim}}$-dimensional integral for the expectation value (\[expectationvalue\]) and the variance (\[variance\]).
Calculation of the asymptotic test-distribution
===============================================
We approach the calculation of ${\mathcal{P}}_{{N}}$ through its moment generating function $${G}_{{N}}(z) = {E}(\,e^{z{T}_{{N}}}\,)
\;\;.$$ ${\mathcal{P}}_{{N}}$ can be recovered from ${G}_{{N}}$ by the inverse Laplace transformation $${\mathcal{P}}_{{N}}(t)
= \int_{\Gamma}\frac{dz}{2\pi{\mathrm{i}}}
\,\exp(\,{S}(t;z)\,)
\quad,\quad
{S}(t;z) = \log{G}_{{N}}(z) - tz
\;\;,
\label{Laplace}$$ where $\Gamma$ runs from $-{\mathrm{i}}\infty$ to ${\mathrm{i}}\infty$ on the left side of any singularity of ${G}_{{N}}$. The analysis of ${G}_{{N}}$ can be simplified by the observation that the statistic (\[defgof\]) does not change if we replace $${u}_{n} \leftarrow {u}_{n}
- \frac{1}{{Q}}{\langle{u}_{n}{Q}{P}\rangle}$$ in the spectral decomposition, since ${L}({u}_{n})={\langle{u}_{n}{Q}\rangle_{{N}}}-{\langle{u}_{n}{Q}{P}\rangle}$ is invariant (remember that ${\langle{P}\rangle}=1$). In other words, (\[defgof\]) with ${\mu}$ Gaussian and two-point function (\[spectral\]) is equivalent to $${T}_{{N}}
= {N}\int_{{\mathcal{H}}}{\langle{f}{Q}\rangle_{{N}}}^2\,{\mu}(d{f})
= \frac{1}{{N}}\sum_{i,j=1}^{{N}}{Q}({\omega}_{i}){Q}({\omega}_{j})
{c}({\omega}_{i},{\omega}_{j})
\;\;,
\label{defgof2}$$ with ${\mu}$ Gaussian and two-point function $${c}({\omega}_{1},{\omega}_{2})
= \sum_{n=1}^{{M}}{\sigma}_{n}^2
\bigg({u}_{n}({\omega}_{1})
- \frac{{\langle{u}_{n}{Q}{P}\rangle}}{{Q}({\omega}_{1})}\bigg)
\bigg({u}_{n}({\omega}_{2})
- \frac{{\langle{u}_{n}{Q}{P}\rangle}}{{Q}({\omega}_{2})}\bigg)
\;\;.
\label{twopoint2}$$ With this decomposition, we can put ${\langle{f}{Q}{P}\rangle}$ equal to zero under the measure. We continue in the spirit of [@vanHamerenKleiss; @vanHameren2001], and write $${T}_{{N}} = \int_{{\Omega}}\!\int_{{\Omega}}
{c}({\omega},\eta)
{\delta}_{{N}}({\omega}){\delta}_{{N}}(\eta)\,d{\omega}d\eta
\;\;,$$ where $${\delta}_{{N}}({\omega})
= \frac{{Q}({\omega})}{\sqrt{{N}}}\sum_{i=1}^{{N}}{\delta}({\omega}_{i}-{\omega})
\;\;,$$ so that, using Gaussian integration rules, we find that $$e^{z{T}_{{N}}}
=\int_{{\mathcal{H}}}e^{\sqrt{2z}\,{\langle{f}{\delta}_{{N}}\rangle}}
\,{\mu}(d{f})
=\int_{{\mathcal{H}}}\bigg(\;\prod_{i=1}^{{N}}
e^{\sqrt{2z/{N}}\,{f}({\omega}_{i}){Q}({\omega}_{i})}
\;\bigg){\mu}(d{f})
\;\;,$$ and $${G}_{{N}}(z) = {E}(\,e^{z{T}_{{N}}}\,)
=\int_{{\mathcal{H}}}
{\langle\,{P}\,e^{\sqrt{2z/{N}}\,{f}{Q}}\,\rangle}^{{N}}
\,{\mu}(d{f})
\;\;.$$ We shall restrict ourselves to the asymptotic distribution for ${N}\to\infty$ from now on. We find $${G}_{\infty}(z)
=\lim_{{N}\to\infty}{G}_{{N}}(z)
=\int_{{\mathcal{H}}}
e^{z{\langle{f}^2{Q}^2{P}\rangle}}\,{\mu}(d{f})
\;\;,$$ where we used the fact that ${\langle{f}{Q}{P}\rangle}$ can be taken equal to zero under the measure. Substituting $${f}({\omega})
= \sum_{n=1}^{{M}}x_{n}\bigg({u}_{n}({\omega})
- \frac{{\langle{u}_{n}{Q}{P}\rangle}}{{Q}({\omega})}\bigg)
\quad\textrm{and}\quad
{\mu}(d{f})
= \prod_{n=1}^{{M}}e^{-\frac{x^2}{2{\sigma}_{n}^2}}
\frac{dx_{n}}{\sqrt{2\pi{\sigma}_{n}^2}}
\;\;$$ and applying well known Gaussian integration rules, we find $${G}_{\infty}(z)
= \det(1-2z{A})^{-1/2}
\;\;,
\label{Geninfty}$$ with $${A}_{n,m}
= {\sigma}_{n}{\sigma}_{m}{\langle{u}_{n}{u}_{m}{Q}^2{P}\rangle}
-{\sigma}_{n}{\langle{u}_{n}{Q}{P}\rangle}
{\sigma}_{m}{\langle{u}_{m}{Q}{P}\rangle}
\;\;.$$ The asymptotic generating function is now determined up to the positions of its singularities, which can directly be written in terms of the eigenvalues ${(\lambda_{n})_{n=1}^{{M}}}$ of ${A}$, since $${G}_{\infty}(z)
= \bigg(\;\prod_{n=1}^{{M}}(1-2z\lambda_{n})\;\bigg)^{-1/2}
\;\;.
\label{Geninfty1}$$ Another way to see how the eigenvalues affect the shape of the [TD]{} is by considering the cumulants, which are generated by the logarithm of the generating function: $$\frac{d^k\log{G}_{\infty}}{dz^k}(z=0)
= 2^{k-1}(k-1)!\sum_{n=1}^{{M}}\lambda_{n}^k
\;\;.$$ If ${A}$ consisted only of a diagonal term plus a dyadic term, then access to its eigenvalues would be relatively easy. Bearing in mind that the functions ${u}_{n}$ are orthonormal, this can be achieved by the choice $${Q}= 1/\sqrt{{P}}
\;\;,$$ so that $${A}_{n,m}
= {\sigma}_{n}^2\delta_{n,m}
-{\sigma}_{n}{\langle{u}_{n}\sqrt{{P}}\,\rangle}
{\sigma}_{m}{\langle{u}_{m}\sqrt{{P}}\,\rangle}
\;\;.
\label{Amat}$$
Gaussian limits
---------------
Without loss of generality, we may assume that the weights ${\sigma}_{n}$ are ordered from large to small. Then, it is not difficult to see [@vanHameren2001] that the eigenvalues ${(\lambda_{n})_{n=1}^{{M}}}$ of the matrix (\[Amat\]) satisfy $${\sigma}_{1}\geq\lambda_{1}\geq{\sigma}_{2}\geq\lambda_{2}\geq
{\sigma}_{3}\geq\lambda_{3}\geq\cdots
\geq{\sigma}_{{M}-1}\geq\lambda_{{M}-1}
\geq{\sigma}_{{M}}\geq\lambda_{{M}}
\;\;.
\label{ordering}$$ It is important to realize that (\[ordering\]) holds whatever ${P}$ is. The influence ${P}$ may have on the shape of ${\mathcal{P}}_{\infty}$ is restricted to the freedom each of the eigenvalues $\lambda_{n}$ has to change value between ${\sigma}_{n}$ and ${\sigma}_{n+1}$. The smallest eigenvalue is non-negative since the matrix ${A}$ is positive: for any vector $x$ we have $$\begin{aligned}
\sum_{n,m=1}^{{M}}{A}_{n,m}x_{n}x_{m}
&=& \sum_{n=1}^{{M}}{\sigma}_{n}^2x_{n}^2
-\bigg(\,\sum_{n=1}^{{M}}{\sigma}_{n}
{\langle{u}_{n}\sqrt{{P}}\,\rangle}x_{n}\,\bigg)^2
\nonumber\\
&\geq& \sum_{n=1}^{{M}}{\sigma}_{n}^2x_{n}^2
-\bigg(\,\sum_{n=1}^{{M}}{\sigma}_{n}^2x_{n}^2\,\bigg)
\bigg(\,\sum_{n=1}^{{M}}{\langle{u}_{n}\sqrt{{P}}\,\rangle}^2\,\bigg)
\geq 0
\;\;,
\nonumber\end{aligned}$$ where the first inequality is by Schwarz, and the second one is based on the assumption that ${({u}_{n})_{n=1}^{{M}}}$ is an orthonormal (but not necessarily complete) set and ${\langle{P}\rangle} = 1$.
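The interlacing (\[ordering\]) and this positivity argument can be checked numerically for a truncated basis: the matrix (\[Amat\]) is a rank-one downdate of a diagonal matrix, so its eigenvalues interlace the diagonal entries ${\sigma}_{n}^{2}$. The sketch below uses the Fourier basis of the example section and an arbitrary smooth density, both illustrative choices, with the mode number $M$ truncated:

```python
import numpy as np

# midpoint grid on [0,1) for the integrals <...>
n_grid = 20000
x = (np.arange(n_grid) + 0.5) / n_grid
P = 1.0 + 0.5 * np.cos(2.0 * np.pi * x)   # an illustrative normalized density
sqrtP = np.sqrt(P)

def u(n):
    # orthonormal Fourier basis: 1, sqrt(2) sin, sqrt(2) cos, ...
    if n == 0:
        return np.ones_like(x)
    k = (n + 1) // 2
    f = np.sin if n % 2 == 1 else np.cos
    return np.sqrt(2.0) * f(2.0 * np.pi * k * x)

M = 7
sigma = np.array([1.0] + [1.0 / ((n + 1) // 2) for n in range(1, M)])
g = np.array([np.mean(u(n) * sqrtP) for n in range(M)])   # <u_n sqrt(P)>

# A = diag(sigma^2) - (sigma g)(sigma g)^T, cf. Eq. (Amat)
A = np.diag(sigma ** 2) - np.outer(sigma * g, sigma * g)
lam = np.sort(np.linalg.eigvalsh(A))[::-1]
s2 = np.sort(sigma ** 2)[::-1]
```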
For the case that ${P}=1$, it has been shown in [@vanHamerenKleissHoogland] that ${\mathcal{P}}_{\infty}$ becomes Gaussian if and only if there is a limit for the statistic such that $$\frac{\lambda_{1}^2}{\sum_{n=1}^{{M}}\lambda_{n}^2} \to 0
\;\;.
\label{Gaussianlimit}$$ Typically, this limit may be ${\mathit{dim}}\to\infty$, as is shown in various examples. For simplicity, we assume that ${\sigma}_{1}={\sigma}_{2}$, which is actually the case in most examples in [@vanHamerenKleissHoogland]. Using this and (\[ordering\]), it is easy to see that, if (\[Gaussianlimit\]) holds, then also $\lambda_{1}^2/(\sum_{n=1}^{{M}}{\sigma}_{n}^2)\to 0$ and $\lambda_{1}^2/(\sum_{n=2}^{{M}}{\sigma}_{n}^2)\to 0$, and that the limit holds for any ${P}$. So we may conclude that whenever the spectral decomposition is chosen such that ${\sigma}_{1}={\sigma}_{2}$ and there is a limit such that $$\frac{{\sigma}_{1}^2}{\sum_{n=1}^{{M}}{\sigma}_{n}^2} \to 0
\;\;,$$ then ${\mathcal{P}}_{\infty}$ becomes Gaussian in this limit.
Example\[examples\]
===================
The following example of a [GOF]{} statistic in many dimensions is based on the diaphony [@HellakalekNiederreiter; @vanHameren2001], and has the following spectral decomposition. The basis is the Fourier basis in ${\mathit{dim}}$ dimensions: $${u}_{\vec{n}}({\omega}) = \prod_{k=1}^{{\mathit{dim}}}{u}_{n_k}(\,X_{k}({\omega})\,)
\quad,\quad
n_{k} = 0,1,2,\ldots
\quad,\quad
k=1,2,\ldots,{\mathit{dim}}\;\;,$$ with $${u}_{0}(x) = 1
\quad,\quad
{u}_{2n-1}(x) = \sqrt{2}\sin(2\pi nx)
\quad,\quad
{u}_{2n}(x) = \sqrt{2}\cos(2\pi nx)
\;\;,$$ for $n$ from $1$ to $\infty$. The corresponding weights are given by $${\sigma}_{\vec{n}} = \prod_{k=1}^{{\mathit{dim}}}{\sigma}_{n_k}
\qquad\textrm{with}\qquad
{\sigma}_{0} = 1
\quad,\quad
{\sigma}_{2n-1} = {\sigma}_{2n} = \frac{1}{n}
\;\;.$$ The two-point function is equal to $${c}({\omega}_{1},{\omega}_{2})
= \sum_{\vec{n}}
{\sigma}_{\vec{n}}^2{u}_{\vec{n}}({\omega}_{1}){u}_{\vec{n}}({\omega}_{2})
= \prod_{k=1}^{{\mathit{dim}}}{c}_{1}(\,X_k({\omega}_{1})-X_k({\omega}_{2})\,)
\;\;,$$ where $\sum_{\vec{n}}=\sum_{n_{1}=0}^{\infty}\sum_{n_{2}=0}^{\infty}
\cdots\sum_{n_{{\mathit{dim}}}=0}^{\infty}$ and $${c}_{1}(x)
= 1+\frac{\pi^2}{3}-2\pi^2(x\,\mod\,1)(1-x\,\mod\,1)
The only important difference from the two-point function of the diaphony is that there the constant mode, the ${\mathit{dim}}$-dimensional basis function which is equal to $1$, is missing. This makes sense since the diaphony is constructed in order to test the uniform distribution and the contribution of the constant mode cancels in (\[defDisc\]). The advantage is that the diaphony is directly given by the sum of all two-point correlations between the data-points and no integrals of two-point functions have to be calculated. Notice that this cancellation also appears in (\[Amat\]): the first row and column of the matrix ${A}$ consist of only zeros if ${P}=1$, since all modes except the constant mode have zero integral. For a general [PDF]{} these cancellations also exist, but not for a single mode, and hence are not of practical use. For example, ${f}=1/{Q}$ cancels in (\[defgof\]).
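The expectation value of the statistic built on this two-point function can be checked by brute force. For ${P}=1$ (so ${Q}=1$) every one-dimensional integral of ${c}_{1}$ equals $1$, the integral terms of the statistic reduce to constants, and one is left with ${T}_{{N}}=\frac{1}{{N}}\sum_{i,j}\prod_{k}{c}_{1}(\Delta x_{k})-{N}$, whose mean must equal $(1+\pi^{2}/3)^{{\mathit{dim}}}-1$, the value ${E}_{\mathrm{uniform}}$ quoted below. A Monte Carlo sketch with illustrative sizes:

```python
import math
import random

def c1(x):
    x = x % 1.0
    return 1.0 + math.pi ** 2 / 3.0 - 2.0 * math.pi ** 2 * x * (1.0 - x)

def statistic(points, dim):
    # T_N for P = 1: pair sum of prod_k c1(x_k - y_k) over all pairs,
    # divided by N, minus N (the constant integral terms)
    n = len(points)
    s = 0.0
    for p in points:
        for q in points:
            prod = 1.0
            for k in range(dim):
                prod *= c1(p[k] - q[k])
            s += prod
    return s / n - n

random.seed(3)
dim, N, trials = 2, 15, 300
mean_T = sum(
    statistic([[random.random() for _ in range(dim)] for _ in range(N)], dim)
    for _ in range(trials)) / trials
E_uniform = (1.0 + math.pi ** 2 / 3.0) ** dim - 1.0
```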
It is useful to introduce the function ${\rho}$ which counts the number of weights with the same value: $${\rho}(s)
= \sum_{\vec{n}}\delta_{s,1/{\sigma}_{\vec{n}}}
\;\;.$$ The numbers ${\rho}(s)$ increase as a function of ${\mathit{dim}}$. Using ${\rho}$, (\[Geninfty1\]) and (\[ordering\]), the generating function can be written as $${G}_{\infty}(z)
= \bigg(\;\prod_{s=1}^{\infty}(1-2z/s^2)^{{\rho}(s)-1}
(1-2z\lambda_{s})\;\bigg)^{-1/2}
\;\;,$$ where the numbers $\lambda_{s}$ depend on the [PDF]{} under consideration, but are, following (\[ordering\]), restricted by the relation $$1/s^2>\lambda_{s}\geq 1/(s+1)^2
\;\;.$$ In order to find the probability density ${\mathcal{P}}_{\infty}$, the inverse Laplace transformation (\[Laplace\]) has to be performed on ${G}_{\infty}$. The logarithm of the product can best be evaluated as described in [@JamesHooglandKleiss], by extracting the first and the second order terms in $z$: $$\log{G}_{\infty}(z)
= {E}z + {{\textstyle\frac{1}{2}}}{V}z^2
+ \sum_{s=1}^{\infty}(\,g_{s}(z)-g_{s}'(0)z-{{\textstyle\frac{1}{2}}}g_{s}''(0)z^2\,)
\;\;,
\label{logGen}$$ where $g_{s}(z)=-{{\textstyle\frac{{\rho}(s)-1}{2}}}\log(1-2z/s^2)
-{{\textstyle\frac{1}{2}}}\log(1-2z\lambda_{s})$, and ${E}$ and ${V}$ are the expectation value and the variance of the statistic. For the case that ${P}=1$, so that $\lambda_{s}=1/(s+1)^2$, they can be calculated directly and are given by $${E}_{\mathrm{uniform}} = \bigg(1+\frac{\pi^2}{3}\,\bigg)^{{\mathit{dim}}}-1
\quad,\quad
{V}_{\mathrm{uniform}} = 2\bigg(1 + \frac{\pi^4}{45}\,\bigg)^{{\mathit{dim}}}-2
\;\;.$$ We want to study the influence of ${P}$ on ${\mathcal{P}}_{\infty}$ by generating the eigenvalues $\lambda_{s}$ at random, uniformly distributed within their borders, and plotting the result. First, however, we need to find out how many terms in the infinite sum of (\[logGen\]) have to be taken into account in order to obtain a trustworthy result. This can be done at ${\mathit{dim}}=1$, since we know already that ${\mathcal{P}}_{\infty}$ will tend to look like a Gaussian for larger values of ${\mathit{dim}}$ so that the sum must become less important. Furthermore, there is the advantage that at ${\mathit{dim}}=1$ and ${P}=1$ there exists a simple formula for the generating function: $${G}_{\infty}^{\mathrm{uniform}}(z)
= \frac{\sqrt{2\pi^2 z}}{\sin\sqrt{2\pi^2 z}}
\;\;.
\label{Gen1dim}$$ In [Figure \[fig01\]]{}, we present the result with this formula and with (\[logGen\]) using only one term. With $10$ terms, the difference between the curves is invisible, and this is the number we use from now on.
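Formula (\[Gen1dim\]) can be cross-checked numerically: the first two $z$-derivatives of $\log{G}_{\infty}^{\mathrm{uniform}}$ at $z=0$ must reproduce ${E}_{\mathrm{uniform}}=\pi^{2}/3$ and ${V}_{\mathrm{uniform}}=2\pi^{4}/45$ at ${\mathit{dim}}=1$. A finite-difference sketch:

```python
import cmath
import math

def log_G_uniform(z):
    # log of Eq. (Gen1dim); cmath handles z < 0, where the sqrt is imaginary
    w = cmath.sqrt(2.0 * math.pi ** 2 * z)
    return cmath.log(w / cmath.sin(w))

h = 1e-3
E_num = (log_G_uniform(h) - log_G_uniform(-h)).real / (2.0 * h)
V_num = (log_G_uniform(h) + log_G_uniform(-h)).real / h ** 2   # log G(0) = 0
```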
Results for ${\mathit{dim}}=2$ are depicted in [Figure \[fig02\]]{}. As expected, the curves look more ‘Gaussian’ than the $1$-dimensional curve. The crosses represent the case ${P}=1$, and the two continuous curves represent two cases with ‘typical’ sets of random eigenvalues. The curves are clearly different, but if we go over to standardized variables, that is, if we plot $$\sqrt{{V}}\,{\mathcal{P}}_{\infty}(\sqrt{{V}}\,t+{E})
\;\;,$$ so that the expectation value is equal to $0$ and the variance is equal to $1$, we find [Figure \[fig03\]]{}, and we may conclude that the curves almost only depend on the expectation value and the variance. Again, we know that this behavior becomes only stronger for higher values of ${\mathit{dim}}$ because of the Gaussian limit.
We conclude that ${\mathcal{P}}_{\infty}$ for general ${P}$ can, to satisfying accuracy, be approximated by $$\sqrt{{V}_{\mathrm{uniform}}/{V}}
\;{\mathcal{P}}_{\infty}^{\mathrm{uniform}}\Big(
\sqrt{{V}_{\mathrm{uniform}}/{V}}\,(t-{E})+{E}_{\mathrm{uniform}}\Big)
\;\;,$$ where ${\mathcal{P}}_{\infty}^{\mathrm{uniform}}$, ${E}_{\mathrm{uniform}}$ and ${V}_{\mathrm{uniform}}$ are the asymptotic test-distribution, the expectation value and the variance for the case that ${P}=1$.
Conclusion
==========
We have seen how to construct practical [GOF]{} statistics to test the hypothesis that a sample of data is distributed following a given [PDF]{} in many dimensions, for which the asymptotic test-distribution in the limit of an infinite sample size becomes Gaussian in the limit of an infinite number of dimensions. Furthermore, we have seen an explicit example of such a statistic, for which the asymptotic test-distribution depends on the [PDF]{} essentially only through the expectation value and the variance of the statistic, for any number of dimensions larger than one.
### Acknowledgment {#acknowledgment .unnumbered}
This research has been supported by a Marie Curie Fellowship of the European Community program “Improving Human Research Potential and the Socio-economic Knowledge base” under contract number HPMD-CT-2001-00105, and the Deutsche Forschungsgemeinschaft through the Graduiertenkolleg ‘Eichtheorien’ at the Johannes-Gutenberg-Universität, Mainz.
B. Aslan and G. Zech, [*Comparison of Different Goodness-of-Fit Tests*]{}, Durham 2002, Advanced statistical techniques in particle physics 166-175 ([http://www.ippp.dur.ac.uk/Workshops/02/statistics/proceedings.shtml]{}).

B. Aslan and G. Zech, [*A new class of binning free, multivariate goodness of fit tests: the energy tests*]{} ([http://arxiv.org/abs/hep-ex/0203010]{}).

R. Raja, [*A Measure of the Goodness of Fit in Unbinned Likelihood Fits; End of Bayesianism*]{}, eConf C030908:MOcT003, 2003 ([http://arxiv.org/abs/physics/0401133]{}).

K.J. Abraham, [*A New Technique for Sampling Multi-Modal Distributions*]{} ([http://arxiv.org/abs/physics/9903044]{}).

R.F. Tichy and M. Drmota, [*Sequences, Discrepancies and Applications*]{} (Springer, 1997).

H. Niederreiter, [*Random number generations and Quasi-Monte Carlo methods*]{} (SIAM 1992).

([http://www.mcqmc.org]{}).

A. van Hameren, R. Kleiss and J. Hoogland, [*Gaussian limits for discrepancies: I. Asymptotic results*]{}, [Comp. Phys. Comm. [**107**]{} (1997) 1-20]{} ([http://arxiv.org/abs/physics/9708014]{}).

A. van Hameren and R. Kleiss, [*Quantum field theory for discrepancies*]{}, [Nucl. Phys. [**B529**]{} \[PM\] (1998) 737-762]{} ([http://arxiv.org/abs/math-ph/9805008]{}).

P. Hellakalek and H. Niederreiter, [*The weighted spectral test: Diaphony*]{}, ACM Trans. Model. Comput. Simul. 8, No. 1 (1998), 43-60.

A. van Hameren, [*Loaded Dice in Monte Carlo: Importance sampling in phase space integration and probability distributions for discrepancies*]{}, PhD-thesis (Nijmegen, 2001) ([http://arxiv.org/abs/hep-ph/0101094]{}).

F. James, J. Hoogland and R. Kleiss, [*Multidimensional sampling for integration and simulation: measures, discrepancies and quasi-random numbers*]{}, [Comp. Phys. Comm. [**99**]{} (1997) 180-220]{} ([http://arxiv.org/abs/physics/9606309]{}).
[^1]: That is, for [*binning free*]{} tests, which we are considering.
[^2]: If ${E}({T}_{{N}})\neq{t_{\infty}}$ then the statistic is [*biased*]{}.
[^3]: The $2{\mathit{dim}}$-dimensional integral does not depend on the data sample, and has to be calculated only once.
---
abstract: 'The electronic transport in a system of two quantum rings side-coupled to a quantum wire is studied via a single-band tunneling tight-binding Hamiltonian. We derive analytical expressions for the conductance and spin polarization when the rings are threaded by magnetic fluxes in the presence of Rashba spin-orbit interaction. We show that by using the Fano and Dicke effects this system can be used as an efficient spin filter even for small spin-orbit interaction and small values of magnetic flux. We compare the spin-dependent polarization of this design with the polarization obtained with a single ring side-coupled to a quantum wire. As a main result, we find better spin-polarization capabilities as compared to the one-ring design.'
author:
- 'V.M. Apel'
- 'P. A. Orellana'
- 'M. Pacheco'
title: 'Fano and Dicke effects and spin polarization in a double Rashba-ring system side coupled to a quantum wire'
---
Introduction
============
Electronic transport through quantum ring structures has become the subject of active research in recent years. Interesting quantum interference phenomena have been predicted and measured in these mesoscopic systems in the presence of a magnetic flux, such as Aharonov-Bohm oscillations in the conductance, persistent currents[@Chandra; @Mailly; @Keyser] and Fano antiresonances [@damato; @pedro].
Recently, there has been much interest in understanding the manner in which the unique properties of nanostructures may be exploited in spintronic devices, which utilize the spin degree of freedom of the electron as the basis of their operation [@Datta; @Song; @Folk; @Mireles; @Mireles2; @Berciu]. A natural feature of these devices is the direct connection between their conductance and their quantum-mechanical transmission properties, which may allow their use as an all-electrical means for generating and detecting spin-polarized distributions of carriers. For instance, Song et al. [@Song] recently described how a spin filter may be realized in an open quantum-dot system, by exploiting the Fano resonances that occur in its transmission. For a quantum dot in which the spin degeneracy of the carriers is lifted, they showed that the Fano effect may be used as an effective means to generate spin polarization of transmitted carriers, and that electrical detection of the resulting polarization should be possible. This idea was extended to side-attached quantum rings. In Ref. () Shelykh et al. analyze the conductance of a one-dimensional Aharonov-Bohm (AB) quantum ring touching a quantum wire. They found that the period of the AB oscillations strongly depends on the chemical potential and the Rashba coupling parameter. The dependence of the conductance on the carrier’s energy reveals Fano antiresonances. On the other hand, Bruder et al. [@bruder] introduced a spin filter based on spin-resolved Fano resonances due to spin-split levels in a quantum ring side-coupled to a quantum wire. Spin-orbit coupling inside the quantum ring, together with external magnetic fields, induces spin splitting, and the Fano resonances due to the spin-split levels result in perfect or considerable suppression of the transport of either spin direction.
They found that the Coulomb interaction in the quantum ring enhances the spin-filter operation by widening the separation between dips in the conductance for different spins and by allowing perfect blocking for one spin direction and perfect transmission for the other.
![Schematic view of the two quantum ring attached to quantum wire.[]{data-label="fig1"}](fig1.eps){width="7cm"}
In this paper we study two quantum rings side-coupled to a quantum wire in the presence of a magnetic flux and Rashba spin-orbit interaction, as shown schematically in Fig. \[fig1\]. In a previous paper (ref ) we investigated the conductance and the persistent current of two mesoscopic quantum rings attached to a perfect quantum wire in the presence of a magnetic field. We showed that the system develops an oscillating band with resonances (perfect transmission) and antiresonances (perfect reflection). In addition, we found persistent-current magnification due to the Dicke effect in the rings when the magnetic-flux difference is an integer number of flux quanta. The Dicke effect in optics takes place in the spontaneous emission of a pair of atoms radiating a photon with a wavelength much larger than the separation between them. [@dicke] The luminescence spectrum is characterized by a narrow and a broad peak, associated with long- and short-lived states, respectively. Here, we show that by using the Fano and Dicke effects this system can be used as an efficient spin filter even for small spin-orbit interaction and small values of magnetic flux. We find that the spin polarization of this system is much more sensitive to the magnetic flux and the spin-orbit interaction than in the case with only one ring side-coupled to the quantum wire.
Model
=====
In the presence of the Rashba spin-orbit coupling and a magnetic flux $\Phi _{AB}$, the Hamiltonian for an isolated one-dimensional ring reads [@Meijer],
$$H=\hbar \Omega \left[ \left( -i\frac{\partial }{\partial \varphi }-\frac{%
\Phi _{AB}}{\Phi _{0}}+\frac{\omega _{so}}{\Omega }\sigma _{r}(\varphi
)\right) ^{2}-\frac{\omega _{so}^{2}}{4\Omega ^{2}}\right]$$
where $$\sigma _{r}(\varphi )=\cos (\varphi )\sigma _{x}+\sin (\varphi )\sigma _{y},$$ and $\sigma _{x}$, $\sigma _{y}$ and $\sigma _{z}$ are the Pauli matrices. Here $\hbar \Omega =\frac{\hbar ^{2}}{2ma^{2}}$ is the energy scale of a ring of radius $a$, and $\omega _{so}=\frac{\alpha _{so}}{\hbar a}$ is the frequency associated with the SO coupling. The spin-orbit coupling constant $\alpha_{so}$ depends implicitly on the strength of the surface electric field [@luo]. The energy spectrum of the above Hamiltonian is given by,
$$\varepsilon _{\mu n}=\hbar \Omega \left[ \left( n-\phi _{AB}+\frac{1}{2}-\mu
\frac{1}{2\cos \theta }\right) ^{2}-\frac{1}{4}\tan ^{2}\theta \right]$$
where $\theta =-\arctan (\omega _{so}/\Omega )$ and $\phi _{AB}=\frac{\Phi _{AB}}{\Phi _{0}}$ is the Aharonov-Bohm phase.
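The spectrum above is easy to explore numerically. The sketch below (illustrative only, not code from the paper; energies in units of $\hbar\Omega=1$) checks that without spin-orbit coupling the two spin branches coincide up to a relabeling of $n$, i.e. $\varepsilon _{-, n}=\varepsilon _{+, n+1}$, while a finite $\omega_{so}/\Omega$ splits them:

```python
import numpy as np

def ring_levels(n_values, phi_ab, so_ratio, mu):
    """Energy levels eps_{mu n} of the isolated ring, units of hbar*Omega.

    n_values : array of integer orbital indices
    phi_ab   : Aharonov-Bohm phase Phi_AB / Phi_0
    so_ratio : omega_so / Omega (dimensionless SO strength)
    mu       : spin index, +1 or -1
    """
    theta = -np.arctan(so_ratio)
    return ((n_values - phi_ab + 0.5 - mu / (2.0 * np.cos(theta))) ** 2
            - 0.25 * np.tan(theta) ** 2)

n = np.arange(-20, 21)
# Without SO coupling the mu = -1 branch is the mu = +1 branch shifted by one.
no_so_degenerate = np.allclose(ring_levels(n, 0.25, 0.0, -1)[:-1],
                               ring_levels(n, 0.25, 0.0, 1)[1:])
```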
The eigenstates are given by the following wave functions,
$$\Psi _{n}^{+}(\varphi )=e^{in\varphi }\left(
\begin{array}{c}
\cos (\frac{\theta }{2}) \\
e^{i\varphi }\sin (\frac{\theta }{2})%
\end{array}%
\right)$$
$$\Psi _{n}^{-}(\varphi )=e^{in\varphi }\left(
\begin{array}{c}
\sin (\frac{\theta }{2}) \\
-e^{i\varphi }\cos (\frac{\theta }{2})%
\end{array}%
\right)$$
The second-quantized Hamiltonian of the quantum wire-quantum ring device with a magnetic flux and spin-orbit interaction can be written as,
$$H_{T}=\sum_{i\mu }\varepsilon _{i}c_{\mu ,i}^{\dag }c_{\mu
,i}+v\sum_{\left\langle ij\right\rangle \mu }\left( c_{\mu
,i}^{\dag }c_{\mu ,j}+h.c\right) +\sum_{\alpha ,n,\mu }\varepsilon
_{\mu ,n}^{\alpha }d_{\mu ,n}^{\alpha \dag }d_{\mu ,n}^{\alpha
}+V_{0}\sum_{\mu ,n,\alpha }(d_{\mu ,n}^{\alpha \dag }c_{\mu
0}+h.c)$$
The operator $c_{j\mu }^{\dag }$ creates an electron at site $j$ of the wire with spin index $\mu $, and $d_{n\mu }^{\alpha \dag }$ creates an electron in level $n$ of ring $\alpha $ with spin index $\mu$. The wire site energy is taken equal to zero and the hopping energies for the wire and rings are taken to be equal to $v$, whereas $V_{0}$ couples both systems.
Within the described model the conductance can be calculated by means of a Dyson equation for the Green’s function.
$$G_{\mu 0}^{\alpha }=\frac{i}{2v\sqrt{1-\frac{\omega ^{2}}{4v^{2}}}}\frac{1}{%
1-i\gamma \sum\limits_{\beta }A_{\mu }^{\beta }(\omega )}$$
where $\gamma =\frac{V_{0}^{2}}{2v\sqrt{1-\frac{\omega ^{2}}{4v^{2}}}}$ and
$$A_{\mu }^{\alpha }(\omega )=\sum_{n=-\infty }^{\infty }g_{n\mu }^{\alpha
}=\sum_{n=-\infty }^{\infty }\frac{1}{\omega -\varepsilon _{\mu n}^{\alpha }}$$
and, $$g_{n\mu }^{\alpha }=\frac{1}{\omega -\varepsilon _{\mu n}^{\alpha }}.$$
Here $g_{n\mu }^{\alpha}$ is the Green’s function of the isolated ring $\alpha$.
The conductance of the system can be calculated using the Landauer formula.
$$\mathcal{G}_{\mu }=\frac{e^{2}}{h}T_{\mu }\left( \omega =E_{F}\right)$$
where $T_{\mu }$ is the transmission probability. In the linear-response approach it can be written in terms of the Green’s function at the contact as:
$$T_{\mu }\left( \omega \right) =\Gamma \left( \omega \right) \Im m \left[
G_{\mu 0}^{\alpha }\left( \omega \right) \right] =\frac{1}{1+\gamma ^{2}%
\left[ \sum\limits_{\beta }A_{\mu }^{\beta }(\omega )\right] ^{2}},$$
where $\Gamma(\omega)=2v\sqrt{1-\frac{\omega^{2}}{4v^{2}}}$.
Following ref. we introduce the weighted spin polarization as
$$P_{\mu }=\frac{\left\vert T_{+}-T_{-}\right\vert }{\left\vert
T_{+}+T_{-}\right\vert }\,T_{\mu }\ ,\qquad \mu =\pm . \label{wsp}$$
Notice that this definition takes into account not only the relative fraction of one of the spins, but also the contribution of those spins to the electric current. In other words, we require not only that the first factor on the right-hand side of (\[wsp\]) be of order unity, but also that the transmission probability $T_{\mu}$ itself be appreciable.
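As a minimal illustration (the helper name is ours, not from the paper), Eq. (\[wsp\]) can be evaluated directly; the spin imbalance is weighted by the transmission itself, so a strongly polarized but weakly transmitting channel scores low:

```python
# Illustrative helper evaluating the weighted spin polarization of
# Eq. (wsp): P_mu = |T_+ - T_-| / |T_+ + T_-| * T_mu.

def weighted_spin_polarization(t_plus, t_minus):
    """Return (P_+, P_-) for given transmission probabilities T_+ and T_-."""
    imbalance = abs(t_plus - t_minus) / abs(t_plus + t_minus)
    return imbalance * t_plus, imbalance * t_minus

# Perfect filter: spin-up fully transmitted, spin-down fully blocked.
p_up, p_down = weighted_spin_polarization(1.0, 0.0)
```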
Results
=======
In what follows we present results for the conductance and spin polarization of a double-ring system of radius $a=120\,nm$, with the rings coupled to each other through the quantum wire. For this radius the energy scale is $\hbar \Omega=40\,\mu eV$. We consider only energies near the center of the band, so the tunneling coupling can be treated as a constant; we set $\gamma=16\,\mu eV$.
By using the results given in ref.\[\], $A_{\mu }^{\beta }(\omega )$ can be evaluated analytically,
$$\begin{aligned}
A_{\mu }^{\alpha }(\omega ) &=&\frac{2\pi ^{2}}{\hbar \Omega z}\frac{\sin (z)%
}{\cos (2\pi \phi _{\mu }^{\alpha })-\cos (z)} \\
z &=&\pi \left( \frac{4\omega }{\hbar \Omega }+\frac{\omega _{so}^{2}}{%
\Omega ^{2}}\right) ^{1/2}\end{aligned}$$
where, $\phi _{\mu }^{\alpha }=\phi _{AB}^{\alpha }+\frac{1}{2}%
-\mu \frac{1}{2\cos \theta },$ is the net phase for the $\alpha $-ring. Then, we can obtain an analytical expression for the conductance,
$$\mathcal{G}_{\mu}(\omega)=\frac{e^{2}}{h}\frac{\left[ \left( \cos
(2\pi \phi _{\mu }^{u})-\cos
(z)\right) \left( \cos (2\pi \phi _{\mu }^{d})-\cos (z)\right) \right] ^{2}}{%
\left[ \left( \cos (2\pi \phi _{\mu }^{u})-\cos (z)\right) \left(
\cos (2\pi \phi _{\mu }^{d})-\cos (z)\right) \right] ^{2}+\beta
^{2}\left[ \cos (2\pi \phi _{\mu }^{u})+\cos (2\pi \phi _{\mu
}^{d})-2\cos (z)\right] ^{2}}.\label{conduc1}$$
with $\beta =\left( \gamma 2\pi ^{2}/\hbar \Omega \right) \left( \sin z/z\right)$.
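Equation (\[conduc1\]) lends itself to a direct numerical sanity check. The sketch below is illustrative only (not code from the paper); it uses the parameter values $\hbar\Omega=40\,\mu eV$ and $\gamma=16\,\mu eV$ quoted above and verifies perfect transmission when $\sin z=0$ (so $\beta=0$) and a Fano antiresonance when $\cos (2\pi \phi _{\mu }^{u})=\cos z$:

```python
import numpy as np

HBAR_OMEGA = 40e-6   # eV, ring energy scale for a = 120 nm (from the text)
GAMMA = 16e-6        # eV, tunneling coupling gamma (from the text)

def conductance(omega, phi_u, phi_d, so_ratio, mu):
    """Spin-dependent conductance of Eq. (conduc1), in units of e^2/h.

    omega          : Fermi energy in eV
    phi_u, phi_d   : AB fluxes of the two rings in units of Phi_0
    so_ratio       : omega_so / Omega (dimensionless SO strength)
    mu             : spin index, +1 or -1
    """
    theta = -np.arctan(so_ratio)
    z = np.pi * np.sqrt(4.0 * omega / HBAR_OMEGA + so_ratio**2)
    beta = (GAMMA * 2.0 * np.pi**2 / HBAR_OMEGA) * np.sinc(z / np.pi)  # sin(z)/z
    cz = np.cos(z)
    cu = np.cos(2.0 * np.pi * (phi_u + 0.5 - mu / (2.0 * np.cos(theta)))) - cz
    cd = np.cos(2.0 * np.pi * (phi_d + 0.5 - mu / (2.0 * np.cos(theta)))) - cz
    num = (cu * cd) ** 2
    return num / (num + beta**2 * (cu + cd) ** 2)

# Fano antiresonance: z = pi/2 and phi_u = 0.25 give cos(2*pi*phi_u) = cos(z).
g_zero = conductance(HBAR_OMEGA / 16, 0.25, 0.2, 0.0, 1)
# Resonance: z = pi makes sin(z) = 0, hence beta = 0 and perfect transmission.
g_one = conductance(HBAR_OMEGA / 4, 0.25, 0.2, 0.0, 1)
```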
An interesting situation appears when the energy spectra of the two rings become degenerate. This occurs when the magnetic fluxes threading the rings are equal ($\phi _{AB}^{u}=\phi _{AB}^{d}=\phi _{AB}$). For this case we obtain,
$$\mathcal{G}_{\mu }=\frac{e^{2}}{h}\frac{\left( \cos (2\pi \phi _{\mu })-\cos
(z)\right) ^{2}}{\left( \cos (2\pi \phi _{\mu })-\cos (z)\right) ^{2}+4\beta
^{2}}. \notag$$
The spin-dependent conductance vanishes when $\cos (2\pi \phi _{\mu })-\cos
(z)=0$, i.e., when $E_F =\varepsilon _{\mu n}^{\alpha }$. The zeroes in the conductance (Fano antiresonances) reproduce exactly the superposition of the spectra of the isolated rings. In fact, the conductance can be written as a superposition of symmetric Fano line shapes
$$\mathcal{G}_{\mu }=\frac{e^{2}}{h}\frac{\left( \epsilon _{\mu }+q\right) ^{2}%
}{\epsilon _{\mu }^{2}+1}. \notag$$
where $\epsilon _{\mu }=\left( \cos (2\pi \phi _{\mu })-\cos (z)\right) /2\beta $ is the detuning parameter and $q$ is the Fano parameter, which in this case is $q=0$.
![Spin-dependent conductance (upper layer) and spin polarization (lower layer) as a function of the Fermi energy (color online: black line $\protect\mu=+$, red line $\protect\mu=-$) for $\protect\alpha_{SO}=0.5\times 10^{-11}eVm$, $\protect\phi^{u}_{AB}=\protect\phi^{d}_{AB}=0.25$.[]{data-label="fig2"}](fig2.eps)
Figure \[fig2\] displays the spin-dependent linear conductance (upper layer) and spin polarization (lower layer) versus the Fermi energy for the symmetric case with $\phi _{AB}=0.25$ and a spin-orbit coupling $\alpha _{so}=0.5\times 10^{-11}eVm$. The energy spectrum consists of a superposition of quasi-bound states reminiscent of the corresponding localized spectrum of the isolated rings. As expected from the analytical expression (Eq. \[conduc1\]), the linear conductance displays a series of resonances and Fano antiresonances as a function of the Fermi energy. On the other hand, for this set of parameters the system shows zones of high polarization due to the splitting of the spin energy states.
![Spin-dependent conductance (upper layer) and spin polarization (lower layer) as a function of the Fermi energy (color online: black line $\protect\mu=+$, red line $\protect\mu=-$) for $\protect\alpha_{SO}=0.5\times 10^{-11}eVm$, $\protect\phi^{u}_{AB}=0.3$ and $\protect\phi^{d}_{AB}=0.2$.[]{data-label="fig3"}](fig3.eps)
Now we analyze the asymmetric case, i.e., $\phi _{AB}^{u}\neq \phi_{AB}^{d}$. Figure \[fig3\] displays the spin-dependent linear conductance (upper layer) and spin polarization (lower layer) versus the Fermi energy for a spin-orbit coupling $\alpha_{so}=0.5\times 10^{-11}eVm$ and magnetic fluxes $\phi _{AB}^{u}=0.3$ and $\phi _{AB}^{d}=0.2$. Again, the zeroes in the conductance represent exactly the superposition of the spectra of the isolated rings $\varepsilon _{\mu n}^{\alpha }$. In fact, the conductance now vanishes when $\cos (2\pi \phi _{\mu}^{u})-\cos (z)=0$ or $\cos (2\pi \phi _{\mu }^{d})-\cos (z)=0,$ i.e., when $E_{F}=\varepsilon _{\mu n}^{\alpha }$. Notice that, due to the difference between the two fluxes, new resonances appear in the conductance. This also affects the structure of the polarization.
We note that when there is a magnetic flux difference $\delta \phi_{AB}=\phi _{AB}^{u}-\phi _{AB}^{d}$, high spin polarization can be obtained even for small values of the spin-orbit coupling. In fact, for small spin-orbit coupling, maxima of the polarization are reached by adjusting the magnetic flux difference $\delta \phi _{AB}$. Let us analyze this situation in detail. The maxima of the conductance are obtained when $\sin z=0$ or when $\left( \cos(2\pi \phi _{\mu }^{u})+\cos (2\pi \phi _{\mu }^{d})-2\cos(z)\right) =0.$ The first condition is spin-independent and is not of interest here. The second condition is spin-dependent and, for a small magnetic flux difference, can be written as $\cos \left[ 2\pi \left( \frac{\phi _{\mu }^{u}+\phi_{\mu }^{d}}{2}\right) \right] -\cos (z)\approx 0.$ This occurs for the energies given by $\widetilde{\varepsilon }_{\mu n}=\hbar \Omega \left[ \left( n-\widetilde{\phi }_{\mu }+\frac{1}{2}-\mu \frac{1}{2\cos\theta }\right) ^{2}-\frac{1}{4}\tan ^{2}\theta \right] ,$ where $\widetilde{\phi }_{\mu }=\frac{\phi _{\mu }^{u}+\phi _{\mu }^{d}}{2}$, i.e., the positions of the conductance maxima correspond to the spectrum of an effective ring with phase $\widetilde{\phi }_{\mu}.$ Therefore, the maxima of the polarization occur when the minima of the conductance for one spin state coincide with the maxima of the conductance for the opposite spin (or vice versa), that is, $\widetilde{\varepsilon }_{\mu n}=\varepsilon _{\overline{\mu }n+1}^{\alpha },$ which gives $\delta \phi_{AB}=\left( 1-\cos \theta \right) /\cos \theta \approx\frac{1}{2}\left( \omega _{so}/\Omega \right) ^{2}$. Thus, for a given spin-orbit coupling, the maxima of the spin polarization are reached by adjusting the magnetic flux difference between the upper and lower rings. Fig.\[fig4\] displays the spin-dependent conductance (upper layer) and the spin polarization (lower layer) for $\widetilde{\phi }_{AB}=0.25$, $\alpha_{so}=5\times 10^{-12}eVm$ and $\delta \phi _{AB}=0.004988$. The conductance shows broad and sharp peaks, and the spin polarization shows a series of sharp maxima. Fig.\[fig5\] displays a zoom of the conductance (left panel) and the polarization (right panel) as a function of the Fermi energy. Clearly, the sharp peaks and Fano antiresonances for the two spin states are shifted, giving rise to the polarization maxima. For comparison we plot the corresponding conductance and polarization for one ring for the same values of the magnetic flux and spin-orbit coupling (Fig. \[fig6\]). For these parameters the spin polarization of one ring is very low for both spin states. The inset in Fig. \[fig6\] (lower panel) shows a zoom of the spin polarization.
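The flux-difference condition can be checked numerically. The sketch below is illustrative (not code from the paper; $\omega_{so}/\Omega = 0.1$ is an assumed representative value): it compares the exact expression $\delta \phi_{AB}=(1-\cos\theta)/\cos\theta$ with its small-coupling expansion $\frac{1}{2}(\omega_{so}/\Omega)^2$ and reproduces the value $\delta\phi_{AB}\approx 0.004988$ used in Figs. \[fig4\] and \[fig5\]:

```python
import numpy as np

# Exact flux-difference condition for polarization maxima, and its
# small-coupling expansion (both taken from the expressions in the text).

def delta_phi_exact(so_ratio):
    """delta_phi_AB = (1 - cos theta)/cos theta, theta = -arctan(omega_so/Omega)."""
    theta = -np.arctan(so_ratio)
    return (1.0 - np.cos(theta)) / np.cos(theta)

def delta_phi_approx(so_ratio):
    """Leading-order expansion (1/2)(omega_so/Omega)^2."""
    return 0.5 * so_ratio**2

# An assumed representative value omega_so/Omega = 0.1 gives ~0.004988.
dphi = delta_phi_exact(0.1)
```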
![Spin-dependent conductance (upper layer) and spin polarization (lower layer) as a function of the Fermi energy (color online: black line $\protect\mu=+$, red line $\protect\mu=-$) for $\protect\alpha_{so}=0.5\times 10^{-12}eVm$, $\widetilde{\protect\phi}_{AB}=0.25$ and $\protect\delta\protect\phi _{AB}=0.004988$.[]{data-label="fig4"}](fig4.eps)
![Spin-dependent conductance (left panel) and spin polarization (right panel) as a function of the Fermi energy (color online: black line $\protect\mu=+$, red line $\protect\mu=-$) for $\protect\alpha_{so}=0.5\times 10^{-12}eVm$, $\widetilde{\protect\phi}_{AB}=0.25$ and $\protect\delta\protect\phi _{AB}=0.004988$.[]{data-label="fig5"}](fig5.eps)
![One-ring spin-dependent conductance (upper layer) and spin polarization (lower layer) as a function of the Fermi energy (color online: black line $\protect\mu=+$, red line $\protect\mu=-$) for $\protect\alpha_{SO}=0.5\times 10^{-11}eVm$, $\protect\phi_{AB}=0.25$.[]{data-label="fig6"}](fig6.eps)
For small values of the magnetic flux difference $\delta \phi _{AB}$, the conductance of the two-ring system can be written approximately as a superposition of a broad Fano line shape and a narrow Breit-Wigner line shape. That is, $$\mathcal{G}_{\mu }\approx \frac{e^{2}}{h}\left[ \frac{\left( \epsilon _{\mu
}+q\right) ^{2}}{\epsilon _{\mu }^{2}+1}+\frac{\eta _{\mu }^{2}}{x_{\mu
}^{2}+\eta _{\mu }^{2}}\right] .$$where the width is $\eta _{\mu }=\left( \sin 2\pi \widetilde{\phi }_{\mu }\sin 2\pi \delta \phi _{AB}\right) ^{2}/(2\gamma \beta )$ and $x_{\mu }=2\beta \epsilon _{\mu }$. As we discussed in a previous paper [@pedro2], this expression clearly shows the superposition of short- and long-lived states developed in the rings. The appearance of quasi-bound states in the spectrum of the system is a consequence of the mixing of the levels of both rings, which are coupled indirectly through the continuum of states in the wire. A similar effect was discussed recently by Wunsch et al. for a ring coupled to a reservoir in ref.\[\]. They relate this kind of collective state to the Dicke effect in optics [@dicke], in which the luminescence spectrum of a pair of closely spaced atoms is characterized by a narrow and a broad peak, associated with long- and short-lived states, respectively. This feature allows one to obtain high spin polarization even for small spin-orbit coupling by adjusting the magnetic flux difference $\delta \phi _{AB}$. High spin polarization persists even for small values of the magnetic flux. For instance, Fig. \[fig7\] displays the conductance and spin polarization as a function of the Fermi energy for $\widetilde{\phi }_{AB}=0.01$, $\alpha _{so}=5\times 10^{-12}eVm$ and $\delta\phi_{AB}=0.004988$. The spin polarization shows sharp peaks for the two spin states. In comparison with a single ring side-coupled to a quantum wire, the two-ring system allows us to obtain high spin polarization even for small spin-orbit interaction and small magnetic fluxes, provided a small difference between these fluxes is maintained.
![Spin-dependent conductance (upper layer) and spin polarization (lower layer) as a function of the Fermi energy (color online: black line $\protect\mu=+$, red line $\protect\mu=-$) for $\protect\alpha_{SO}=0.5\times 10^{-12}eVm$, $\widetilde{\protect\phi}_{AB}=0.01$ and $\protect\delta\protect\phi_{AB}=0.004988$.[]{data-label="fig7"}](fig7.eps)
Summary
=======
We have investigated the spin-dependent conductance and spin polarization of a system of two quantum rings side-attached to a quantum wire in the presence of magnetic fluxes threading the rings and Rashba spin-orbit interaction. We showed that, by exploiting the Fano and Dicke effects, this system can be used as an efficient spin filter. We compared the spin-dependent polarization of this design with the polarization obtained with a single ring side-coupled to a quantum wire. As a main result, we find better spin-filtering capabilities than in the one-ring design. We also find that the spin polarization of this system is much more sensitive to the magnetic flux and spin-orbit interaction than in the case of a single ring side-coupled to the quantum wire. This behavior is interesting not only from a theoretical point of view, but also for its potential technological applications.
ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============
P. A. O. and M. P. would like to thank financial support from CONICYT/Programa Bicentenario de Ciencia y Tecnologia (CENAVA, grant ACT27).
[99]{} V. Chandrasekhar, R.A. Webb, M.J. Brady, M.B. Ketchen, W.J. Gallagher and A. Kleinsasser, Phys. Rev. Lett. **67** 3578 (1991).
D. Mailly, C. Chapelier and A. Benoit, Phys. Rev Lett. **70** 2020 (1993).
U. F. Keyser, C. Fühner, S. Borck and R. J. Haug, Semicond. Sci. Technol. **17**, L22 (2002).
Jorge L. D’Amato, Horacio M. Pastawski, and Juan F. Weisz, Phys. Rev. B **39**, 3554 (1989).
P. A. Orellana, M. L. Ladrón de Guevara, M. Pacheco, and A. Latgé, Phys. Rev. B **68**, 195321 (2003).
S. Datta and B. Das, Appl. Phys. Lett. **56**, 665 (1990).
J. F. Song, Y. Ochiai and J. P. Bird, Appl. Phys. Lett. **82**, 4561 (2003).
J. A. Folk, R. M. Potok, C. M. Marcus and V. Umansky, *Science* **299**, 679 (2003).
F. Mireles and G. Kirczenow, Phys. Rev. B **64**, 024426 (2001).
F. Mireles, S. E. Ulloa, F. Rojas and E. Cota, Appl. Phys. Lett. **88**, 093118 (2006).
M. Berciu and B. Janko, Phys. Rev. Lett. **90**, 246804 (2003).
I. A. Shelykh, N. G. Galkin, and N. T. Bagraev Phys. Rev. B **74**, 165331 (2006).
Minchul Lee and C. Bruder Phys. Rev. B **73**, 085315 (2006).
P. A. Orellana, and M. Pacheco, Phys. Rev. B **71**, 235330 (2005).
R. H. Dicke, Phys. Rev. **89**, 472 (1953).
F. E. Meijer, A. F. Morpurgo, and T. M. Klapwijk, Phys. Rev. B 66, 033107 (2002).
J. Luo, H. Munekata, F. F. Fang and P. J. Stiles Phys. Rev. B **38**, 10142 - 10145 (1988)
B. Wunsch, A. Chudnovskiy, Phys. Rev. B **68**, 245317 (2003).
|
---
author:
- 'Allan Jabri, Armand Joulin, and Laurens van der Maaten'
bibliography:
- 'egbib.bib'
title: Revisiting Visual Question Answering Baselines
---
|
---
abstract: 'An experimental success criterion for continuous-variable quantum teleportation and memories is to surpass a limit of the average fidelity achieved by the classical measure-and-prepare schemes with respect to a Gaussian distributed set of coherent states. We present an alternative proof of the classical limit based on the familiar notions of the state-channel duality and the partial transposition. The present method enables us to produce a quantum-domain criterion associated with a given set of measured fidelities.'
author:
- Ryo Namiki
date: 'April 6, 2011'
title: |
Simple proof of the quantum benchmark fidelity\
for continuous-variable quantum devices
---
In order to realize quantum information processing [@NC00], a central challenge is to establish reliable quantum channels that transmit and store quantum states faithfully. For a given experimental implementation of a quantum channel, it is natural to ask whether or not its performance originates from quantum coherence. This question is vital for asserting the success of an experimental quantum teleportation [@Benn93], since teleportation transmits quantum states by consuming quantum entanglement and has to achieve a fidelity of transmission beyond that of classical transmission without entanglement [@Pop94; @Mass95; @Horo99]. At present, the framework for proving the effect of entanglement can be applied to a wide class of experiments, including quantum memory [@RMP82] and quantum key distribution [@rmp-qkd]. With the increase of activity in experimental research, there has been a growing interest in producing more practical and accessible settings for the proof of entanglement [@Bra00; @Ham05; @namiki07; @Has10; @namiki08; @Rig06; @Takano08; @Has08; @Fuc03; @Cal09].
A central notion in demonstrating quantum advantage over classical processes is to outperform all *classical measure-and-prepare* (MP) schemes [@Bra00; @Ham05; @namiki07; @Has10; @namiki08; @Cal09]. A classical MP scheme is an entanglement-breaking (EB) channel, which breaks any entanglement shared between the system subject to the process and any other system [@16]. If a process is incompatible with every EB channel, one can find an entangled state whose inseparability survives after one of its subsystems is subjected to the process. In this case we say the process is in the *quantum domain*. A natural figure of merit for the performance of the process is the average of the fidelities between the ideal output (*target*) states and the actual output states over a set of input states drawn from a given prior probability distribution [@namiki07; @namiki08]. The classical limit of the average fidelity achievable by classical MP schemes is called the *quantum benchmark* fidelity. Surpassing this fidelity limit constitutes a proof of entanglement and a basic success criterion for experiments implementing quantum devices [@Furusawa98; @Jul04; @Lob09].
In quantum optics and continuous-variable quantum information processing [@CV-RMP], the coherent state is one of the most accessible quantum states, and it is natural to test a device with coherent-state inputs. It is theoretically simple to determine the classical limit assuming a uniform distribution of coherent states; however, neither testing the input-output relation for every coherent state nor assuming the displacement-covariant property for a real device is feasible. Hence, a Gaussian distribution has been employed to observe the performance on an essentially flat distribution over a feasible range of phase-space displacements [@Bra00; @Ham05]. The value of the quantum benchmark fidelity with respect to the Gaussian-distributed set of coherent states was conjectured [@Bra00], and this conjecture was proven in [@Ham05]. After the rigorous proof [@Ham05], the classical limit fidelity for a class of non-unit-gain tasks was derived in order to deal with highly lossy processes, such as a long-distance transmission channel or a quantum memory process with a longer storage time [@namiki07]. The proof of [@Ham05] has also been utilized in the problem of nonlocality without entanglement [@Nis07]. In view of this general importance, it would be insightful to find a different route to the fundamental benchmark.
In this report, we present an alternative proof of the quantum benchmark fidelity for continuous-variable quantum devices with respect to the transformation of a Gaussian-distributed set of coherent states. The proof is based on two well-established notions: the Choi-Jamiolkowski state-channel duality (see, e.g., [@Holevo10]) and the partial transpose [@Peres]. The state-channel duality is a standard tool for studying the properties of quantum channels, whereas the partial transpose plays a central role in the theory of entanglement. Thanks to these reliable basics, we can directly observe that the quantum benchmark problem is a type of separability problem for the quantum channel. We also apply the present method to give a quantum-domain criterion associated with a set of experimentally measured fidelities.
We use a standard notation to denote the coherent state with the complex amplitude $\alpha$ by ${\left | \alpha \right \rangle}$ and the number state with the photon number $n$ by ${\left | n \right \rangle}$. The coherent state is expanded in the number basis as ${\left | \alpha \right \rangle}= e^{-|\alpha |^2 /2} \sum_{n=0}^\infty \alpha ^n {\left | n \right \rangle} /\sqrt{n!}$. When we work on the state with two modes, we call the first system $A$ and the second system $B$.
Let us define the average fidelity of a physical process $\mathcal E$ for the transformation task on the coherent states $\{|\sqrt N \alpha\rangle \} \to \{{\left | \sqrt \eta \alpha \right \rangle}\} $ with $N, \eta > 0$ by $$\begin{aligned}
F_{N, \eta, \lambda} (\mathcal E) &:=& \int p_\lambda( \alpha ){ \left \langle \sqrt \eta \alpha \right |}\mathcal E \Big(|\sqrt N \alpha \rangle \langle \sqrt N \alpha| \Big) {\left | \sqrt\eta \alpha \right \rangle} d^2 \alpha \nonumber \\ \label{eq1} \end{aligned}$$ where the prior distribution of a symmetric Gaussian function with an inverse width of $\lambda >0 $ is given by $$\begin{aligned}
p_\lambda( \alpha ) := \frac{\lambda }{\pi} \exp (- \lambda |\alpha |^2 ).\label{eq2}\end{aligned}$$ It reproduces the uniform distribution in the limit $\lambda \to 0$. In the first proof [@Ham05], the unit-gain transformation $\{{\left | \alpha \right \rangle}\} \to \{{\left | \alpha \right \rangle}\}$ was considered so as to establish a benchmark for the channel that is expected to retrieve input states without disturbance, such as the action of ideal quantum teleportation and quantum memory. The factor $N$ was introduced to consider a type of state estimation from $N$-copies of the coherent states ${\left | \alpha \right \rangle} ^{\otimes N}$ in Ref. [@Nis07] while the factor $\eta$ was introduced to consider the effect of loss and amplification in Ref. [@namiki07].
*Quantum benchmark fidelity.—* The quantum benchmark fidelity for the above transformation task is defined by the maximum of the fidelity in Eq. (\[eq1\]) under optimization of the quantum channel $\mathcal E$ over EB channels, and is shown to be [@Ham05; @namiki07; @Nis07] $$\begin{aligned}
\sup_{\mathcal E \in EB} F_{N,\eta, \lambda} (\mathcal E) = {\frac{ N + \lambda }{ N+ \lambda + \eta } }=: F_C (N,\eta, \lambda ) \label{main} \end{aligned}$$ where $EB$ stands for the set of EB channels. Since we can verify the relations $F_{N, \eta, \lambda} = F_{\frac{N}{\eta },1, \frac{\lambda}{\eta }} = F_{1, \frac{\eta }{N}, \frac{\lambda}{N} }$ from Eqs. (\[eq1\]) and (\[eq2\]), it is sufficient to show the relation of Eq. (\[main\]) for either the case $\eta = 1$ [@Nis07] or the case $N= 1$ [@namiki07]. In the following we prove Eq. (\[main\]) with $\eta = 1 $. The central idea of the present proof is to connect the fidelity to a two-mode squeezed state via a form of the state-channel duality. The problem then reduces to finding the maximum expectation value of an observable without entanglement, which can be solved by using the notion of the partial transpose.
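The classical limit is elementary to evaluate numerically. The sketch below (illustrative, with a function name of our own choosing) checks the scaling relation among $(N,\eta,\lambda)$ quoted above and recovers the familiar benchmark $F_C=1/2$ for the unit-gain, single-copy task in the flat-prior limit $\lambda\to 0$:

```python
# Quantum benchmark fidelity of Eq. (main): the best average fidelity
# attainable by any classical measure-and-prepare (EB) scheme.

def classical_limit_fidelity(n_copies, eta, lam):
    """F_C(N, eta, lambda) = (N + lambda) / (N + lambda + eta)."""
    return (n_copies + lam) / (n_copies + lam + eta)

# Unit gain, single copy, flat prior (lambda -> 0): the 1/2 benchmark.
f_flat = classical_limit_fidelity(1, 1, 0)
```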
*Proof.—* Let us consider the following integration with the parameters $s, \kappa \ge 0 $ and $0\le \xi < 1 $, $$\begin{aligned}
J_{ \mathcal E}(s, \kappa ,\xi ) &:=& \int {d}^2 \alpha \, p_s(\alpha ) { \left \langle \alpha \right |}_A { \left \langle \kappa \alpha^* \right |}_B \nonumber\\ & & \mathcal E_A \otimes I_B \left( {\left | \psi_\xi \right \rangle}{ \left \langle \psi_\xi \right |} \right ) {\left | \kappa \alpha^* \right \rangle}_B
{\left | \alpha \right \rangle}_A \label{start}\end{aligned}$$ where ${\left | \psi_\xi \right \rangle}= \sqrt{1-\xi ^2} \sum_{n=0}^\infty \xi^n{\left | n \right \rangle}{\left | n \right \rangle}$ is the two-mode squeezed state and $I$ represents the identity process. Using the relation ${ \left \langle \alpha \right |}{\left | \psi_\xi \right \rangle} = \sqrt{1-\xi ^2} e^{-(1-\xi ^2 )|\alpha | ^2 /2} {\left | \xi \alpha ^* \right \rangle}$ we can verify the following identity: $$\begin{aligned}
J_{ \mathcal E}(s,\kappa ,\xi ) &=& \frac{s(1-\xi^2)}{\lambda} F_{N,1, \lambda} (\mathcal E ) \label{jj}\end{aligned}$$ where the parameters are connected as $$\begin{aligned}
\lambda &=& s+ (1-\xi^2)\kappa ^2, \label{lam}\\
\sqrt N &=& \kappa \xi. \label{n}\end{aligned}$$ In order to find an upper bound on the fidelity we consider an upper bound on $J_{\mathcal E}$. If $\mathcal E$ is a MP scheme, $\rho_{\mathcal E} := \mathcal E \otimes I \left( {\left | \psi_\xi \right \rangle}{ \left \langle \psi_\xi \right |} \right )$ is a separable state [@16]. Hence $\sup_{\mathcal E \in EB} J_{ \mathcal E}$ is bounded above by the maximum of $ J_{\mathcal E}$ when $ \rho_{\mathcal E}$ is optimized over the whole set of separable states; namely, the following inequality holds, $$\begin{aligned}
\sup_{\mathcal E \in EB} J_{ \mathcal E}(s,\kappa ,\xi) &\le& \max_{\rho \in Sep.} \tr \left[ \rho M
\right] , \nonumber \end{aligned}$$ where $Sep.$ represents the set of separable states and $$\begin{aligned}
M:= \int p_s(\alpha) {\left | \alpha \right \rangle}{ \left \langle \alpha \right |} \otimes {\left | \kappa \alpha ^* \right \rangle}{ \left \langle \kappa \alpha ^* \right |} d^2\alpha . \nonumber\end{aligned}$$ Note that the maximum over separable states can be achieved by a product state and that the optimization over product states is equivalent to the optimization over their partial transpose. Hence, for any $\rho \in Sep.$, we can verify $ \tr [ \rho M ] \le \max_{\phi,\varphi }\tr M {{\left | \phi \right \rangle}{ \left \langle \phi \right |}} \otimes {{\left | \varphi \right \rangle}{ \left \langle \varphi \right |}} = \max_{\phi,\varphi }\tr M \Gamma [ {{\left | \phi \right \rangle}{ \left \langle \phi \right |}} \otimes {{\left | \varphi \right \rangle}{ \left \langle \varphi \right |}}] = \max_{\phi, \varphi} \tr \Gamma [M] {{\left | \phi \right \rangle}{ \left \langle \phi \right |}} \otimes {{\left | \varphi \right \rangle}{ \left \langle \varphi \right |}} $ where $\Gamma [ \cdot]$ denotes the partial transposition map. This implies $$\begin{aligned}
\sup_{\mathcal E \in EB} J_{ \mathcal E}(s,\kappa ,\xi ) &\le& \max_{\rho \in Sep.} \tr \rho \Gamma [M ] \le \| \Gamma [ M ] \| \label{ff} \end{aligned}$$ where the last inequality comes from the fact that the maximum over separable states is no larger than the maximum over all physical states and $\| \cdot \|:= \max_{{\langle u | u \rangle}=1} { \left \langle u \right |} \cdot {\left | u \right \rangle} $ denotes the maximum eigenvalue. Since the transpose of the coherent state with respect to the number basis acts as a phase conjugation, by taking the replacement ${\left | \kappa \alpha ^* \right \rangle}{ \left \langle \kappa \alpha ^* \right |} \to {\left | \kappa \alpha \right \rangle}{ \left \langle \kappa \alpha \right |} $ on $M$ we have $\Gamma [M ] = \int p_s(\alpha) {\left | \alpha \right \rangle}{ \left \langle \alpha \right |} \otimes {\left | \kappa \alpha \right \rangle}{ \left \langle \kappa \alpha \right |} d^2\alpha$. By using the beam-splitter transformation $\hat V | \sqrt{1 + \kappa ^2 } \alpha \rangle {\left | 0 \right \rangle} = {\left | \alpha \right \rangle}{\left | { \kappa } \alpha \right \rangle} $ we can write $$\begin{aligned}
\| \Gamma [M] \| &= & \| \hat V^\dagger \Gamma [M] \hat V \|\nonumber \\ & =& \left\| \int p_s( \alpha ) |\sqrt{1+\kappa ^2 }\alpha \rangle\langle \sqrt{1+\kappa ^2 } \alpha | \otimes |0\rangle \langle0 | d^2 \alpha \right\|\nonumber \\ &=& \left\| T \left( \frac{1+ \kappa^2}{s}\right ) \otimes {{\left | 0 \right \rangle}{ \left \langle 0 \right |}} \right\| = \frac{s }{s+1 +\kappa ^2} \label{ongm}
\end{aligned}$$ where $$\begin{aligned}
T( \bar n ) &:= & {\frac{1 }{ 1+ \bar n }} \sum_{n =0}^{\infty} \left( \frac{\bar n }{ 1+ \bar n } \right)^{n } |n \rangle \langle n | \nonumber \end{aligned}$$ is the thermal state with the mean photon number $\bar n $. Equations (\[ff\]) and (\[ongm\]) lead to $$\begin{aligned}
\sup_{\mathcal E \in EB} J_{ \mathcal E}(s,\kappa ,\xi ) &\le & \frac{s }{s+1+ \kappa ^2 }. \nonumber \end{aligned}$$ Using this relation and Eqs. (\[jj\]), (\[lam\]) and (\[n\]), we have $$\begin{aligned}
\sup_{\mathcal E \in EB} F_{N,1, \lambda } (\mathcal E ) &\le& \frac{\lambda}{ (1-\xi^2)} \frac{1}{N+ \lambda +1 } \label{sonamae} . \end{aligned}$$ From the condition $s\ge 0 $ with Eqs. (\[lam\]) and (\[n\]), we have $$\begin{aligned}
\frac{\lambda}{1-\xi ^2} \le N+ \lambda . \label{scondition}\end{aligned}$$ From Eqs. (\[sonamae\]) and (\[scondition\]), we obtain the upper bound $$\begin{aligned}
\sup_{\mathcal E \in EB} F_{N,1, \lambda } (\mathcal E ) &\le& \frac{N+\lambda }{N+ \lambda + 1 } = F_C(N, 1, \lambda). \end{aligned}$$ This bound can be achieved by the EB channel $\mathcal E_{EB} (\rho ):= \frac{1}{\pi} \int { \left \langle \alpha \right |}\rho {\left | \alpha \right \rangle} {\left | \frac{\sqrt{ N} \alpha }{N +\lambda} \right \rangle}{ \left \langle \frac{\sqrt{ N}\alpha }{N +\lambda} \right |} d^2 \alpha$. We thus have $\sup_{\mathcal E \in EB} F_{N,1, \lambda } (\mathcal E ) \ge F_{N,1,\lambda}(\mathcal E _{EB})= F_C(N, 1, \lambda)$. This concludes Eq. (\[main\]) with $\eta =1$.$\blacksquare$
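The key step of the proof, $\| \Gamma [M] \| = s/(s+1+\kappa^2)$, can also be verified numerically. The sketch below (illustrative only, not from the paper) computes the diagonal Fock-basis elements of the reduced single-mode state by quadrature and checks that they form a thermal distribution with mean photon number $(1+\kappa^2)/s$, whose largest eigenvalue is the $n=0$ element:

```python
import numpy as np
from math import factorial

def gamma_m_diagonal(n, s, kappa):
    """<n|.|n> of the single-mode reduction of Gamma[M], by quadrature.

    Analytically this equals [s/(s+c)] * [c/(s+c)]**n with c = 1 + kappa^2,
    i.e. a thermal distribution with mean photon number c/s.
    """
    c = 1.0 + kappa**2
    u = np.linspace(0.0, 50.0 / (s + c), 200001)   # u = |alpha|^2
    integrand = s * np.exp(-(s + c) * u) * (c * u)**n / factorial(n)
    du = u[1] - u[0]
    # composite trapezoid rule (kept explicit for clarity)
    return du * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

s, kappa = 1.0, 0.5
predicted_norm = s / (s + 1.0 + kappa**2)          # claimed largest eigenvalue
```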
It is well known that the inseparability of two-mode Gaussian states can be characterized by using the standard form of the covariance matrices of the Gaussian states under local Gaussian unitary operations [@Duan00]. Similarly, one-mode Gaussian channels can be described by a pair of $2\times 2$ matrices that determine the transformation of the covariance matrices, and they are classified into a few standard forms under suitable unitary operations before and after the channel [@Hol08]. Two of the standard forms are relevant to quantum-domain channels. In both forms one can find a proper set of the parameters $(N, \eta , \lambda)$ such that the classical limit fidelity is surpassed whenever the given channel is in the quantum domain [@namiki07]. In this sense, the output-target fidelity with the Gaussian-distributed set of coherent states is capable of detecting any one-mode Gaussian channel in the quantum domain. In experiments we usually obtain a finite set of measured fidelities. Such a data set is not enough to directly calculate the integral in Eq. (\[eq1\]), and $F (\mathcal E )$ is estimated by invoking additional assumptions. It is preferable if one can check a quantum-domain criterion directly associated with the set of measured fidelities, without additional assumptions. In the following we present a general theorem that produces a quantum-domain criterion associated with a given set of measured fidelities. The proof of this theorem is essentially the same as the proof above. It is remarkable that the criterion can be generated by a simple calculation of a maximum eigenvalue.
*In-situ generation of a quantum-domain condition.—* Let us consider a set of input states $\{{\left | \psi_i \right \rangle} \}$, a set of target states $\{{\left | \psi_i ' \right \rangle} \} $, and a prior probability distribution $\{p_i\}$ with $\sum_i p_i=1 $. We can show that the following theorem holds: a process $\mathcal E$ is in the quantum domain if $$\begin{aligned}
\bar F [\mathcal E ; p_i; \psi_i \to \psi_i ' ] > d \left\| \sum_i p_i {{\left | \psi_i ' \right \rangle}{ \left \langle \psi_i ' \right |}}
\otimes {{\left | \psi_i \right \rangle}{ \left \langle \psi_i \right |}} \right\| , \label{QDC}\end{aligned}$$ where the average fidelity is given by $$\begin{aligned}
\bar F [\mathcal E ; p_i; \psi_i \to \psi_i ' ] := \sum_i p_i { \left \langle \psi_i ' \right |} \mathcal E ({{\left | \psi_i \right \rangle}{ \left \langle \psi_i \right |}}) {\left | \psi_i ' \right \rangle} , \nonumber
\end{aligned}$$ and $d$ is the dimension of the Hilbert space spanned by the set of input states $\{{\left | \psi_i \right \rangle} \}$. Note that the experiment determines the set of the fidelities $ \{ { \left \langle \psi_i ' \right |} \mathcal E ({{\left | \psi_i \right \rangle}{ \left \langle \psi_i \right |}}) {\left | \psi_i ' \right \rangle} \} $ whereas the choice of the probability distribution $\{p_i\}$ is arbitrary.
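As a quick numerical illustration (our own sketch, not part of the original derivation), the right-hand side of Eq. (\[QDC\]) can be evaluated with a few lines of linear algebra. For mutually orthogonal input states with the identity task the bound evaluates to $1$, consistent with the fact that a measure-and-prepare scheme transmits orthogonal states perfectly, so no fidelity can certify the quantum domain for such an ensemble.

```python
import numpy as np

def classical_limit_bound(inputs, targets, probs):
    """Right-hand side of Eq. (QDC):
    d * || sum_i p_i |psi'_i><psi'_i| (x) |psi_i><psi_i| ||,
    where d is the dimension spanned by the input states."""
    d = np.linalg.matrix_rank(np.column_stack(inputs))
    M = sum(p * np.kron(np.outer(t, t.conj()), np.outer(s, s.conj()))
            for p, s, t in zip(probs, inputs, targets))
    # operator norm of a positive operator = largest eigenvalue
    return d * np.linalg.eigvalsh(M).max()

# Orthogonal inputs, identity task: the bound is trivially 1.
e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
bound = classical_limit_bound([e0, e1], [e0, e1], [0.5, 0.5])
```

For nonorthogonal ensembles the same function applies unchanged; the nontrivial Haar-uniform case is discussed in the example below.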
*Proof.—* Let $\{{\left | u_k \right \rangle}\}_{k=0,1,2, \cdots , d-1 }$ be an orthonormal basis of the $d$-dimensional Hilbert space. We define the maximally entangled state of a pair of $d$-level systems by $ {\left | \Phi_d \right \rangle}:= \sum_{k=0}^{d-1} {\left | u_k \right \rangle}{\left | u_k \right \rangle} /\sqrt{d} $. We also define the complex conjugate of a $d$-dimensional state by ${\left | \psi^* \right \rangle}:= \sum_k {\langle \psi | u_k \rangle} {\left | {u_k} \right \rangle} = \sqrt d { \left \langle \psi \right |} {\left | \Phi_d \right \rangle} $. Then, we can write $$\begin{aligned}
&& \bar F [\mathcal E ; p_i; \psi_i \to \psi_i ' ] \nonumber \\
&=& d \sum_i p_i { \left \langle \psi_i ' \right |}_A { \left \langle {\psi_i ^*} \right |}_B \mathcal E_A \otimes I_B ({{\left | \Phi_d \right \rangle}{ \left \langle \Phi_d \right |}} ) {\left | \psi_i^* \right \rangle}_B {\left | \psi_i ' \right \rangle}_A \nonumber \\
&=& d \tr [ M \rho_{\mathcal E} ] \label{lllf} \end{aligned}$$ where we write $M = \sum_i p_i ({{\left | \psi_i ' \right \rangle}{ \left \langle \psi_i ' \right |}})_A \otimes ( {{\left | {\psi_i^* } \right \rangle}{ \left \langle \psi_i^* \right |}})_B $ and $\rho_{\mathcal E} = \mathcal E_A \otimes I_B ({{\left | \Phi_d \right \rangle}{ \left \langle \Phi_d \right |}} )$. The state $\rho_{\mathcal E}$ is the state associated with $\mathcal E$ through the standard Choi-Jamiolkowski isomorphism. In the continuous-variable case, we used a two-mode squeezed state instead of the unnormalizable maximally entangled state [@Holevo10].
If the process $ \mathcal E$ is an MP scheme, $\rho_{\mathcal E} = \mathcal E_A \otimes I_B ({{\left | \Phi_d \right \rangle}{ \left \langle \Phi_d \right |}} )$ belongs to the set of separable states [@16]. Hence, the maximum of the average fidelity over all MP schemes is bounded above by the maximum of the final expression of Eq. (\[lllf\]) obtained by optimizing the state $\rho_{\mathcal E}$ over separable states. This implies $$\begin{aligned}
\max_{\mathcal E \in EB} \bar F [\mathcal E ; p_i; \psi_i \to \psi_i ' ] \le d \max_{\rho \in Sep.} \tr [ M \rho ] . \label{17}\end{aligned}$$ Since the optimization over separable states can be converted into the optimization over their partial transpose, we have $$\begin{aligned}
\max_{\rho \in Sep.} \tr [ M \rho ] &=& \max_{\rho \in Sep.} \tr [ \Gamma [M ] \rho ] \label{18} \end{aligned}$$ where $\Gamma$ stands for the partial transposition map again. Since the maximum over separable states is bounded above by the maximum over all physical states, we have $$\begin{aligned}
\max_{\rho \in Sep.} \tr [ \Gamma[ M ] \rho ] &\le& \max_{\rho } \tr [ \Gamma [M ] \rho ] = \| \Gamma [M ] \|. \label{19} \end{aligned}$$ When we choose the partial transposition of the second system with respect to the basis $\{{\left | u_k \right \rangle}\} $, we have $$\begin{aligned}
\Gamma [M]= \sum_i p_i {{\left | \psi_i ' \right \rangle}{ \left \langle \psi_i ' \right |}}
\otimes {{\left | \psi_i \right \rangle}{ \left \langle \psi_i \right |}}. \label{GammaM}\end{aligned}$$ Concatenating Eqs. (\[17\])-(\[19\]) and (\[GammaM\]) we can see that the maximum fidelity over all MP schemes is bounded above by the right-hand side of Eq. (\[QDC\]). Hence, if a quantum channel provides a fidelity higher than this limit, it is incompatible with any classical MP scheme. $\blacksquare$
Consequently, if Ineqs. (\[17\]) and (\[19\]) are tight, we immediately obtain the classical limit simply by calculating the maximal eigenvalue of the operator on the right-hand side of Eq. (\[QDC\]). This is the case for the following example.
*Example.—* Let us consider the uniform set of input states over the $d$-dimensional Hilbert space and the task of implementing a unitary map, setting the target state $ {\left | \psi' \right \rangle} = U {\left | \psi \right \rangle}$ for any input ${\left | \psi \right \rangle}$. In this case it is well known that the classical limit fidelity is given by [@Pop94; @Mass95; @Horo99; @Brass99; @Fuc03] $$\begin{aligned}
\bar F_c^{(d)}&:=& \max_{\mathcal E \in EB } \int d\psi { \left \langle \psi \right |} U^\dagger \mathcal E ( {{\left | \psi \right \rangle}{ \left \langle \psi \right |}} ) U {\left | \psi \right \rangle} \nonumber \\
& =& \max_{\mathcal E \in EB } \int d\psi { \left \langle \psi \right |} \mathcal E ( {{\left | \psi \right \rangle}{ \left \langle \psi \right |}} ) {\left | \psi \right \rangle} = \frac{2}{d+1}, \nonumber \end{aligned}$$ where $\int d\psi$ denotes the Haar measure and the second equality follows from the fact that the total action of an EB channel followed by a unitary map can be described by a single EB channel. Hence, it is sufficient to consider the case in which the task is the identity transformation, i.e., ${\left | \psi ' \right \rangle} ={\left | \psi \right \rangle}$. For the uniform ensemble of input states, the state of Eq. (\[GammaM\]) becomes the so-called Werner state [@wer89; @Voll01], and is decomposed into $\Gamma [M]= \int d\psi {{\left | \psi \right \rangle}{ \left \langle \psi \right |}} \otimes {{\left | \psi \right \rangle}{ \left \langle \psi \right |}} =( \openone + f )/[d(d+1)] $, where $f:= \sum_{i,j} {{\left | u_i \right \rangle}{ \left \langle u_j \right |}} \otimes {{\left | u_j \right \rangle}{ \left \langle u_i \right |}}$ is the flip operator. Hence, we have $\| \Gamma [M] \| =\frac{2}{d(d+1)} $, and obtain the inequality $\max_{\mathcal E \in EB}\bar F \le d \| \Gamma [M] \| = \frac{2}{d+1} =\bar F_c^{(d)}$ through Eqs. (\[17\]), (\[18\]), and (\[19\]). The inequality is saturated by the EB channel $\mathcal E _{EB}( \rho )= \sum_j U {{\left | u_j \right \rangle}{ \left \langle u_j \right |}} \rho {{\left | u_j \right \rangle}{ \left \langle u_j \right |}} U^\dagger $.
This can be confirmed by the following calculation: $ \bar F = \int d\psi { \left \langle \psi \right |} U^\dagger \mathcal E_{EB} ( {{\left | \psi \right \rangle}{ \left \langle \psi \right |}} ) U {\left | \psi \right \rangle}= \tr [\sum_{j} {{\left | u_j \right \rangle}{ \left \langle u_j \right |}} \otimes {{\left | u_j \right \rangle}{ \left \langle u_j \right |}} ( \openone + d {{\left | \Phi_d \right \rangle}{ \left \langle \Phi_d \right |}} ) ]/[d(d+1)] = \frac{2}{d+1}$, where we used the relation $ \int d\psi {{\left | \psi \right \rangle}{ \left \langle \psi \right |}} \otimes {{\left | \psi^* \right \rangle}{ \left \langle \psi ^* \right |}} = (\openone + d {{\left | \Phi_d \right \rangle}{ \left \langle \Phi_d \right |}})/[d(d+1)] $ in the second step (see, e.g., [@Voll01]). Hence we obtain the tight classical limit. In the previous approaches [@Pop94; @Mass95; @Horo99; @Brass99; @Fuc03], the problem is treated as a type of state estimation in Refs. [@Pop94; @Mass95], is connected to a limit of optimal cloning in Ref. [@Brass99], and is addressed as a separability problem in Refs. [@Horo99; @Fuc03]. Our approach is somewhat close to that of Ref. [@Horo99] in the sense that the maximally entangled state plays a central role.
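The closed-form results of this example are easy to verify numerically. The sketch below (our own check; the dephasing channel is written in the computational basis) builds the flip operator $f$, confirms $d\,\|(\openone+f)/[d(d+1)]\|=2/(d+1)$, and checks by Monte Carlo sampling of Haar-random states that the basis-dephasing EB channel attains the average fidelity $2/(d+1)$.

```python
import numpy as np

def werner_bound(d):
    """d * || (1 + f) / [d(d+1)] || with f the flip (swap) operator."""
    f = np.zeros((d*d, d*d))
    for i in range(d):
        for j in range(d):
            f[i*d + j, j*d + i] = 1.0          # f |u_i u_j> = |u_j u_i>
    gamma_m = (np.eye(d*d) + f) / (d*(d + 1))
    return d * np.linalg.eigvalsh(gamma_m).max()

def dephasing_fidelity(d, samples=20000, seed=1):
    """Monte Carlo average of sum_j |<psi|u_j>|^4 over Haar-random |psi>,
    i.e. the average fidelity of rho -> sum_j <u_j|rho|u_j> |u_j><u_j|."""
    rng = np.random.default_rng(seed)
    psi = rng.normal(size=(samples, d)) + 1j*rng.normal(size=(samples, d))
    psi /= np.linalg.norm(psi, axis=1, keepdims=True)  # Haar-random states
    return np.mean(np.sum(np.abs(psi)**4, axis=1))
```

Both quantities reproduce $2/(d+1)$, i.e. $2/3$ for qubits and $1/2$ for qutrits, matching the classical limit $\bar F_c^{(d)}$ quoted above.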
In conclusion, we have presented an alternative proof of the quantum benchmark fidelity with respect to a Gaussian distributed set of coherent states. The main idea of the proof is to use the state-channel duality to associate the average fidelity with a two-mode squeezed state. The partial transpose is then utilized to recast the bound on the fidelity as a separability problem. Based on this method we have also presented a general theorem that produces a quantum-domain criterion associated with a set of measured fidelities. The theorem can be utilized in a wide class of experiments. The present method should be useful for further understanding the properties of quantum channels.
R.N. acknowledges support from JSPS.
M. A. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information*, (Cambridge University Press, Cambridge, 2000). C. Bennett *et al.*, 70, 1895 (1993). S. Popescu, Phys. Rev. Lett. 72, 797 (1994). S. Massar and S. Popescu, Phys. Rev. Lett. 74, 1259 (1995). M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. A 60, 1888 (1999).
A.I. Lvovsky, B.C. Sanders, and W. Tittel, Nature Photonics 3, 706 (2009); K. Hammerer, A.S. Sorensen, and E.S. Polzik, Rev. Mod. Phys. [**82,**]{} 1041 (2010); N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, **74,** 145 (2002); V. Scarani *et al.,* 81, 1301, (2009).
S. L. Braunstein, C.A. Fuchs, and J. Kimble, J. Mod. Opt **47,** 267 (2000). K. Hammerer, M.M. Wolf, E.S. Polzik, and J.I. Cirac, **94,** 150503 (2005). R. Namiki, M. Koashi, and N. Imoto, **101,** 100502 (2008). R. Namiki, 78, 032333 (2008). H. Häseler and N. Lütkenhaus, 80, 042304 (2009); 81, 060306(R) (2010). J. Calsamiglia, M. Aspachs, R. Munoz-Tapia, and E. Bagan, Phys. Rev. A 79, 050301(R) (2009); M. Owari *et al.*, New J. Phys. 10, 113014 (2008). H. Häseler, T. Moroder, and N. Lütkenhaus, 77, 032303 (2008). J. Rigas, O. Gühne and N. Lütkenhaus, **73,** 012341 (2006). T. Takano, M. Fuyama, R. Namiki, and Y. Takahashi, 78, 010307(R) (2008). C. A. Fuchs and M. Sasaki, Quantum Inf. Comput. 3, 377, (2003).
M. Horodecki, P. W. Shor, and M. B. Ruskai, Rev. Math. Phys. [**15**]{}, 629-641 (2003).
A. Furusawa *et al*., Science **282,** 706 (1998); S.L. Braunstein and H.J. Kimble, , 869 (1998).
B. Julsgaard *et al.,* 432, 482 (2004) M. Lobino, C. Kupchak, E. Figueroa, and A.I. Lvovsky, **102,** 203601 (2009).
S.L. Braunstein, and P. van Loock, **77,** 513 (2005); N.J. Cerf, G. Leuchs, and E.S. Polzik (eds), *Quantum Information with Continuous Variables of Atoms and Light*, (Imperial College Press, 2007).
J. Niset *et al*., Phys. Rev. Lett. **98,** 260404 (2007).
A. S. Holevo, arXiv:1004.0196.
A. Peres, Phys. Rev. Lett. 77, 1413 (1996).
L-M. Duan, G. Giedke, J.I. Cirac, and P. Zoller, 84 2722, (2000); G. Adesso and F. Illuminati: J. Phys. A 40, 7821 (2007). A. S. Holevo, Probl. Inf. Transm. 44, 3, (2008).
D. Bru$\ss$ and C. Macchiavello, Phys. Lett. A 253, 249 (1999).
R.F. Werner, Phys. Rev. A 40, 4277 (1989). K.G.H. Vollbrecht and R.F. Werner, Phys. Rev. A 64, 062307 (2001).
---
author:
- 'Kentarou Mawatari[^1], Bettina Oexl'
bibliography:
- 'library.bib'
title: 'Monophoton signals in light gravitino production at $e^+ e^-$ colliders'
---
Introduction {#sec:intro}
============
Monophoton events with missing energy ($\gamma+{\slashed{E}}$) are one of the promising search channels for new physics at both lepton and hadron colliders. So far no significant excess over the Standard Model (SM) background has been observed at LEP [@Abbiendi:2000hh; @Heister:2002ut; @Achard:2003tx; @Abdallah:2003np], at the Tevatron [@Acosta:2002eq; @Abazov:2008kp; @Aaltonen:2008hh], or at the LHC [@Chatrchyan:2012tea; @Aad:2012fw], constraining various kinds of models, e.g. supersymmetry (SUSY) and extra dimensions.
The monophoton signal in the context of SUSY models has been searched for in models where the gravitino is the lightest SUSY particle (LSP) with a very light mass, $m_{3/2}\sim{\cal O}(10^{-14}-10^{-12})$ GeV, at LEP [@Abbiendi:2000hh; @Heister:2002ut; @Achard:2003tx; @Abdallah:2003np] and the Tevatron [@Acosta:2002eq].[^2] In such scenarios there are two possible processes providing the signal: [*gravitino pair production*]{} (${\tilde G}{\tilde G}$) and [*neutralino-gravitino associated production*]{} ($\tilde\chi{\tilde G}$). The former leads to the monophoton final state via additional photon radiation, while the latter does so via the subsequent neutralino decay into a photon and an LSP gravitino.
The $\tilde\chi{\tilde G}$ associated production has been studied in some detail [@Fayet:1986zc; @Dicus:1990vm; @Lopez:1996gd; @Lopez:1996ey; @Baek:2002np; @Mawatari:2011cu], while the ${\tilde G}{\tilde G}(+\gamma)$ production has been investigated only in models where all SUSY particles except for the gravitino are too heavy to be produced on-shell [@Nachtmann:1984xu; @Brignole:1997sk; @Brignole:1998me].
Over the last few years, simulation tools in the [FeynRules]{} [@Christensen:2008py; @Duhr:2011se; @Alloul:2013bka] and [MadGraph]{} [@Alwall:2007st; @Alwall:2011uj] frameworks for processes involving gravitinos/goldstinos have been intensively developed [@Hagiwara:2010pi; @Mawatari:2011jy; @Christensen:2013aua], making phenomenological studies easier [@Mawatari:2011cu; @Argurio:2011gu; @deAquino:2012ru; @Mawatari:2012ui; @D'Hondt:2013ula; @Ferretti:2013wya]. It should be noted, however, that all the above recent studies (except [@Christensen:2013aua]) rely on the effective gravitino Lagrangian that contains only interactions with a single gravitino. To study ${\tilde G}{\tilde G}$ production, we need a consistent implementation of all the relevant interactions, including vertices involving two gravitinos as well as sgoldstinos, which are the superpartners of goldstinos and play an important role for unitarity [@Bhattacharya:1988ey; @Bhattacharya:1988zp]. We also note that the process contains a four-fermion interaction involving two Majorana particles, which is not supported by default in [MadGraph]{}, and therefore a special implementation is required.
In this article, we consider a scenario where the gravitino is the LSP and the lightest neutralino is the next-to-lightest SUSY particle (NLSP) and promptly decays into a photon and a gravitino. We revisit the monophoton plus missing energy signature for future $e^+e^-$ colliders $$\begin{aligned}
e^+e^-\to\gamma{\tilde G}{\tilde G}\to\gamma+{\slashed{E}},\end{aligned}$$ where, as mentioned, the ${\tilde G}{\tilde G}$ and $\tilde\chi{\tilde G}$ productions can be the dominant subprocesses. In order to study the whole parameter space for both processes, including all the relevant SUSY particles as well as sgoldstinos, we construct a simple SUSY QED model with a goldstino multiplet in the gravitino-goldstino equivalence limit by using the superspace formalism. We investigate the $e^+e^-\to{\tilde G}{\tilde G}$ process in detail to see how the cross section deviates from that in models where all SUSY particles except for the gravitino are assumed to be heavy and integrated out. We generate the signal samples as well as the SM background, and analyze the signal cross sections and the photon spectra to extract information on the masses of the neutralino and selectrons as well as the gravitino mass, which is related to the SUSY breaking scale.
We note in passing that, although our study in this article focuses on lepton colliders, all the results are applicable to $\gamma+{\slashed{E}}$ as well as jet$+{\slashed{E}}$ signals at hadron colliders; a detailed study will be reported elsewhere.
The paper is organized as follows: In Sect. \[sec:model\] we construct a SUSY QED model including interactions with (s)goldstinos in the superspace formalism. In Sect. \[sec:gldpair\], we explore the parameter space in the $e^+e^-\to{\tilde G}{\tilde G}$ process, and briefly review the $e^+e^-\to\tilde\chi{\tilde G}$ process. In Sect. \[sec:signal\], we simulate the $e^+e^-\to\gamma{\tilde G}{\tilde G}$ process as well as the SM background, and show that the signal cross sections and the photon spectra provide information on the masses of the neutralino and selectrons as well as the gravitino mass. Sect. \[sec:summary\] is devoted to our summary. In Appendix \[sec:lag\] we give the relevant Lagrangian in terms of the component fields. In Appendix \[sec:aa\], to validate our model implementation of sgoldstinos, we briefly discuss the $\gamma\gamma\to{\tilde G}{\tilde G}$ process.
SUSY QED with a goldstino superfield {#sec:model}
====================================
In phenomenologically viable SUSY models, the SUSY breaking is usually assumed to happen in a so-called hidden sector and then to be transmitted to the visible sector (i.e. the SM particles and their superpartners) through some mediation mechanism. As a result, one obtains effective couplings of the fields in the visible sector to the goldstino multiplet. To illustrate the interactions among the physical degrees of freedom of the goldstino multiplet and the fields in the visible sector, we discuss an $R$-parity conserving $N=1$ global supersymmetric model with the $U(1)_{\text{em}}$ gauge group in the superspace formalism. The model comprises one vector superfield $V=(A^{\mu},\lambda,D_V)$, describing a photon $A^{\mu}$ and a photino $\lambda$, and two chiral superfields $\Phi_{L}=({\tilde e}_{L},e_{L}, F_{L})$ and $\Phi_{R}=({\tilde e}^*_{R},e_{R}^{c},F_{R})$, containing the left- and right-handed electrons $e_{L/R}$ and selectrons ${\tilde e}_{L/R}$. In addition, we introduce a chiral superfield in the hidden sector $X=(\phi,{\tilde G},F_X)$, containing a sgoldstino $\phi$ and a goldstino ${\tilde G}$. $D_V$, $F_{L/R}$ and $F_X$ are auxiliary fields.
The Lagrangian of the visible sector is $$\begin{aligned}
\mathcal{L}_{\rm vis}=
&\sum_{i=L,R}{{\int d^4 \theta \,}}\,\Phi^{\dagger}_ie^{2g_eQ_iV}\Phi_i {\nonumber}\\
&+\frac{1}{4}\Big({{\int d^2 \theta \,}}\,W^{\alpha} W_{\alpha} +{{\text{h.c.}}}\Big),
\label{L_vis}\end{aligned}$$ where $g_e=\sqrt{4\pi\alpha}$ and $Q_i$ is the electric charge of $\Phi_i$, i.e. $Q_{R/L}=\pm 1$.[^3] $W_{\alpha}=-\frac{1}{4}\bar{D}\cdot\bar{D}D_{\alpha}V$ denotes the SUSY $U(1)_{\text{em}}$ field strength tensor with $D$ being the superderivative. $\mathcal{L}_{\rm vis}$ contains the kinetic terms as well as the gauge interactions.
The Lagrangian of the goldstino superfield is given by $$\begin{aligned}
\mathcal{L}_{X}=
&{{\int d^4 \theta \,}}\,X^{\dagger}X-\Big(F{{\int d^2 \theta \,}}\,X+{{\text{h.c.}}}\Big) {\nonumber}\\
&-\frac{c_X}{4}{{\int d^4 \theta \,}}\,(X^{\dagger}X)^2.
\label{L_hid}\end{aligned}$$ The first term gives the kinetic term of the (s)goldstino, while the second term is a source of SUSY breaking, with $F\equiv\langle F_X\rangle$ the vacuum expectation value (VEV) of $F_X$.[^4] The last term is non-renormalizable and provides interactions within the goldstino multiplet. This term also gives the sgoldstino mass term when the auxiliary field $F_X$ is replaced by its VEV, and hence we assign $c_X=m^2_{\phi}/F^2$.
The interactions among the (s)goldstinos and the fields in the visible sector as well as the soft mass terms for the selectrons and the photino are given by the effective Lagrangian $$\begin{aligned}
\mathcal{L}_{\text{int}}=&-\sum_{i=L,R}c_{\Phi_i}{{\int d^4 \theta \,}}\,
X^{\dagger}X\Phi_i^{\dagger}\Phi_i{\nonumber}\\
&-\Big(\frac{c_V}{4}{{\int d^2 \theta \,}}\, X W^{\alpha} W_{\alpha} +{{\text{h.c.}}}\Big),
\label{L_int}\end{aligned}$$ where we identify $c_{\Phi_i}=m^2_{{\tilde e}_i}/F^2$ and $c_V=2m_{\lambda}/F$.
We note that our model is minimal, yet sufficient to investigate the $\gamma+{\slashed{E}}$ signal at $e^+e^-$ colliders. We also note that our Lagrangian is model independent. However, studies of non-linear SUSY have revealed that additional model-dependent terms for four-point effective interactions involving two goldstinos and two matter fermions are allowed [@Luty:1998np; @Brignole:1997pe; @Clark:1997aa]. One possible source of such terms is $D$-type SUSY breaking [@Brignole:2003cm], which does not occur in our model.
Before turning to collider phenomenology, we briefly refer to the goldstino equivalence theorem. When the global SUSY is promoted to a local one, the goldstino is absorbed by the gravitino via the super-Higgs mechanism. In the high-energy limit, $\sqrt{s}\gg m_{3/2}$, which is always fulfilled for very light gravitinos at colliders, the interactions of the helicity-1/2 components are dominant, and can be well described by the goldstino interactions due to the gravitino-goldstino equivalence theorem [@Casalbuoni:1988kv; @Casalbuoni:1988qd]. We also note that, as a consequence of the super-Higgs mechanism, the gravitino mass is related to the scale of the SUSY breaking and the Planck scale, in a flat space-time, as [@Volkov:1973jd; @Deser:1977uq] $$\begin{aligned}
m_{3/2}=\frac{F}{\sqrt{3}\,{\overline{M}_{\rm Pl}}},
\label{grav_mass}\end{aligned}$$ where ${\overline{M}_{\rm Pl}}\equiv M_{\rm Pl}/\sqrt{8\pi}\approx 2.4\times10^{18}$ GeV is the reduced Planck mass. Therefore, low-scale SUSY breaking scenarios provide a gravitino LSP. In the following, we simply call the goldstino the gravitino and also call the photino the (lightest) neutralino $\tilde\chi$. We note that by construction we ignore other neutralino mixing scenarios. Since the zino and higgsino mixing gives rise to the $Z$ and $H$ decay modes of the neutralino [@Ambrosanio:1996jn], the overall $\gamma+{\slashed{E}}$ rate decreases, but the properties of the signal do not change. The extension of our model to the full SM gauge group, in order to study the general minimal supersymmetric SM (MSSM), is straightforward; see e.g. [@Antoniadis:2010hs].
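Numerically, Eq. (\[grav\_mass\]) ties the gravitino mass to the SUSY breaking scale. A minimal sketch of our own (the reduced Planck mass value is the only assumed input) reproduces the benchmark used below, where $m_{3/2}=2\times10^{-13}$ GeV corresponds to $\sqrt{F}\approx 918$ GeV:

```python
import math

M_PL_RED = 2.435e18  # reduced Planck mass in GeV (assumed input value)

def gravitino_mass(sqrt_f):
    """m_{3/2} = F / (sqrt(3) * Mbar_Pl), with sqrt_f = F**0.5 in GeV."""
    return sqrt_f**2 / (math.sqrt(3.0) * M_PL_RED)

def susy_breaking_scale(m_32):
    """Inverse relation: sqrt(F) in GeV for a given gravitino mass in GeV."""
    return math.sqrt(math.sqrt(3.0) * M_PL_RED * m_32)
```

For example, `susy_breaking_scale(2e-13)` gives roughly $918$ GeV, and inverting it recovers the input gravitino mass.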
For completeness, we show the relevant interaction Lagrangians of Eqs. (\[L\_vis\]), (\[L\_hid\]) and (\[L\_int\]) in terms of the component fields in Appendix \[sec:lag\]. We have implemented the above Lagrangian using the superspace module of [FeynRules 2]{} [@Alloul:2013bka], which provides the Feynman rules in terms of the physical component fields and the [UFO]{} model file [@Degrande:2011ua; @deAquino:2011ub] for matrix-element generators such as [MadGraph 5]{} [@Alwall:2011uj].
Light gravitino production at $e^+e^-$ colliders {#sec:gldpair}
================================================
Based on the model we constructed in the previous section, we investigate direct LSP gravitino production processes that lead to $\gamma+{\slashed{E}}$ at future $e^+e^-$ colliders. We consider the neutralino to be the NLSP and to promptly decay into a photon and a gravitino. The missing energy will be carried away by two gravitinos due to the $R$-parity conservation. Two distinct processes give rise to the signal: *gravitino pair production* (${\tilde G}{\tilde G}$) and *neutralino-gravitino associated production* (${\tilde{\chi}}{\tilde G}$), leading to the monophoton final state via additional photon radiation and via the subsequent neutralino decay, respectively. Their relative importance varies with the gravitino and neutralino masses as well as with kinematical cuts. In the following, a detailed discussion of the ${\tilde G}{\tilde G}$ production is presented, followed by a short review of the ${\tilde{\chi}}{\tilde G}$ production. Based on the cross sections, we fix the benchmark points for our simulation in the next section. We also comment on the validation of our model implementation in the last part of this section.
Gravitino pair production {#sec:grav_pair_prod}
-------------------------
Gravitino pair production gives rise to the monophoton plus missing energy signature when an additional photon is emitted [@Nachtmann:1984xu; @Brignole:1997sk]. Here we present the helicity amplitudes explicitly for the two-to-two process $$\begin{aligned}
e^-\Big(p_1,\frac{\lambda_1}{2}\Big)
+e^+\Big(p_2,\frac{\lambda_2}{2}\Big)
\to
{\tilde G}\Big(p_3,\frac{\lambda_3}{2}\Big)
+{\tilde G}\Big(p_4,\frac{\lambda_4}{2}\Big),
\label{epluseminus_gldgld}\end{aligned}$$ where the four momenta ($p_i$) and helicities ($\lambda_i=\pm1$) are defined in the center-of-mass (CM) frame of the $e^+e^-$ collision. In the massless limit of $e^{\pm}$, all amplitudes vanish when the electron and the positron have the same helicity, and hence we fix $\lambda_2=-\lambda_1$. The same helicity relation holds for the massless gravitinos in the final state, leading to $\lambda_4=-\lambda_3$. Since we will assume gravitinos with mass $m_{3/2}\sim\mathcal{O}(10^{-13}{\rm ~GeV})$, we neglect the gravitino mass in the phase space but keep it in the couplings. In addition, for $\lambda_1=+1$ ($\lambda_1=-1$), only the right-handed (left-handed) selectron contributes to the total amplitude. Therefore, the helicity amplitudes for the above process can be expressed as the sum of the four-point contact amplitude and the $t,u$-channel selectron exchange amplitudes (see also Fig. \[fig:diagram\]): $$\begin{aligned}
\mathcal{M}^{}_{\lambda_1,\lambda_3}
=\mathcal{M}^c_{\lambda_1,\lambda_3}
+\mathcal{M}^t_{\lambda_1,\lambda_3}
+\mathcal{M}^u_{\lambda_1,\lambda_3}.
\label{amp_ee}\end{aligned}$$ Using the straightforward Feynman rules for Majorana fermions given in [@Denner:1992vza], the above amplitudes are written, based on the effective gravitino Lagrangian in Appendix \[sec:lag\], as $$\begin{aligned}
i\mathcal{M}^c_{\lambda_1,\lambda_3}
&=-\frac{im^2_{{\tilde e}_{\lambda_1}}}{F^2}
\big( \hat{\mathcal{M}}^t_{\lambda_1,\lambda_3}
-\hat{\mathcal{M}}^u_{\lambda_1,\lambda_3}\big), \\
i\mathcal{M}^t_{\lambda_1,\lambda_3}
&=-\frac{im^4_{{\tilde e}_{\lambda_1}}}{F^2(t-m^2_{{\tilde e}_{\lambda_1}})}\,
\hat{\mathcal{M}}^t_{\lambda_1,\lambda_3}, \\
i\mathcal{M}^u_{\lambda_1,\lambda_3}
&=\frac{im^4_{{\tilde e}_{\lambda_1}}}{F^2(u-m^2_{{\tilde e}_{\lambda_1}})}\,
\hat{\mathcal{M}}^u_{\lambda_1,\lambda_3},\end{aligned}$$ where $m_{{\tilde e}_{\pm}}$ denotes the right/left-handed selectron mass for notational convenience. The reduced helicity amplitudes are $$\begin{aligned}
\hat{\mathcal{M}}^t_{\lambda_1,\lambda_3}
&={\bar{u}}(p_3,\lambda_3)P_{\lambda_1}u(p_1,\lambda_1) {\nonumber}\\
&\quad\times {\bar{v}}(p_2,-\lambda_1)P_{-\lambda_1}v(p_4,-\lambda_3),{\nonumber}\\
\hat{\mathcal{M}}^u_{\lambda_1,\lambda_3}
&={\bar{u}}(p_4,-\lambda_3)P_{\lambda_1}u(p_1,\lambda_1) {\nonumber}\\
&\quad\times{\bar{v}}(p_2,-\lambda_1)P_{-\lambda_1}v(p_3,\lambda_3),\end{aligned}$$ where $P_{\pm}=\frac{1}{2}(1\pm\gamma^5)$ is the chiral projection operator.
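For concreteness, the massless two-to-two kinematics entering these amplitudes can be checked numerically. The following sketch (our own cross-check, using the standard CM-frame parametrization) verifies that $t=(p_1-p_3)^2=-\frac{s}{2}(1-\cos\theta)$ and $u=(p_1-p_4)^2=-\frac{s}{2}(1+\cos\theta)$, so that $t+u=-s$:

```python
import numpy as np

def mandelstam_tu(sqrt_s, cos_theta):
    """Return (t, u) for massless 2 -> 2 scattering in the CM frame."""
    e = sqrt_s / 2.0
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    p1 = np.array([e, 0.0, 0.0, e])                        # incoming e-
    p3 = np.array([e, e*sin_theta, 0.0, e*cos_theta])      # outgoing gravitino
    p4 = np.array([e, -e*sin_theta, 0.0, -e*cos_theta])    # second gravitino
    metric = np.diag([1.0, -1.0, -1.0, -1.0])              # mostly-minus signature
    square = lambda q: float(q @ metric @ q)
    return square(p1 - p3), square(p1 - p4)
```

These invariants are the ones appearing in the selectron propagators above.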
![Samples of Feynman diagrams for gravitino pair production in $e^+e^-$ collisions, generated by (modified) [MadGraph 5]{} [@Alwall:2011uj]. [gld]{}, [el]{}, and [er]{} denote a gravitino, a left-handed selectron, and a right-handed selectron, respectively.[]{data-label="fig:diagram"}](diagram1 "fig:"){width="0.32\columnwidth"} ![](diagram2 "fig:"){width="0.32\columnwidth"} ![](diagram3 "fig:"){width="0.32\columnwidth"}
  $\lambda_1\lambda_3$   $\mathcal{M}^{}_{\lambda_1,\lambda_3}=\mathcal{M}^c+\mathcal{M}^t+\mathcal{M}^u$
  ---------------------- -------------------------------------------------------------------------------------------------------------------------------------------------
  $\pm\,\mp$             $-\dfrac{s\,m^2_{{\tilde e}_{\lambda_1}}}{2F^2}\,(1-\cos{\theta})\Big[1+\dfrac{m^2_{{\tilde e}_{\lambda_1}}}{t-m^2_{{\tilde e}_{\lambda_1}}}\Big]$
  $\pm\,\pm$             $-\dfrac{s\,m^2_{{\tilde e}_{\lambda_1}}}{2F^2}\,(1+\cos{\theta})\Big[1+\dfrac{m^2_{{\tilde e}_{\lambda_1}}}{u-m^2_{{\tilde e}_{\lambda_1}}}\Big]$

  \[tb:helamp\_ee\]
With the four momenta defined as $$\begin{aligned}
p_1^{\mu}=&\frac{\sqrt{s}}{2}(1,0,0,1),{\nonumber}\\
p_2^{\mu}=&\frac{\sqrt{s}}{2}(1,0,0,-1),{\nonumber}\\
p_3^{\mu}=&\frac{\sqrt{s}}{2}(1,\sin{\theta},0,\cos{\theta}),{\nonumber}\\
p_4^{\mu}=&\frac{\sqrt{s}}{2}(1,-\sin{\theta},0,-\cos{\theta}),\end{aligned}$$ we present the helicity amplitudes in Table \[tb:helamp\_ee\]. The total cross section is given by $$\begin{aligned}
\sigma
&=\frac{1}{192\pi F^4}
\sum_{\lambda=\pm}\frac{m_{{\tilde e}_{\lambda}}^4}{s^2}
\bigg[ s^3
-3m_{{\tilde e}_{\lambda}}^2 s^2
+9m_{{\tilde e}_{\lambda}}^4 s {\nonumber}\\
&\hspace*{9mm}+3m_{{\tilde e}_\lambda}^{6}
\Big( 1-\frac{m^2_{{\tilde e}_{\lambda}}}{s+m^2_{{\tilde e}_{\lambda}}}
+4\log{\frac{m_{{\tilde e}_{\lambda}}^2}{s+m_{{\tilde e}_{\lambda}}^2}}\Big)
\bigg].
\label{xsec_gldgld}\end{aligned}$$ Figure \[fig:xsec\_rs\] shows the total cross sections as a function of the CM energy $\sqrt{s}$ for three different selectron masses $m_{{\tilde e}_{\pm}}=0.5$, 1 and 2 TeV. The gravitino mass is fixed at $m_{3/2}=2\times10^{-13}$ GeV, which corresponds via Eq. (\[grav\_mass\]) to the SUSY breaking scale $\sqrt{F}\approx 918$ GeV. We stress that the cross section is extremely sensitive to the gravitino mass, since it scales as the inverse fourth power of the gravitino mass, $$\begin{aligned}
\sigma({\tilde G}{\tilde G})\propto 1/m_{3/2}^4.
\label{sig_gldgld}\end{aligned}$$ We also note that the cross section tends to be larger for the heavier selectrons since the couplings are proportional to $m_{{\tilde e}}^2$.
![Total cross sections of $e^+e^-\to{\tilde G}{\tilde G}$ as a function of the collision energy for different selectron masses $m_{{\tilde e}_{\pm}}=0.5,1,2$ TeV with $m_{3/2}=2\times10^{-13}$ GeV. The cross section in the low-energy limit is presented by a black solid line. The contribution without the four-point interaction for $m_{{\tilde e}_{\pm}}=1$ TeV is also shown as a reference.[]{data-label="fig:xsec_rs"}](xsec_rs){width="0.88\columnwidth"}
In the low-energy limit, $\sqrt{s}\ll m_{{\tilde e}_{\pm}}$, as one can easily see from the explicit amplitudes in Table \[tb:helamp\_ee\], a strong cancellation happens between $\mathcal{M}^c$ and $\mathcal{M}^{t,u}$, leading to a cross section scaling as [@Brignole:1997pe; @Brignole:1997sk] $$\begin{aligned}
\sigma=\frac{s^3}{160\pi F^4},
\label{s3}\end{aligned}$$ shown by a black solid line in Fig. \[fig:xsec\_rs\]. The contribution without the four-point amplitude is also shown as a reference, where one can see the effect of the huge cancellation. It should be noted here that the low-energy limit, which is always assumed in the previous studies [@Nachtmann:1984xu; @Brignole:1997sk; @Checchia:1999dr; @Gopalakrishna:2001iv], may not be a good approximation for future colliders, since the selectron masses should be smaller than or of the order of the SUSY breaking scale and might be within the reach of the CM energies. Therefore, one should consider the full expression of the cross section. Figure \[fig:xsec\_rs\] indeed shows that, as $\sqrt{s}$ increases, the effect of the selectron mass becomes significant. When the CM energy is larger than the selectron mass, $\sqrt{s}>m_{{\tilde e}}$, the contribution from $\mathcal{M}^c$ becomes more important than that from $\mathcal{M}^{t,u}$. We note that the current gravitino mass bound from ${\tilde G}{\tilde G}(+\gamma)$ production could be weakened if the selectrons are light enough.
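The closed-form cross section, its low-energy limit, and the $1/m_{3/2}^4$ scaling can all be cross-checked numerically against the helicity amplitudes of Table \[tb:helamp\_ee\]. The sketch below (our own verification script; the benchmark values of $F$ and the masses are assumptions) integrates the squared amplitudes over $\cos\theta$ with a $1/4$ initial-helicity average and a factor $1/2$ for the two identical gravitinos, and compares with Eq. (\[xsec\_gldgld\]) and with the limit of Eq. (\[s3\]):

```python
import math
import numpy as np

def sigma_closed(sqrt_s, m_sel, f_scale):
    """Closed-form sigma(e+ e- -> gravitino gravitino), Eq. (xsec_gldgld).
    m_sel = (m_eR, m_eL) in GeV, f_scale = F in GeV^2."""
    s = sqrt_s**2
    total = 0.0
    for m in m_sel:
        a = m**2
        total += (a**2 / s**2) * (
            s**3 - 3*a*s**2 + 9*a**2*s
            + 3*a**3 * (1 - a/(s + a) + 4*math.log(a/(s + a))))
    return total / (192*math.pi*f_scale**4)

def sigma_from_amplitudes(sqrt_s, m_sel, f_scale, n=80):
    """Gauss-Legendre integration of the squared helicity amplitudes of
    Table [tb:helamp_ee]; 1/4 for the initial-helicity average and 1/2
    for the identical final-state particles."""
    s = sqrt_s**2
    c, w = np.polynomial.legendre.leggauss(n)
    acc = 0.0
    for m in m_sel:                       # lambda_1 = +/- selects e_R / e_L
        a = m**2
        t = -s*(1 - c)/2
        u = -s*(1 + c)/2
        amp2 = (s*a/(2*f_scale**2))**2 * (
            (1 - c)**2 * (1 + a/(t - a))**2      # lambda_3 = -lambda_1
            + (1 + c)**2 * (1 + a/(u - a))**2)   # lambda_3 = +lambda_1
        acc += np.sum(w * amp2)
    return acc / (4 * 2 * 32*math.pi*s)

def sigma_low_energy(sqrt_s, f_scale):
    """Low-energy limit, Eq. (s3): sigma = s^3 / (160 pi F^4)."""
    return sqrt_s**6 / (160*math.pi*f_scale**4)
```

With $F\approx 8.4\times10^{5}~\mathrm{GeV}^2$ (i.e. $m_{3/2}\approx 2\times10^{-13}$ GeV), the two evaluations agree, the full result approaches the $s^3$ limit for $\sqrt{s}\ll m_{\tilde e}$, and rescaling $F\to 2F$ reduces the cross section by the expected factor of $16$.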
Finally, we briefly discuss the unitarity bound. The projected partial wave amplitude is given by $$\begin{aligned}
{\cal J}^J_{{\lambda}_1,{\lambda}_3}
=\frac{1}{32\pi}\int_{-1}^1d \cos{\theta}\,
d^J_{{\lambda}_1{\lambda}_3}(\theta)\,\mathcal{M}^{}_{{\lambda}_1,{\lambda}_3}\end{aligned}$$ with the Wigner $d$-function. Unitarity requires the lowest non-vanishing partial wave to satisfy $|{\cal J}^{J=1}_{{\lambda}_1,{\lambda}_3}|<1/2$, leading to an upper bound on the cross section, which is shown by a gray line in Fig. \[fig:xsec\_rs\]. One can see that lighter selectrons remedy the bad unitarity behavior. It should also be noted that, since we consider the effective model which is valid up to $m_{\rm SUSY}/F$, a higher energy requires a higher SUSY breaking scale (i.e. a heavier gravitino) or lighter SUSY particles for reliable predictions.
Neutralino-gravitino associated production
------------------------------------------
Gravitino production in association with a neutralino and the subsequent neutralino decay, $$\begin{aligned}
e^+e^-\to{\tilde{\chi}}{\tilde G}\to\gamma{\tilde G}{\tilde G},\end{aligned}$$ leads to the $\gamma+{\slashed{E}}$ signal already at the leading order [@Fayet:1986zc; @Dicus:1990vm; @Lopez:1996gd; @Lopez:1996ey; @Baek:2002np; @Mawatari:2011cu].[^5] We refer to the recent study [@Mawatari:2011cu] for a detailed discussion.
Here, we briefly point out two important features of this process. First, unlike the gravitino pair production, the total cross section is inversely proportional to the square of the gravitino mass $$\begin{aligned}
\sigma({\tilde{\chi}}{\tilde G})\propto 1/m_{3/2}^2,
\label{sig_n1gld}\end{aligned}$$ as seen in the left plot in Fig. \[fig:mgld\_dep\], and hence the sensitivity to the gravitino mass is weaker than in ${\tilde G}{\tilde G}$ production. The cross section also depends on the masses of the selectrons exchanged in the $t$- and $u$-channels, and increases for heavier selectrons, as in ${\tilde G}{\tilde G}$ production.
[Fig. \[fig:diag\]: representative Feynman diagrams for the $e^+e^-\to\gamma{\tilde G}{\tilde G}$ process (six panels).]
[Fig. \[fig:mgld\_dep\]: total cross sections of $e^+e^-\to\gamma{\tilde G}{\tilde G}$ as a function of the gravitino mass (left) and of the neutralino mass (right).]
Second, since the ${\tilde{\chi}}\to\gamma{\tilde G}$ decay is isotropic, the photon distribution is given by purely kinematical effects of the decaying neutralino. The partial decay width for a photino-like neutralino is given by $$\begin{aligned}
\Gamma({\tilde{\chi}}\to\gamma{\tilde G})=\frac{m^5_{{\tilde{\chi}}}}{16\pi F^2}.\end{aligned}$$ For instance, for $m_{{\tilde{\chi}}}=750$ GeV and $m_{3/2}=2\times10^{-13}$ GeV (i.e. $\sqrt{F}\approx 918$ GeV), the width is 6.6 GeV. With the neutralino being the NLSP, the branching ratio is unity, $B({\tilde{\chi}}\to\gamma{\tilde G})=1$.
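The quoted width can be reproduced directly from the formula above; this short Python check uses the benchmark values given in the text ($m_{{\tilde{\chi}}}=750$ GeV, $\sqrt{F}\approx 918$ GeV).

```python
import math

def gamma_chi_to_photon_gravitino(m_chi, sqrt_F):
    """Partial width Gamma(chi -> photon + gravitino) = m_chi^5 / (16 pi F^2),
    for a photino-like neutralino (all quantities in GeV)."""
    F = sqrt_F**2
    return m_chi**5 / (16.0 * math.pi * F**2)

# Benchmark from the text: m_chi = 750 GeV, sqrt(F) ~ 918 GeV
width = gamma_chi_to_photon_gravitino(750.0, 918.0)  # ~ 6.6 GeV, as quoted
```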
Physics parameters
------------------
To examine a viable SUSY parameter space for the $\gamma+{\slashed{E}}$ signal at future $e^+e^-$ colliders, we present in Fig. \[fig:mgld\_dep\] the total cross sections of $e^+e^-\to\gamma{\tilde G}{\tilde G}$ at $\sqrt{s}=1$ TeV as a function of the gravitino mass (left) and the neutralino mass (right), where we fix the left- and right-handed selectron masses at 2 TeV. The representative Feynman diagrams for the process are depicted in Fig. \[fig:diag\]. The contributions of the ${\tilde G}{\tilde G}$ and ${\tilde{\chi}}{\tilde G}$ productions are separately shown by red and blue lines, respectively.
As discussed above and shown in the left plot in Fig. \[fig:mgld\_dep\], the cross sections of both subprocesses depend strongly on the gravitino mass.
The monophoton signal from the gravitino pair (${\tilde G}{\tilde G}+\gamma$) is suppressed by the QED coupling $\alpha$ with respect to the two-to-two process and depends strongly on the kinematical cuts due to the soft and collinear singularities of the initial state radiation. The dependence on the photon-energy cut is presented in the left plot in Fig. \[fig:mgld\_dep\]. On the other hand, since the energy of the photons coming from the neutralino decay is restricted to $$\begin{aligned}
\frac{m^2_{{\tilde{\chi}}}}{2\sqrt{s}}<E_{\gamma}<\frac{\sqrt{s}}{2},
\label{photon_energy}\end{aligned}$$ the ${\tilde{\chi}}{\tilde G}$ signal is not affected by lower cuts on the photon energy unless the neutralino is light.
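The kinematic window above is easy to evaluate explicitly; the small illustrative Python check below uses the benchmark neutralino masses (650, 750, 850 GeV) and $\sqrt{s}=1$ TeV from the text.

```python
def photon_energy_window(m_chi, sqrt_s):
    """Allowed energy range of the decay photon (all energies in GeV):
    m_chi^2 / (2 sqrt(s)) < E_gamma < sqrt(s) / 2."""
    return m_chi**2 / (2.0 * sqrt_s), sqrt_s / 2.0

SQRT_S = 1000.0  # sqrt(s) = 1 TeV, as in the text
# benchmark neutralino masses used later in the text
edges = {m_chi: photon_energy_window(m_chi, SQRT_S)
         for m_chi in (650.0, 750.0, 850.0)}
# e.g. for m_chi = 750 GeV the decay photon carries between ~281 GeV and
# 500 GeV, far above the minimal detection cut E_gamma > 0.03 sqrt(s) = 30 GeV
```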
In the following, we impose the minimal cuts for the detection of photons as $$\begin{aligned}
E_{\gamma}>0.03\,\sqrt{s},\quad |\eta_{\gamma}|<2,
\label{minimal_cuts}\end{aligned}$$ and fix the gravitino mass at $2\times 10^{-13}$ GeV, which lies above the current exclusion limit by the jet$+{\slashed{E}}$ search at the LHC for the gravitino production in association with a gluino or a squark with masses around 500 GeV [@ATLAS:2012zim].[^6] [^7]
The right plot of Fig. \[fig:mgld\_dep\] shows the neutralino mass dependence of the full signal cross section with the minimal cuts. While the ${\tilde G}{\tilde G}$ contribution is independent of the neutralino mass, the contribution from ${\tilde{\chi}}{\tilde G}$ production is strongly suppressed as the neutralino mass approaches the CM energy and the phase space closes. Therefore, the dominant subprocess can differ for different neutralino masses, giving rise to distinctive photon spectra. It should be noted that the interference between the two subprocesses is very small unless the neutralino width is too large. We verified this numerically by computing the two subprocesses separately and checking that their sum reproduces the full $e^+e^-\to\gamma{\tilde G}{\tilde G}$ cross section, as in the figure. We suppress a possible contribution from the sgoldstinos by taking their masses to be too heavy for on-shell production.[^8] We note that, if they are lighter than the $e^+e^-$ collision energy, sgoldstino production in association with a photon and the subsequent decay contributes to the $\gamma{\tilde G}{\tilde G}$ final state. In Appendix \[sec:aa\] we briefly discuss the effect of sgoldstinos in the $\gamma\gamma\to{\tilde G}{\tilde G}$ process.
In the following, we focus on three different neutralino masses which exemplify different distributions. First, we fix the neutralino mass at 750 GeV so that $\sigma({\tilde{\chi}}{\tilde G})\sim\sigma({\tilde G}{\tilde G}+\gamma)$. We subsequently take a lighter (heavier) neutralino at 650 (850) GeV so that the ${\tilde{\chi}}{\tilde G}$ (${\tilde G}{\tilde G}$) production is dominant.
Technical setup and validation
------------------------------
Before moving to the simulation, let us comment on our model implementation and its validation. As mentioned in Sect. \[sec:intro\], the current [MadGraph 5]{} (v2.0.2) [@Alwall:2011uj] does not support four-fermion vertices involving more than one Majorana particle, and hence does not accept our [UFO]{} model file [@Degrande:2011ua; @deAquino:2011ub] generated with [FeynRules]{} [@Alloul:2013bka]. Therefore, we first modified [MadGraph 5]{} so that the model can be imported. Second, after generating the process, the corresponding four-point contact amplitudes must be modified by hand to have correct fermion flows. We have explicitly checked our numerical results for the total and differential cross sections against the analytic results for the two-to-two process in Sect. \[sec:grav\_pair\_prod\] as well as for the two-to-three process in the low-energy limit, $\sqrt{s}\ll m_{{\tilde e},{\tilde{\chi}},S,P}$, given in [@Brignole:1997sk]. We have also checked precise agreement for the ${\tilde{\chi}}{\tilde G}$ process with the previous model implementations [@Mawatari:2011jy; @Argurio:2011gu; @Mawatari:2012ui], which were constructed from the effective gravitino Lagrangian in terms of the component fields, i.e. not by using the superspace module. We note that our model implementation allows us to generate the different contributing processes, i.e. ${\tilde G}{\tilde G}$ and ${\tilde{\chi}}{\tilde G}$, within one event simulation.
Monophoton plus missing energy {#sec:signal}
==============================
We now perform the simulation of monophoton events with missing energy for a future $e^+e^-$ collider. An irreducible SM background comes from $e^+e^-\to\gamma\nu\bar{\nu}$. To remove contributions from $e^+e^-\to\gamma Z\to\gamma\nu\bar{\nu}$, we impose the $Z$-peak cut $$\begin{aligned}
E_{\gamma}<\frac{s-m_Z^2}{2\sqrt{s}}-5\Gamma_Z,
\label{Zpeak_cut}\end{aligned}$$ in addition to the minimal cuts. The background from the $t$-channel $W$-exchange process, which is the most significant one, can be efficiently reduced by using a positively polarized $e^-$ beam and a negatively polarized $e^+$ beam.
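For orientation, the numerical value of the $Z$-peak cut at $\sqrt{s}=1$ TeV can be sketched as follows; the $Z$ mass and width used here are standard PDG-type values, not taken from this article.

```python
import math

M_Z = 91.1876      # GeV, Z boson mass (PDG-type value, assumed)
GAMMA_Z = 2.4952   # GeV, Z boson total width (PDG-type value, assumed)

def z_peak_cut(sqrt_s, n_widths=5.0):
    """Upper photon-energy cut removing e+ e- -> gamma Z(-> nu nubar):
    E_gamma < (s - m_Z^2) / (2 sqrt(s)) - 5 Gamma_Z."""
    s = sqrt_s**2
    return (s - M_Z**2) / (2.0 * sqrt_s) - n_widths * GAMMA_Z

cut = z_peak_cut(1000.0)  # ~ 483 GeV at sqrt(s) = 1 TeV
```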
In Table \[tb:cross\_section\], the signal cross sections of each subprocess, ${\tilde{\chi}}{\tilde G}$ and ${\tilde G}{\tilde G}$, as well as the SM background at $\sqrt{s}=1$ TeV are presented without and with polarized $e^{\pm}$ beams, where we take the beam polarization $P_{e^{\pm}}\, (|P_{e^{\pm}}|\leq1)$ as[^9] $$\begin{aligned}
(P_{e^-},P_{e^+})=(0.9,-0.6),
\label{polarization}\end{aligned}$$ and apply the kinematical cuts given above. For the SUSY signal, we take the three benchmark neutralino masses with the gravitino mass fixed at $2\times10^{-13}$ GeV for $m_{{\tilde e}}=1$ and 2 TeV. As discussed in the previous section, heavier selectrons give higher cross sections for both subprocesses. Since the signal cross section with $e^{\pm}$ beam polarizations is given by $$\begin{aligned}
\sigma(P_{e^-},P_{e^+}) &= 2\sum_{{\lambda}_1}
\Big(\frac{1+P_{e^-}{\lambda}_1}{2}\Big)\Big(\frac{1-P_{e^+}{\lambda}_1}{2}\Big)\,
{\sigma}_{{\lambda}_1},\end{aligned}$$ the signal cross sections are enhanced by a factor of 1.54 with the above polarizations. On the other hand, the SM background is significantly reduced.
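The factor of 1.54 follows from the polarization formula above under the assumption that the signal cross section is the same for both $e^-$ helicities ($\sigma_{+}=\sigma_{-}$), which is what reproduces the quoted number; a minimal Python check:

```python
def polarized_xsec(P_em, P_ep, sigma_hel):
    """Implements sigma(P_e-, P_e+) =
    2 * sum_lambda1 [(1 + P_e- lambda1)/2] [(1 - P_e+ lambda1)/2] sigma_lambda1,
    with sigma_hel a dict {lambda1: sigma_lambda1}."""
    return sum(2.0 * 0.5 * (1 + P_em * lam) * 0.5 * (1 - P_ep * lam) * sig
               for lam, sig in sigma_hel.items())

# Helicity-symmetric signal, normalized so the unpolarized cross section is 1:
enhancement = polarized_xsec(0.9, -0.6, {+1: 1.0, -1: 1.0})  # -> 1.54

# Consistency check against Table [tb:cross_section]:
# unpolarized sigma(chi gravitino) = 6.0 fb -> ~9.2 fb with polarized beams
chi_gld_pol = 6.0 * enhancement
```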
Figure \[fig:photon\_energy\] presents the photon energy $E_{\gamma}$ (left) and rapidity $\eta_{\gamma}$ (right) distributions for the three signal benchmarks and for the SM background. The signal energy spectra show two distinct features. First, there is a peak in the low-energy region, which arises from the ${\tilde G}{\tilde G}$ production process since initial state radiation dominates, as in the SM background. We also note that the low-energy spectra are independent of the neutralino mass. Second, there is a flat contribution in the high-energy region coming from ${\tilde{\chi}}{\tilde G}$ production, reflecting the isotropic neutralino decay. This contribution becomes smaller for heavier neutralinos (see also Table \[tb:cross\_section\]), and its lower edge allows us to extract the neutralino mass.
The rapidity distributions also distinguish the signal from the SM background. The photon coming from ${\tilde G}{\tilde G}$ production gives a flat $\eta_{\gamma}$ distribution, while the photon coming from the neutralino decay is emitted mainly in the central region (see [@Mawatari:2011cu] for a detailed discussion of the selectron mass dependence). In contrast, the photons of the SM background are emitted in the forward region.
  ----------------------------------------------------------------------------------------------------------------------------------------------------------
                                                                $m_{{\tilde e}}=1$ TeV                                $m_{{\tilde e}}=2$ TeV
  $(P_{e^-},P_{e^+})$   $m_{{\tilde{\chi}}}$ \[GeV\]   ${\tilde{\chi}}{\tilde G}$   ${\tilde G}{\tilde G}$   ${\tilde{\chi}}{\tilde G}$   ${\tilde G}{\tilde G}$   SM bkg \[fb\]
  ----------------------------------------------------------------------------------------------------------------------------------------------------------
  $(0,0)$               650                            19.7                                                  49.2
                        750                             6.0                         10.4                     15.8                         21.1                 1452
                        850                             1.0                                                   2.5
  $(0.9,-0.6)$          650                            30.4                                                  75.8
                        750                             9.2                         16.1                     24.3                         32.7                 64.9
                        850                             1.5                                                   3.4
  ----------------------------------------------------------------------------------------------------------------------------------------------------------
\[tb:cross\_section\]
![Photon energy (left) and rapidity (right) distributions for $e^+e^-\to\gamma{\tilde G}{\tilde G}$ at $\sqrt{s}=1$ TeV for different neutralino masses with $m_{3/2}=2\times10^{-13}$ GeV and $m_{{\tilde e}_{\pm}}=2$ TeV. The kinematical cuts and the beam polarizations described in the text are applied. The SM background is also shown.[]{data-label="fig:photon_energy"}](dis_e "fig:"){width="0.492\columnwidth"} ![Photon energy (left) and rapidity (right) distributions for $e^+e^-\to\gamma{\tilde G}{\tilde G}$ at $\sqrt{s}=1$ TeV for different neutralino masses with $m_{3/2}=2\times10^{-13}$ GeV and $m_{{\tilde e}_{\pm}}=2$ TeV. The kinematical cuts and the beam polarizations described in the text are applied. The SM background is also shown.[]{data-label="fig:photon_energy"}](dis_r "fig:"){width="0.492\columnwidth"}
![Normalized photon energy distributions for $e^+e^-\to\gamma{\tilde G}{\tilde G}$ at $\sqrt{s}=1$ TeV for $m_{{\tilde e}_{\pm}}=0.5$, 2, 10 TeV and for the high-mass limit, where the kinematical cuts are applied. The ratios to the case in the high-mass limit are also shown.[]{data-label="fig:ratio_norm_xs"}](dis_sel){width="0.7\columnwidth"}
Finally, we discuss the selectron mass dependence of the low-energy peak, which arises purely from ${\tilde G}{\tilde G}$ production. As discussed in Sect. \[sec:grav\_pair\_prod\], the total rate of the $e^+e^-\to{\tilde G}{\tilde G}$ process depends on the selectron masses. In addition, the photon spectrum becomes harder for lighter selectrons; see Fig. \[fig:ratio\_norm\_xs\], where we show the normalized photon energy distributions for $m_{{\tilde e}_{\pm}}=0.5$, 2, 10 TeV and for the $\sqrt{s}/{m_{{\tilde e}}}=0$ limit [@Brignole:1997sk]. The distribution for $m_{{\tilde e}_{\pm}}=10$ TeV is in good agreement with the one in the high-mass limit. We note that in this limit the $e^+e^-\to\gamma{\tilde G}{\tilde G}$ differential cross section is well approximated by the $e^+e^-\to{\tilde G}{\tilde G}$ cross section multiplied by the standard photon splitting function [@Brignole:1997sk].
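The factorized description in the heavy-selectron limit can be sketched numerically. The Python fragment below is only illustrative: it folds the leading-log ISR splitting function (a textbook form, not taken from this article) with the $s^3$ scaling of the two-to-two cross section, dropping the overall $1/F^4$ normalization.

```python
import math

ALPHA = 1.0 / 137.0  # fine-structure constant (illustrative value)
M_E = 0.000511       # electron mass in GeV

def photon_spectrum(x, sqrt_s):
    """Heavy-selectron-limit sketch of dsigma/dx for x = 2 E_gamma / sqrt(s):
    leading-log ISR splitting function times the 2->2 gravitino-pair cross
    section at the reduced energy, sigma(s_hat) ~ s_hat^3 (normalization
    dropped)."""
    s = sqrt_s**2
    splitting = ALPHA / (2.0 * math.pi) * (1 + (1 - x)**2) / x * math.log(s / M_E**2)
    return splitting * ((1 - x) * s)**3

# the spectrum falls steeply with the photon energy fraction,
# reproducing the low-energy ISR peak seen in the signal distributions
lo, hi = photon_spectrum(0.06, 1000.0), photon_spectrum(0.5, 1000.0)
```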
Summary {#sec:summary}
=======
Direct gravitino production can be observed in current and future collider experiments if the gravitino is very light. In this article, we revisited gravitino pair production and neutralino-gravitino associated production, and studied the $\gamma+{\slashed{E}}$ signal for future $e^+e^-$ colliders.
By using the superspace formalism, we constructed a simple SUSY QED model that allows us to study the parameter space for both processes, and implemented the model in the [FeynRules]{} and [MadGraph 5]{} frameworks. We note that special implementations are needed to treat the Majorana four-fermion interaction in [MadGraph 5]{}.
We discussed the parameter dependence of the signal cross sections in detail, and showed that the relative importance of the two signal processes varies with the gravitino and neutralino masses as well as with the kinematical cuts.
We performed the event simulation for the SUSY signal as well as the SM background, taking into account the signal selection cuts and the beam polarizations, and showed that the photon spectra from the two subprocesses are very distinctive. This is because the photon coming from ${\tilde G}{\tilde G}$ production is mostly initial state radiation, while the ${\tilde{\chi}}{\tilde G}$ associated production process leads to an energetic photon from the neutralino decay. We expect that future $e^+e^-$ colliders could explore the parameter space around our benchmark points and hence provide information on the masses of the relevant SUSY particles as well as the SUSY breaking scale.
Before closing, we note that the extension of our simple SUSY QED model to the general MSSM is straightforward and applicable to hadron colliders.
Acknowledgments {#acknowledgments .unnumbered}
===============
We wish to thank B. Fuks and O. Mattelaer for their help with [FeynRules]{} and [MadGraph5]{} and F. Maltoni and C. Petersson for useful discussions. We also thank Y. Takaesu and P. Tziveloglou for helpful discussions and comments on the draft. This work has been supported in part by the Belgian Federal Science Policy Office through the Interuniversity Attraction Pole P7/37, and by the Strategic Research Program “High Energy Physics” and the Research Council of the Vrije Universiteit Brussel.
Lagrangian in terms of the component fields {#sec:lag}
===========================================
In Sect. \[sec:model\] we gave the Lagrangian of our model in terms of the superfields. In this appendix, for completeness, we present the corresponding interaction Lagrangian in terms of the component fields. The relevant terms of the effective interaction Lagrangian among gravitinos (i.e. goldstinos) $\psi_{{\tilde G}}$ and fields in the visible sector, that is, right- and left-handed selectron $\phi_{{\tilde e}_{\pm}}$, electron $\psi_e$, photino-like neutralino $\psi_{{\tilde{\chi}}}$,[^10] and photon $A^{\mu}$ are given in the four-component notation by $$\begin{aligned}
\mathcal{L}_{{\tilde G}}&=\mp\frac{im_{{\tilde e}_{\pm}}^2}{F}({{\bar\psi}}_{{\tilde G}}P_{\pm}\psi_e\phi^{*}_{{\tilde e}_{\pm}}-{{\bar\psi}}_eP_{\mp}\psi_{{\tilde G}}\phi_{{\tilde e}_{\pm}}){\nonumber}\\
&\quad-\frac{m_{{\tilde{\chi}}}}{4\sqrt{2}F}\,{{\bar\psi}}_{{\tilde G}}[\gamma^{\mu},\gamma^{\nu}]\psi_{{\tilde{\chi}}}F_{\mu\nu}{\nonumber}\\
&\quad-\frac{m^2_{{\tilde e}_{\pm}}}{F^2}\,{{\bar\psi}}_eP_{\mp}\psi_{{\tilde G}}\,{{\bar\psi}}_{{\tilde G}}P_{\pm}\psi_e, \end{aligned}$$ where $P_{\pm}=\frac{1}{2}(1\pm\gamma^5)$ is the chiral projection operator and $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ the photon field strength tensor. The interactions among sgoldstino $\phi=\frac{1}{\sqrt{2}}(\phi_S+i\phi_P)$ and gravitino or photon are given by $$\begin{aligned}
\mathcal{L}_{S,P}&=
-\frac{m^2_{\phi}}{2\sqrt{2}F}\,{{\bar\psi}}_{{\tilde G}}(\phi_S+i\gamma^5\phi_P)\psi_{{\tilde G}}{\nonumber}\\
&\quad+\frac{m_{{\tilde{\chi}}}}{2\sqrt{2}F}(\phi_SF^{\mu\nu}F_{\mu\nu}-\phi_PF^{\mu\nu}\tilde{F}_{\mu\nu}),\end{aligned}$$ where $\tilde{F}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}F^{\alpha\beta}$ is the dual tensor with $\epsilon_{0123}=+1$. All other relevant terms in the visible sector are $$\begin{aligned}
\mathcal{L}_{\rm vis}&=g_e{{\bar\psi}}_e\gamma_{\mu}\psi_eA^{\mu}
+ig_e(\phi^{*}_{{\tilde e}_{\pm}}\overleftrightarrow{\partial_{\mu}}\phi_{{\tilde e}_{\pm}})A^{\mu}
{\nonumber}\\
&\quad\mp\sqrt{2}g_e({{\bar\psi}}_{{\tilde{\chi}}}P_{\pm}\psi_e\phi^*_{{\tilde e}_{\pm}} +
{{\bar\psi}}_eP_{\mp}\psi_{{\tilde{\chi}}}\phi_{{\tilde e}_{\pm}}),\end{aligned}$$ where $g_e=\sqrt{4\pi\alpha}$ is the QED coupling constant.
We note that we follow the convention of the SUSY Les Houches accord [@Skands:2003cj] for the covariant derivative and the gaugino and gravitino field definitions. To translate our Lagrangian into the [FeynRules]{} convention, one has to change the coupling as $g_e\to-g_e$, and redefine the fields as $\psi_{{\tilde{\chi}}}\to -\psi_{{\tilde{\chi}}}$ and $\psi_{{\tilde G}}\to -\psi_{{\tilde G}}$.
Gravitino pair production in $\gamma \gamma$ collisions {#sec:aa}
=======================================================
In this article we assumed that the sgoldstinos are too heavy to be produced on-shell, and hence are irrelevant to the $e^+e^-\to\gamma{\tilde G}{\tilde G}$ process. However, our model, constructed in the superspace formalism, has no limitation in studying processes involving sgoldstinos. In this appendix, to validate our model implementation of the sgoldstinos, we discuss gravitino pair production in $\gamma\gamma$ collisions, where the sgoldstinos play an important role for unitarity [@Bhattacharya:1988ey; @Bhattacharya:1988zp].[^11]
As in Sect. \[sec:grav\_pair\_prod\], we present the helicity amplitudes explicitly for the process $$\begin{aligned}
\gamma\left(p_1,\lambda_1\right) + \gamma\left(p_2,\lambda_2\right)
\to
{\tilde G}\Big(p_3,\frac{\lambda_3}{2}\Big)
+{\tilde G}\Big(p_4,\frac{\lambda_4}{2}\Big),
\label{xs_aa_gldgld} \end{aligned}$$ where the four momenta ($p_i$) and helicities ($\lambda_i=\pm1$) are defined in the center-of-mass (CM) frame of the $\gamma\gamma$ collision. As seen in Fig. \[fig:diagramaa\], in our SUSY QED model, the helicity amplitudes are given by the sum of the $s$-channel scalar ($S$) and pseudoscalar ($P$) sgoldstino amplitudes and the $t,u$-channel photino-like neutralino exchange amplitudes: $$\begin{aligned}
&\mathcal{M}_{\lambda_1\lambda_2,\lambda_3\lambda_4}
=\epsilon_{\mu}(p_1,\lambda_1)\epsilon_{\nu}(p_2,\lambda_2) {\nonumber}\\
&\hspace*{1cm}\times\big( \mathcal{M}^{S,\mu\nu}_{\lambda_3\lambda_4}
+\mathcal{M}^{P,\mu\nu}_{\lambda_3\lambda_4}
+\mathcal{M}^{t,\mu\nu}_{\lambda_3\lambda_4}
+\mathcal{M}^{u,\mu\nu}_{\lambda_3\lambda_4}\big),\end{aligned}$$ where the photon wavefunctions are factorized. Using the straightforward Feynman rules for Majorana fermions [@Denner:1992vza], the above amplitudes are written, based on the effective Lagrangian in Appendix \[sec:lag\], as $$\begin{aligned}
&i\mathcal{M}^{S,\mu\nu}_{\lambda_3\lambda_4}=
-\frac{im_{{\tilde{\chi}}}m^2_{\phi}}{F^2}\frac{1}{s-m_{\phi}^2}\,
(p_1\cdot p_2\,g^{\mu\nu}-p_2^{\mu}p_1^{\nu}) {\nonumber}\\
&\hspace*{1.5cm}\times{\bar{u}}(p_3,\lambda_3)v(p_4,\lambda_4),\\
&i\mathcal{M}^{P,\mu\nu}_{\lambda_3\lambda_4}=
-\frac{im_{{\tilde{\chi}}}m^2_{\phi}}{F^2}\frac{1}{s-m_\phi^2}\,
\epsilon^{\mu\nu\alpha\beta}\,{p_2}_{\alpha}{p_1}_{\beta} {\nonumber}\\
&\hspace*{1.5cm}\times{\bar{u}}(p_3, \lambda_3)i\gamma^5v(p_4,\lambda_4),\\
&i\mathcal{M}^{t,\mu\nu}_{\lambda_3\lambda_4}=
-\frac{im^2_{{\tilde{\chi}}}}{8F^2}\frac{1}{t-m^2_{{\tilde{\chi}}}} {\nonumber}\\
&\hspace*{0.6cm}\times{\bar{u}}(p_3,\lambda_3)[\gamma^{\mu},\slashed{p}_1]
(\slashed{p}_1-\slashed{p}_3-m_{{\tilde{\chi}}})
[\slashed{p}_2,\gamma^{\nu}]v(p_4,\lambda_4), \\
&i\mathcal{M}^{u,\mu\nu}_{\lambda_3\lambda_4}=
\frac{im^2_{{\tilde{\chi}}}}{8F^2}\frac{1}{u-m^2_{{\tilde{\chi}}}} {\nonumber}\\
&\hspace*{0.6cm}\times{\bar{u}}(p_3,\lambda_3)[\gamma^{\mu},\slashed{p}_2]
(\slashed{p}_1-\slashed{p}_4+m_{{\tilde{\chi}}})
[\slashed{p}_1,\gamma^{\nu}]v(p_4,\lambda_4),\end{aligned}$$ where the common sgoldstino mass is taken as $m_{S,P}=m_{\phi}$. The reduced helicity amplitudes $\hat{\mathcal{M}}$ are defined as $$\begin{aligned}
\mathcal{M}_{\lambda_1\lambda_2,\lambda_3\lambda_4}
=\frac{m_{{\tilde{\chi}}}s^{3/2}}{2F^2}\,
\hat{\mathcal{M}}_{\lambda_1\lambda_2,\lambda_3\lambda_4},
\label{red_hel_amp}\end{aligned}$$ and presented in Table \[tb:helamp\_aa\]. The analytic expression for the total cross section can be found in [@Brignole:1996fn], and our numerical results agree well with it.
![Feynman diagrams for gravitino pair production in $\gamma\gamma$ collisions, generated by [MadGraph 5]{} [@Alwall:2011uj]. [gld]{}, [sg]{}, [pg]{}, and [n1]{} denote a gravitino, a scalar sgoldstino, a pseudoscalar sgoldstino, and a neutralino, respectively.[]{data-label="fig:diagramaa"}](aa_gldgld){width="1\columnwidth"}
  ------------------------------------------------------------------------------------------------------------------------------
  $\lambda_1\lambda_2\,\lambda_3\lambda_4$   $\hat{\mathcal{M}}^S+\hat{\mathcal{M}}^P+\hat{\mathcal{M}}^t+\hat{\mathcal{M}}^u$
  ------------------------------------------------------------------------------------------------------------------------------
  $\pm\pm\,\pm\pm$   $\mp\Big[\dfrac{m^2_{\phi}}{s-m^2_{\phi}}-\dfrac{m^2_{\phi}}{s-m^2_{\phi}}\Big]$
  $\pm\pm\,\mp\mp$   $\pm\Big[\dfrac{m^2_{\phi}}{s-m^2_{\phi}}+\dfrac{m^2_{\phi}}{s-m^2_{\phi}}-\dfrac{m^2_{{\tilde{\chi}}}}{t-m^2_{{\tilde{\chi}}}}(1-\cos{\theta})-\dfrac{m^2_{{\tilde{\chi}}}}{u-m^2_{{\tilde{\chi}}}}(1+\cos{\theta})\Big]$
  $\pm\mp\,\pm\mp$   $\dfrac{m_{{\tilde{\chi}}}\sqrt{s}}{u-m^2_{{\tilde{\chi}}}}\,\dfrac{1}{2}(1+\cos{\theta})\sin{\theta}$
  $\pm\mp\,\mp\pm$   $-\dfrac{m_{{\tilde{\chi}}}\sqrt{s}}{t-m^2_{{\tilde{\chi}}}}\,\dfrac{1}{2}(1-\cos{\theta})\sin{\theta}$
  ------------------------------------------------------------------------------------------------------------------------------
\[tb:helamp\_aa\]
Figure \[fig:sqrts\_dep\_aa\_mSgold1TeV\] shows the total cross sections as a function of the CM energy $\sqrt{s}$ for $m_{{\tilde{\chi}}}=0.5$ TeV (blue) and $m_{{\tilde{\chi}}}=2$ TeV (red) with $m_{3/2}=2\times10^{-13}$ GeV. First, let us consider the heavy sgoldstino case, $m_{\phi}=100$ TeV. In the low-energy limit, $\sqrt{s}\ll m_{\phi,{\tilde{\chi}}}$, similarly to the $e^+e^-$ case, the total cross section is given by [@Brignole:1996fn] $$\begin{aligned}
\sigma=\frac{s^3}{640\pi F^4},
\label{xs_aa_low_energy}\end{aligned}$$ shown by a black solid line in Fig. \[fig:sqrts\_dep\_aa\_mSgold1TeV\]. Due to a cancellation between the sgoldstino and neutralino amplitudes for $\lambda_1=\lambda_2=-\lambda_3=-\lambda_4$, as can be seen in Table \[tb:helamp\_aa\], the dominant contribution comes from the amplitudes with $\lambda_1=-\lambda_2$, which are proportional to $s^2$ in the low-energy limit. To emphasize the importance of the interference, the contribution without the sgoldstino amplitudes is also shown by a dotted line in Fig. \[fig:sqrts\_dep\_aa\_mSgold1TeV\]. On the other hand, when the neutralino mass is smaller than the CM energy, $m_{{\tilde{\chi}}}\ll\sqrt{s}\ll m_{\phi}$, the cross section is dominated by the sgoldstino contributions and deviates from the low-energy limit.
![Total cross sections of $\gamma\gamma\to{\tilde G}{\tilde G}$ as a function of the collision energy for $m_{3/2}=2\times10^{-13}$ GeV. The sgoldstino masses are taken to be 1 TeV (dashed) and 100 TeV (solid), while the neutralino mass is fixed at 0.5 TeV (blue) and 2 TeV (red). We also show the cross section in the low energy limit (black solid) as well as the contributions without the sgoldstino interactions (dotted).[]{data-label="fig:sqrts_dep_aa_mSgold1TeV"}](xsec_rs_aa){width="0.88\columnwidth"}
We now turn to the case where the sgoldstinos are relatively light, $m_{\phi}=1$ TeV. In our SUSY QED model, the partial decay widths of the sgoldstinos are given by [@Perazzi:2000id] $$\begin{aligned}
\Gamma(S,P\to{\tilde G}{\tilde G}) &=\frac{m^5_{\phi}}{32 \pi F^2}, \\
\Gamma(S,P\to\gamma\gamma) &=\frac{m^2_{{\tilde{\chi}}}m^3_{\phi}}{32\pi F^2}.\end{aligned}$$ For $m_{\phi}=1$ TeV and $m_{3/2}=2\times 10^{-13}$ GeV (i.e. $\sqrt{F}\approx 918$ GeV), the width into a gravitino pair is 14.0 GeV and into a photon pair is 3.5 (55.9) GeV for $m_{{\tilde{\chi}}}=0.5$ (2) TeV. For the $m_{{\tilde{\chi}}}=2$ TeV case, the finite-width effect can be seen as a deviation from the low-energy cross section in Fig. \[fig:sqrts\_dep\_aa\_mSgold1TeV\]. For $\sqrt{s}\approx m_{\phi}$, one can clearly see the resonant peak. In the high-energy limit, $\sqrt{s}\gg m_{\phi,{\tilde{\chi}}}$, the cross section approaches the value obtained by neglecting the sgoldstino amplitudes, since the $\lambda_1=-\lambda_2$ amplitudes become dominant; see Table \[tb:helamp\_aa\].
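The quoted widths follow directly from the formulas above; a short Python check with the benchmark values from the text:

```python
import math

def width_to_gravitinos(m_phi, sqrt_F):
    """Gamma(S,P -> gravitino pair) = m_phi^5 / (32 pi F^2)."""
    return m_phi**5 / (32.0 * math.pi * sqrt_F**4)

def width_to_photons(m_phi, m_chi, sqrt_F):
    """Gamma(S,P -> gamma gamma) = m_chi^2 m_phi^3 / (32 pi F^2)."""
    return m_chi**2 * m_phi**3 / (32.0 * math.pi * sqrt_F**4)

# Benchmarks from the text: m_phi = 1 TeV, sqrt(F) ~ 918 GeV
w_gld = width_to_gravitinos(1000.0, 918.0)            # ~ 14.0 GeV
w_aa_light = width_to_photons(1000.0, 500.0, 918.0)   # ~ 3.5 GeV
w_aa_heavy = width_to_photons(1000.0, 2000.0, 918.0)  # ~ 55.9 GeV (quoted)
```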
Finally, we note that collider signatures of sgoldstinos have been studied in [@Dicus:1990su; @Perazzi:2000id; @Perazzi:2000ty; @Gorbunov:2000th; @Gorbunov:2000ht; @Abreu:2000ij; @Checchia:2001gd; @Gorbunov:2001pd; @Gorbunov:2002er; @Demidov:2004qt; @Petersson:2011in; @Bellazzini:2012mh], and our model file can also be applied to such sgoldstino phenomenology.
[^1]: e-mail: kentarou.mawatari@vub.ac.be
[^2]: A similar light-gravitino scenario has been studied in the monojet plus missing energy signature ($j+{\slashed{E}}$) at the Tevatron [@Affolder:2000ef] and the LHC [@ATLAS:2012zim].
[^3]: The covariant derivative is defined as $D_{\mu}=\partial_{\mu}+ig_eQA_{\mu}$.
[^4]: Note that we follow the [FeynRules]{} convention for chiral superfields $\Phi(y,\theta)=\phi(y)+\sqrt{2}\,\theta\cdot\psi(y)-\theta\cdot\theta\,F(y)$ [@Alloul:2013bka], which fixes the sign of the Lagrangian so as to give a positive contribution to the scalar potential.
[^5]: The monophoton signal of ${\tilde{\chi}}{\tilde G}$ production via the Higgs decay at the LHC was studied in [@Petersson:2012dp].
[^6]: Astrophysical observables, e.g. energy losses of red giant stars [@Fukugita:1982eq] and supernovae [@Dicus:1997sw], can also provide lower limits on the gravitino mass, but these limits are less stringent.
[^7]: As discussed in Sect. \[sec:grav\_pair\_prod\], reliability of the effective theory calculation can also constrain the model parameter space.
[^8]: We note that sgoldstinos with masses much smaller than the selectron mass do not obey a naturalness criterion [@Brignole:1998uu].
[^9]: $|P_{e^-}|>0.8$ and $|P_{e^+}|>0.5$ are designed at the International Linear Collider (ILC) [@BrauJames:2007aa].
[^10]: See e.g. Appendix A in [@Mawatari:2012ui] for the general case of the neutralino mixing.
[^11]: The amplitudes were calculated by using the explicit spin-3/2 gravitino wavefunction analytically in the high-energy limit in [@Bhattacharya:1988ey; @Bhattacharya:1988zp] and numerically in [@Christensen:2013aua], including the spin-2 graviton exchange diagram.
---
abstract: 'The quartet condensation model (QCM) is extended for the treatment of isovector and isoscalar pairing in odd-odd N=Z nuclei. In the extended QCM approach the lowest states of isospin T=1 and T=0 in odd-odd nuclei are described variationally by trial functions composed by a proton-neutron pair appended to a condensate of 4-body operators. The latter are taken as a linear superposition of an isovector quartet, built by two isovector pairs coupled to the total isospin T=0, and two collective isoscalar pairs. In all pairs the nucleons are distributed in time-reversed single-particle states of axial symmetry. The accuracy of the trial functions is tested for realistic pairing Hamiltonians and odd-odd N=Z nuclei with the valence nucleons moving above the cores $^{16}$O, $^{40}$Ca and $^{100}$Sn. It is shown that the extended QCM approach is able to predict with high accuracy the energies of the lowest T=0 and T=1 states. The present calculations indicate that in these states the isovector and the isoscalar pairing correlations coexist together, with the former playing a dominant role.'
author:
- 'D. Negrea and N. Sandulescu'
- 'D. Gambacurta'
title: 'Isovector and isoscalar pairing in odd-odd $N=Z$ nuclei within a quartet approach'
---
Introduction
============
Many experimental and theoretical studies have recently been dedicated to the role played by the isoscalar and isovector proton-neutron (pn) pairing in odd-odd N=Z nuclei (e.g., see [@fm; @sagawa_review] and references therein). The experimental data show that the ground states of odd-odd N=Z nuclei have isospin T=0 for $A < 34$ and, with some exceptions, isospin T=1 for heavier nuclei. This fact is sometimes considered as an indication of the dominant role of isoscalar (T=0) pn pairing in light N=Z nuclei. The fingerprints of T=0 pn pairing in odd-odd N=Z nuclei have also been investigated recently in relation to Gamow-Teller (GT) charge-exchange reactions. Thus, in some odd-odd N=Z nuclei there is an enhancement of the GT strength in the low-energy region which appears to be sensitive to the T=0 pn interaction [@fujita]. The competition between isovector and isoscalar pairing in odd-odd nuclei was also discussed extensively in relation to the odd-even mass difference along the N=Z line [@vogel; @macchiavelli].
On the theoretical side, the role of pn pairing in odd-odd N=Z nuclei is still not clear. A fair description of low-lying states and GT transitions in odd-odd N=Z nuclei is given by shell model (SM) calculations (e.g., see [@sm_gt]). However, due to the complicated structure of the SM wave function, it is not easy to draw conclusions from these calculations on the role played by pn pairing. Recently, the effect of T=0 and T=1 pairing forces on the spectroscopic properties of odd-odd N=Z nuclei was analyzed in the framework of a simple three-body model in which the odd pn pair is assumed to move on top of a closed even-even core [@sagawa_odd]. This model gives good results for nuclei in which the core can be considered inert, such as $^{18}$F and $^{42}$Sc, but not for nuclei in which the core degrees of freedom are important.
The difficulties mentioned above point to the need for new microscopic models which, on the one hand, are able to describe reasonably well the spectroscopic properties of odd-odd N=Z nuclei and, on the other hand, are simple enough to allow an understanding of the impact of pn pairing correlations on physical observables. As an alternative, in this article we shall use the framework of the quartet condensation model (QCM) proposed in Ref. [@qcm_def]. Its advantage is the explicit treatment of the pairing correlations in the wave function and, compared to other pairing models, the exact conservation of particle number and isospin. The aim of this study is to extend the QCM approach of Ref. [@qcm_def], applied previously to even-even nuclei, to the case of odd-odd N=Z nuclei and to study, for these nuclei, the role played by proton-neutron pairing in the lowest T=0 and T=1 states.
Formalism
=========
In the present study the isovector and isoscalar pairing correlations in odd-odd N=Z nuclei are described by pairing forces which act on pairs of nucleons moving in time-reversed single-particle states generated by axially-deformed mean fields. The corresponding Hamiltonian is given by $$\hat{H} = \sum_{i,\tau=\pm 1/2} \varepsilon_{i\tau} N_{i\tau} + \sum_{i,j} V^{T=1}(i,j) \sum_{t=-1,0,1} P^+_{i,t} P_{j,t} + \sum_{i,j} V^{T=0}(i,j) D^+_{i,0} D_{j,0}.$$ In the first term $\varepsilon_{i\tau}$ are the single-particle energies for the neutrons ($\tau=1/2$) and protons ($\tau=-1/2$) while $N_{i\tau}$ are the particle number operators. The second term is the isovector pairing interaction expressed by the pair operators $P^+_{i,0}=(\nu^+_i \pi^+_{\bar{i}} + \pi^+_i \nu^+_{\bar{i}})/\sqrt{2}$, $P^+_{i,1}=\nu^+_i \nu^+_{\bar{i}}$ and $P^+_{i,-1}=\pi^+_i \pi^+_{\bar{i}}$, where $\nu^+_i$ and $\pi^+_i$ are creation operators for neutrons and protons in the state $i$. The last term is the isoscalar pairing interaction represented by the operators $D^+_{i,0}=(\nu^+_i \pi^+_{\bar{i}} - \pi^+_i \nu^+_{\bar{i}})/\sqrt{2}$, which create a non-collective isoscalar pair in the time-reversed states $(i,\bar{i})$. In the applications considered in the present paper the single-particle states have axial symmetry.
The Hamiltonian (1) was employed recently to study the isovector and isoscalar pairing correlations in even-even N=Z nuclei in the framework of the QCM approach [@qcm_def]. This approach is extended here to the case of odd-odd nuclei. For consistency, we start by briefly presenting the QCM approach for even-even nuclei.
In the QCM approach the ground state of the Hamiltonian (1) for a system of N neutrons and Z protons, with N=Z=even, moving above a closed core $|0 \rangle$ is described by the ansatz $$| QCM \rangle =(A^+ + (\Delta^+_0)^2)^{n_q} |0 \rangle,$$ where $n_q=(N+Z)/4$. The operator $A^+$ is the collective quartet, defined as a superposition of two non-collective isovector pairs coupled to total isospin T=0, $$A^+ = \sum_{i,j} x_{ij}\, [P^+_i P^+_j]^{T=0}.$$ Supposing that the mixing amplitudes $x_{ij}$ are separable, that is $x_{ij}=x_i x_j$, the collective quartet takes the form $$A^+ = 2 \Gamma^+_1 \Gamma^+_{-1} - (\Gamma^+_0)^2,$$ where $\Gamma^+_{t}= \sum_i x_i P^+_{i,t}$ are the collective neutron-neutron (t=1), proton-proton (t=-1) and proton-neutron (t=0) isovector pairs. Finally, in Eq. (2) the operator $\Delta^+_0$ is the collective isoscalar pair defined by $$\Delta^+_{0}= \sum_i y_i D^+_{i,0}.$$
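As a sanity check of the separable-amplitude identity above, the T=0 coupling of two isovector pairs, $[P^+_i P^+_j]^{T=0}=(P^+_{i,1}P^+_{j,-1}+P^+_{i,-1}P^+_{j,1}-P^+_{i,0}P^+_{j,0})/\sqrt{3}$ in the standard Clebsch-Gordan convention, can be expanded symbolically. The sketch below is ours, not part of the original formalism: it treats the mutually commuting pair-creation operators as plain symbols and uses an illustrative toy dimension.

```python
import sympy as sp

n = 3  # toy number of single-particle levels (illustrative choice)
x = sp.symbols(f'x0:{n}')
P1 = sp.symbols(f'P1_0:{n}')    # placeholders for P^+_{i,t=1}
Pm1 = sp.symbols(f'Pm1_0:{n}')  # placeholders for P^+_{i,t=-1}
P0 = sp.symbols(f'P0_0:{n}')    # placeholders for P^+_{i,t=0}

# A^+ = sum_ij x_i x_j [P^+_i P^+_j]^{T=0} with separable amplitudes x_ij = x_i x_j
A = sum(x[i] * x[j] * (P1[i] * Pm1[j] + Pm1[i] * P1[j] - P0[i] * P0[j])
        for i in range(n) for j in range(n)) / sp.sqrt(3)

# Collective pairs Gamma^+_t = sum_i x_i P^+_{i,t}
G1 = sum(x[i] * P1[i] for i in range(n))
Gm1 = sum(x[i] * Pm1[i] for i in range(n))
G0 = sum(x[i] * P0[i] for i in range(n))

# Up to the overall 1/sqrt(3) normalization, A^+ equals 2*G1*Gm1 - G0^2, i.e. Eq. (4)
assert sp.expand(A - (2 * G1 * Gm1 - G0**2) / sp.sqrt(3)) == 0
print("separable quartet identity verified for n =", n)
```

The check works with commuting symbols because the pair-creation operators commute among themselves; only the overall normalization of the quartet is left unfixed, as in the text.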
When the single-particle states are degenerate and the strengths of the two pairing forces are equal, the QCM state (2) is the exact solution of the Hamiltonian (1). For realistic single-particle spectra and realistic pairing interactions the QCM state (2) is no longer the exact solution but, as shown in Ref. [@qcm_def], it predicts with high accuracy the pairing correlations in even-even N=Z nuclei.
In what follows we extend the QCM approach to odd-odd N=Z systems. The main assumption, suggested by the exact solution of the Hamiltonian (1) (see below), is that the lowest T=1 and T=0 states in odd-odd nuclei can be well described variationally by trial states obtained by appending to the QCM function (2) a proton-neutron pair. Since the isospin of the QCM state (2) is T=0, the total isospin of the odd-odd system is given by the isospin of the appended pair. Thus, the ansatz for the lowest T=1 state of the odd-odd N=Z systems is $$| iv;QCM \rangle = \tilde{\Gamma}^+_0 (A^+ + (\Delta^+_0)^2)^{n_q} |0 \rangle,$$ where $\tilde{\Gamma}^+_0=\sum_i z_i P^+_{i,0}$ is the isovector pn pair attached to the even-even part of the state (in what follows we shall use the name “core” for the even-even part of the state (6), which should not be confused with the closed core $|0 \rangle$). It can be seen that this pair has a different collectivity compared to the isovector pn pair $\Gamma^+_0$ contained in the quartet $A^+$ (see Eq. 4).
Likewise, the lowest T=0 state of odd-odd N=Z systems is described by the function $$| is;QCM \rangle = \tilde{\Delta}^+_0 (A^+ + (\Delta^+_0)^2)^{n_q} |0 \rangle,$$ where $\tilde{\Delta}^+_0=\sum_i z_i D^+_{i,0}$ is the odd isoscalar pair, which also has a different structure compared to the isoscalar pair $\Delta^+_0$ entering the even-even core. Due to its different isospin, the state (7) is orthogonal to the isovector state (6).
We have proved that the states (6,7) are the exact eigenfunctions of the Hamiltonian (1) when the single-particle energies are degenerate and when the pairing forces have the same strength, i.e., $V^{T=1}(i,j) = V^{T=0}(i,j)=g$. In this case the states (6,7) have the same energy which, for $\epsilon_i=0$, is given by $$E(n_q,\nu)=(\nu-2n_q)\,g+2n_q(\nu-n_q+2)\,g,$$ where $n_q$ is the number of quartets and $\nu$ is the number of single-particle levels. In Eq. (8) the second term corresponds to the energy of the even-even core of the functions (6,7). It is worth mentioning that this exact solution is not the exact solution of the SU(4) model [@Dobes] because in the Hamiltonian (1) the isoscalar force contains only pairs in time-reversed single-particle states.
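As a quick numerical illustration (our own sketch, not part of the original derivation), the closed-form degenerate-limit energy $E(n_q,\nu)=(\nu-2n_q)g+2n_q(\nu-n_q+2)g$ of Eq. (8) can be encoded directly, with the first term read as the odd-pair contribution and the second as the even-even quartet-core contribution:

```python
def exact_energy(n_q, nu, g):
    """Degenerate-limit energy E(n_q, nu) of Eq. (8) for equal pairing
    strengths g: odd-pair term plus even-even quartet-core term."""
    odd_pair = (nu - 2 * n_q) * g
    core = 2 * n_q * (nu - n_q + 2) * g
    return odd_pair + core

# e.g. nu = 10 levels and attractive g < 0: the core term grows with n_q
for n_q in range(3):
    print(n_q, exact_energy(n_q, nu=10, g=-1.0))
```

With no quartets ($n_q=0$) only the odd pn pair contributes, so the energy reduces to $\nu g$, as expected from the first term alone.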
For a non-degenerate single-particle spectrum and general pairing forces the QCM states (6,7) are determined variationally. The variational parameters are the amplitudes $x_i$, $y_i$ and $z_i$ which define, respectively, the isovector pairs $\Gamma^+_t$, the isoscalar pair $\Delta^+_0$ and the odd pn pair. They are found by minimizing the average of the Hamiltonian (1) on the QCM states (6,7) and by imposing, for the latter, the normalization condition. The average of the Hamiltonian and the norm of the QCM states are calculated using the technique of recurrence relations. More precisely, the calculations are performed using auxiliary states composed of products of collective pairs. Thus, for the isovector T=1 state (6) the auxiliary states are $$| n_1 n_2 n_3 n_4 n_5 \rangle = (\Gamma_1^+)^{n_1} (\Gamma_{-1}^+)^{n_2} (\Gamma_0^+)^{n_3} (\Delta_0^+)^{n_4} (\tilde{\Gamma}_0^+)^{n_5} |0\rangle.$$ The auxiliary states for the calculation of the isoscalar T=0 state (7) have a similar structure, with the difference that the odd isovector pair $\tilde{\Gamma}_0^+$ is replaced by the odd isoscalar pair $\tilde{\Delta}_0^+$. It can be observed that the QCM states (6,7) can be expressed in terms of a subset of auxiliary states corresponding to specific combinations of $n_i$. However, in order to close the recurrence relations one needs to evaluate the matrix elements of the Hamiltonian (1) for all auxiliary states which satisfy the conditions $\sum_i n_i=(N+Z)/2$ and $n_5=0,1$. An example of recurrence relations, for the case of even-even systems, can be seen in Refs. [@qcm_iv; @thesis].
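To make the bookkeeping concrete, the set of auxiliary occupation tuples needed to close the recurrence relations can be enumerated in a few lines. This is a hypothetical helper written for this note; only the counting rule $\sum_i n_i=(N+Z)/2$ with $n_5=0,1$ is taken from the text.

```python
from itertools import product

def auxiliary_states(N, Z):
    """Occupation tuples (n1, ..., n5) of the auxiliary states with
    n1 + ... + n5 = (N + Z)/2 and the odd-pair number n5 restricted to 0 or 1."""
    n_pairs = (N + Z) // 2
    states = []
    for n5 in (0, 1):
        rest = n_pairs - n5
        # distribute the remaining pairs over the four collective pairs n1..n4
        for n1, n2, n3 in product(range(rest + 1), repeat=3):
            n4 = rest - (n1 + n2 + n3)
            if n4 >= 0:
                states.append((n1, n2, n3, n4, n5))
    return states

# For N = Z = 3 (one quartet plus the odd pair, as in the state (6)):
states = auxiliary_states(3, 3)
print(len(states))  # 30 tuples
assert all(sum(s) == 3 and s[4] in (0, 1) for s in states)
```

Even for this smallest odd-odd case the recurrence relations must be closed over 30 auxiliary states, which illustrates why their evaluation is organized recursively rather than by brute force.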
The advantage of the QCM approach is the possibility to investigate in a direct manner the role of various types of correlations by simply switching them on and off in the structure of the states (6,7). Thus, in order to explore the importance of isoscalar pairing on the lowest T=0 and T=1 states in odd-odd N=Z systems one can remove from the functions (6,7) the isoscalar pair $\Delta^+_0$. In this approximation the functions take the expressions $$| is;Q_{iv} \rangle = \tilde{\Delta}^+_0 (A^+)^{n_q} |0 \rangle,$$ $$| iv;Q_{iv} \rangle = \tilde{\Gamma}^+_0 (A^+)^{n_q} |0 \rangle.$$ Alternatively, we can estimate the importance of the isovector pairing by removing from the QCM functions the isovector quartet $A^+$. The corresponding functions are $$| C_{is} \rangle = (\Delta^+_0)^{2n_q+1} |0 \rangle,$$ $$| iv; C_{is} \rangle = \tilde{\Gamma}^+_0 (\Delta^{+2}_0)^{n_q} |0 \rangle.$$ Another possibility is to remove from the QCM functions the contribution of like-particle pairs, keeping only the isovector and isoscalar pn pairs. These trial states, which can be employed to study the role of like-particle pairing in N=Z nuclei, have the expressions $$| is;C_{iv} \rangle = \tilde{\Delta}^+_0 (\Gamma^{+2}_0)^{n_q} |0 \rangle,$$ $$| C_{iv} \rangle = (\Gamma^+_0)^{2n_q+1} |0 \rangle.$$ Contrary to the previous approximations, the states (14,15) do not have a well-defined isospin.
Among the approximations mentioned above, of special interest are the ones corresponding to the states (12) and (15), which are pure condensates of isoscalar and, respectively, isovector pn pairs. These states are sometimes considered as representative for understanding the competition between isovector and isoscalar proton-neutron pairing in nuclei.
The QCM states (6,7) and all the approximations based on them are formulated here in the intrinsic system associated with the axially deformed single-particle levels. Therefore they have a well-defined projection of the angular momentum on the z-axis but not a well-defined angular momentum. A more complicated quartet formalism for odd-odd nuclei, which conserves exactly the angular momentum and takes into account the correlations induced by a general two-body force, was proposed recently in Ref. [@sasa_odd].
Results
=======
To test the accuracy of the QCM approach for odd-odd N=Z nuclei we consider nuclei having protons and neutrons outside the closed cores $^{16}$O, $^{40}$Ca and $^{100}$Sn. The calculations are performed employing, for the pairing forces and the single-particle states, the same type of input as in our previous study of even-even nuclei [@qcm_def]. Thus, the single-particle states are generated by axially deformed mean fields calculated with the Skyrme-HF code $ev8$ [@ev8] and with the force $Sly4$ [@sly4]. In the mean field calculations the Coulomb interaction is switched off, so the single-particle energies for protons and neutrons are the same. For the pairing forces we use a zero-range delta interaction $V^T(r_1,r_2)=V_0^T\delta(r_1-r_2) \hat{P}^T_{S,S_z}$, where $\hat{P}^T_{S,S_z}$ is the projection operator on the spin of the pairs, i.e., $S=0$ for the isovector (T=1) force and $S=1,S_z=0$ for the isoscalar (T=0) force. The matrix elements of the pairing forces are calculated using the single-particle wave functions generated by the Skyrme-HF calculations (for details, see [@gambacurta]). As parameters we use the strength of the isovector force, denoted by $V_0$, and the scaling factor $w$ which defines the strength of the isoscalar force, $V_0^{T=0}=w V_0$. Fixing these parameters is not a simple task. Since the main goal of this study is to test the accuracy of the QCM approach, we have made several calculations with various strengths, $V_0=\{300, 465, 720, 1000\}$ MeV fm$^{-3}$, which cover all possible situations, from the weak to the strong pairing regime. Because the conclusions relevant for this study are quite similar for all these strengths, here we present only the results for the pairing strength $V_0=465$ MeV fm$^{-3}$ employed in our previous study of even-even nuclei [@qcm_def].
For the scaling factor $w$ we also used various values, $w=\{1.0,1.3,1.5,1.6\}$. To find the most appropriate value of $w$ for the strength $V_0$= 465 MeV fm$^{-3}$ we have searched for the best agreement with the energy difference between the first excited state and the ground state of odd-odd nuclei. These energy differences are shown in Fig. 1 by black squares. It is worth mentioning that the lowest T=0 state can have various angular momenta $J \ge1$ (e.g., the ground states of $^{22}$Na and $^{26}$Al have $J=3$ and, respectively, $J=5$). The theoretical results shown in Fig. 1 correspond to the exact diagonalization of the Hamiltonian (1) in a space spanned by 10 single-particle levels above the cores $^{16}$O and $^{40}$Ca. The best agreement with the experimental data is obtained by choosing $w=1.6$ for $sd$-shell nuclei and $w=1.0$ for $pf$-shell nuclei. As seen in Fig. 1, for these parameters the calculations predict rather well how the isospin of the ground state changes with the mass number. Since for the nuclei above $^{100}$Sn there are no available experimental data on low-lying states that could be used to fix the scaling factor $w$, in the calculations presented below we have chosen for $w$ the same value as for the $pf$-shell nuclei. In Fig. 1 we also show the results obtained considering only the isovector pairing force, that is, for $w=0.0$. It can be seen that in this case the predictions are quite far from the data, especially for the $sd$-shell nuclei.
Exact $\vert QCM \rangle$ $\vert iv;Q_{iv} \rangle/ \vert is;Q_{iv} \rangle$ $\vert iv;C_{is} \rangle/ \vert C_{is} \rangle$ $\vert C_{iv} \rangle/ \vert is;C_{iv} \rangle$
------------ ----- ------- --------------------- -------------------------------------------------------- ------------------------------------------------- -------------------------------------------------
$^{ 22}$Na T=0 13.87 13.87 (0.00%) 13.86 (0.07%) 13.85 (0.12%) 13.85 (0.15%)
$$ T=1 13.23 13.23 (0.03%) 13.22 (0.05%) 12.97 (1.97%) 13.22 (0.11%)
$^{ 26}$Al T=0 22.06 22.05 (0.03%) 22.04 (0.07%) 21.94 (0.53%) 21.79 (1.24%)
$$ T=1 21.07 21.06 (0.02%) 21.05 (0.07%) 20.93 (0.66%) 20.98 (0.41%)
$^{ 30}$P T=0 12.66 12.60 (0.44%) 12.55 (0.86%) 11.96 (5.86%) 11.94 (5.95%)
$$ T=1 11.72 11.66 (0.44%) 11.62 (0.82%) 10.94 (7.11%) 10.96 (6.94%)
$^{ 46}$V T=1 7.92 7.92 (0.04%) 7.91 (0.10%) 7.33 (8.11%) 7.76 (2.11%)
$$ T=0 6.93 6.93 (0.01%) 6.93 (0.07%) 6.73 (2.99%) 6.79 (2.05%)
$^{ 50}$Mn T=1 12.77 12.76 (0.07%) 12.75 (0.14%) 12.52 (2.02%) 12.62 (1.22%)
$$ T=0 12.37 12.36 (0.04%) 12.34 (0.24%) 12.18 (1.61%) 12.19 (1.48%)
$^{ 54}$Co T=1 16.14 16.12 (0.14%) 16.09 (0.28%) 15.67 (3.01%) 15.86 (1.78%)
$$ T=0 15.93 15.92 (0.04%) 15.89 (0.22%) 15.53 (2.56%) 15.66 (1.73%)
$^{106}$I T=1 5.15 5.14 (0.08%) 5.13 (0.23%) 4.71 (9.37%) 4.93 (4.51%)
$$ T=0 4.53 4.52 (0.04%) 4.51 (0.42%) 4.19 (7.84%) 4.29 (5.53%)
$^{110}$Cs T=1 8.03 7.98 (0.56%) 7.97 (0.75%) 7.16 (12.14%) 7.59 (5.86%)
$$ T=0 7.09 7.06 (0.45%) 7.04 (0.80%) 6.47 (9.64%) 6.65 (6.77%)
$^{114}$La T=1 9.76 9.72 (0.36%) 9.69 (0.73%) 8.79 (11.03%) 9.27 (5.23%)
$$ T=0 8.95 8.93 (0.28%) 8.92 (0.42%) 8.31 (7.74%) 8.51 (5.18%)
: Correlation energies, in MeV, for the lowest T=1 and T=0 states. The errors relative to the exact values, indicated in the 3rd column, are given in brackets. Shown are the results corresponding to the QCM states (6,7) and to the approximations defined by Eqs. (10-15).
With the parameters of the Hamiltonian fixed as explained above, we have studied how accurately the extended QCM approach predicts the energies of the lowest T=0 and T=1 states of odd-odd nuclei. The results are presented in Table I. Shown are the correlation energies defined as $E_{corr}=E_0-E$, where $E$ is the total energy while $E_0$ is the non-interacting energy obtained by switching off the pairing interactions. The correlation energies predicted by the QCM functions (6,7) are given in the 4th column. In brackets are indicated the errors relative to the exact energies shown in the 3rd column. It can be observed that for all the states and nuclei shown in Table I the errors are small, under $1\%$. We can thus conclude that the QCM functions (6,7) provide an accurate description of the lowest T=0 and T=1 states of the Hamiltonian (1).
One of the advantages of the QCM approach is the opportunity to study the relevance of various types of pairing correlations directly through the structure of the trial states (6,7). As discussed in the previous Section, this is possible by using the approximations (10-15). The correlation energies corresponding to these approximations are shown in Table I. In brackets are given the errors relative to the exact results. One can observe that the smallest errors correspond to the approximations (10,11), in which the contribution of the isoscalar pairs in the even-even core of the QCM functions is neglected. It can be seen that, compared to the calculations with the full QCM functions, in these approximations the errors increase by a factor of 2-3 for T=1 states and by larger factors for some T=0 states. However, all the errors relative to the exact results remain under $1\%$.
In column 6 are shown the results corresponding to the approximations (12,13), in which the isovector quartet is taken out from the even-even core. We can see that in this case the errors are much bigger than in the case when the isoscalar pairs are neglected. In the last column are given the results of the approximations (14,15), obtained by neglecting in the QCM states the contribution of the like-particle pairs. It can be noticed that for all nuclei the T=1 states are better described by a condensate of isovector pn pairs than by the approximation (13). On the other hand, the ground T=0 states of $sd$-shell nuclei are slightly better described by a condensate of isoscalar pn pairs than by the approximation (14). However, the latter approximation is by far better than the former in the case of excited T=0 states of $pf$-shell nuclei and nuclei with $A > 100$.
Overall, these calculations show that the T=0 and T=1 states cannot be well described as pure condensates of isoscalar and, respectively, isovector pairs. In general, neglecting the contribution of like-particle pairs generates large errors. The best approximation, for both T=0 and T=1 states, is the one in which the odd pn pair is appended to a condensate of isovector quartets. This fact indicates that the 4-body quartet correlations play an important role in odd-odd N=Z nuclei. As demonstrated in [@qcm_iv], these correlations are missed when the condensate of isovector quartets is replaced by products of pair condensates.
To better understand how the different pairing modes contribute to the total energy, Fig. 2 shows the isovector and isoscalar pairing energies for the ground states of $sd$ and $pf$ nuclei. The pairing energies are calculated by averaging the corresponding pairing forces on the QCM functions (6,7). It is important to observe that the pairing energies for T=1 (T=0) states also include contributions from the isoscalar (isovector) pairing correlations, a fact which comes from the mixing of isovector and isoscalar degrees of freedom through the even-even core of the QCM functions.
The left panel of Fig. 2 shows the pairing energies in the ground T=0 states of $sd$-shell nuclei. As a reference is shown the pairing energy $E_{pn}^{T=0}$ for $^{18}$F, which corresponds to one T=0 pair above $^{16}$O. It can be seen that the curves for $E_{pn}^{T=0}$ and $E_{pn}^{T=1}$ are almost parallel. This indicates that the extra pairing energy in the T=0 channel for $A>18$ is related mainly to the contribution of the odd pn T=0 pairs. It is also worth noticing that the total pairing energy in the T=1 channel contains also the contribution from the proton-proton (pp) and neutron-neutron (nn) pairing energies, which, due to the isospin symmetry, are equal to the pn T=1 pairing energy. Therefore, the total isovector pairing energy is comparable to the isoscalar pairing energy, although the latter contains in addition a large contribution from the extra odd T=0 pair.
The right panel of Fig. 2 shows the pairing energies for the T=1 ground states of $pf$-shell nuclei. It can be seen that $E^{T=0}_{pn}$ is smaller than $E^{T=1}_{pn}$ and also smaller than the like-particle pairing energy. In contrast to the left panel, the energy difference $E^{T=1}_{pn} - E^{T=0}_{pn}$ for $A>42$ is much larger than the energy of the odd pn T=1 pair in $^{42}$Sc. Therefore, the larger pn pairing energy in the isovector channel cannot be related only to the extra pn T=1 pair attached to the even-even core. This fact can be traced back to the strong increase of $E^{T=1}_{pn}$ from A=42 to A=46. This increase is mainly related to the contribution, in the nucleus A=46, of the two pn T=1 pairs from the isovector quartet. Since in the isovector quartet all T=1 pairs have the same structure, the pairing energy of these pn T=1 pairs is equal to the pairing energies of like-particle pairs, which, as seen in A=46, are large, even larger than the energy of the odd pair.
------- ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------
T=1 T=0 T=1 T=0 T=1 T=0 T=1 T=0 T=1 T=0 T=1 T=0
$K_x$ 1.25 1.92 3.05 3.05 1.47 1.41 2.37 2.36 1.64 1.66 3.18 3.09
$K_y$ 1.97 1.31 1.89 1.56 2.39 1.33 1.72 1.25 2.24 1.88 1.16 1.24
$K_z$ 2.77 1.63 2.82 1.65 1.99 1.09 2.30 1.63 2.34 1.29 4.09 1.33
------- ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------
: Schmidt numbers for the proton-neutron pairs in the lowest T=1 and T=0 states of various odd-odd N=Z nuclei. $K_x$ and $K_y$ denote the Schmidt numbers for the pairs $\Gamma^+_0$ and $\Delta^+_0$, while $K_z$ is the Schmidt number for the odd pair, i.e., $\tilde{\Gamma}^+_0$ for T=1 states and $\tilde{\Delta}^+_0$ for T=0 states.
The T=0 states in odd-odd N=Z nuclei are often described as states having a two-quasiparticle structure. Thus, to evaluate the energies of T=0 states the blocking procedure is commonly employed, which means that the odd T=0 pair is not considered as a collective pair in which the nucleons are scattered on nearby single-particle levels but just as a proton and a neutron sitting on a single level. In what follows we are going to examine the validity of this approximation in the framework of the QCM approach. In order to analyze this issue, we need a working definition for the collectivity of a pair. Here we shall use the so-called Schmidt number, which is commonly employed to analyze the entanglement of composite systems formed by two parts [@Law]. In the case of a pair operator $\Gamma^+=\sum_{i=1}^{n_s} w_i a^+_i a^+_{\bar{i}}$ the Schmidt number has the expression $K=(\sum_i {w_i}^2)^2 / \sum_i {w_i}^4 $ (for an application of K to like-particle pairing see Ref. [@sandulescu_bertsch]). When there is no entanglement K=1, while when the entanglement is maximum, which means equal occupancy of all available states, $K=n_s$, where $n_s$ is the number of states. As examples, in Table II we show for some nuclei the Schmidt numbers corresponding to the pairs which compose the QCM states (6,7). In Table II, $K_x$ and $K_y$ denote the Schmidt numbers associated with the isovector pair $\Gamma^+_0$ and, respectively, the isoscalar pair $\Delta^+_0$. Since in the isovector quartet $A^+$ all the isovector pairs have the same structure, the like-particle pairs have the Schmidt number $K_x$, as the isovector pn pair. $K_z$ denotes the Schmidt number for the odd pair, i.e., $\tilde{\Gamma}_0^+$ for the T=1 state and $\tilde{\Delta}^+_0$ for the T=0 state. We recall that the T=0 state is the ground state for $^{30}$P and an excited state for $^{54}$Co and $^{114}$La.
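The Schmidt number defined above is straightforward to evaluate for any set of pair amplitudes. The short sketch below (ours, with purely illustrative amplitudes) checks the two limiting cases quoted in the text, $K=1$ for no entanglement and $K=n_s$ for equal occupancy of all $n_s$ states:

```python
def schmidt_number(w):
    """K = (sum_i w_i^2)^2 / (sum_i w_i^4) for pair amplitudes w_i."""
    s2 = sum(x * x for x in w)
    s4 = sum(x ** 4 for x in w)
    return s2 * s2 / s4

# No entanglement: a single occupied level gives K = 1.
assert abs(schmidt_number([1.0, 0.0, 0.0, 0.0]) - 1.0) < 1e-12
# Maximal entanglement: equal amplitudes over n_s = 4 levels give K = 4.
assert abs(schmidt_number([0.5] * 4) - 4.0) < 1e-12
# A partially collective pair falls in between (illustrative amplitudes):
print(round(schmidt_number([0.8, 0.5, 0.3, 0.1]), 2))
```

Note that $K$ is insensitive to the overall normalization of the amplitudes, so it can be applied directly to unnormalized variational solutions.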
From Table II it can be observed that the T=0 pairs are less collective than the isovector T=1 pairs, which is in agreement with the stronger T=1 pairing correlations emerging from the results shown in Table I. In particular, the odd T=0 pair is less collective than the odd T=1 pair. However, in all nuclei except $^{50}$Mn, the collectivity of the odd T=0 pair is significant and comparable to the collectivity of the T=0 pairs in the even-even core of the QCM states. Therefore these calculations indicate that, in general, the T=0 states do not have a pure two-quasiparticle character.
Summary
=======
In this paper we have studied the role of isovector and isoscalar pairing correlations in the lowest T=1 and T=0 states of odd-odd N=Z nuclei. This study is performed in the framework of the QCM approach, which was extended from even-even to odd-odd nuclei. In the extended QCM formalism the lowest T=0 and T=1 states of odd-odd self-conjugate nuclei are described by a condensate of quartets to which is appended an isoscalar or an isovector proton-neutron pair. As in Ref. [@qcm_def], the quartets are taken as a linear superposition of an isovector quartet and two collective isoscalar pairs. This model was tested for realistic pairing Hamiltonians and for nuclei with valence nucleons moving above the cores $^{16}$O, $^{40}$Ca and $^{100}$Sn. The comparison with exact results shows that the energies of the lowest T=1 and T=0 states can be described with high precision by the QCM approach. Taking advantage of the structure of the QCM functions, we have then analyzed the competition between the isovector and isoscalar pairing correlations and the accuracy of various approximations. This analysis indicates that in the nuclei mentioned above the isoscalar pairing correlations are weaker but coexist with the isovector correlations in both T=0 and T=1 states. To describe these states accurately it is essential to include the isovector pairing through the isovector quartets, in which the isovector pn pairs are coupled together with like-particle pairs. Approximations in which the contribution of the like-particle pairing is neglected, including the ones in which the T=1 and T=0 states are described by a condensate of isovector pn pairs and, respectively, by a condensate of isoscalar pn pairs, do not describe accurately the pairing correlations in odd-odd N=Z nuclei.
In the present study the lowest T=0 and T=1 states are calculated in the intrinsic system of the axially deformed mean field and therefore they do not have a well-defined angular momentum. The restoration of the angular momentum will be treated in a future study.
[**Acknowledgements**]{} 0.2cm N.S. thanks the hospitality of IPN-Orsay, Universite Paris-Sud, where this paper was written. This work was supported by the Romanian National Authority for Scientific Research through the grants PN-III-P4-ID-PCE-2016-048, PN 16420101/2016 and 5/5.2/FAIR-RO.
[99]{}
S. Frauendorf, A.O. Macchiavelli, Prog. Part. Nucl. Phys. [**78**]{}, 24 (2014).
H. Sagawa, C.L. Bai, G. Colò, Phys. Scr. [**91**]{}, 083011 (2016).
Y. Fujita et al., Phys. Rev. Lett. [**112**]{}, 112502 (2014).
A.O. Macchiavelli et al., Phys. Rev. C [**61**]{}, 041303(R) (2000).
P. Vogel, Nucl. Phys. A [**662**]{}, 148 (2000).
M. Honma et al., Phys. Rev. C [**69**]{}, 034335 (2004).
Y. Tanimura, H. Sagawa, K. Hagino, Prog. Theor. Exp. Phys., 053D02 (2014).
N. Sandulescu, D. Negrea, D. Gambacurta, Phys. Lett. B [**751**]{}, 348 (2015).
J. Dobes, S. Pittel, Phys. Rev. C [**57**]{}, 688 (1998).
N. Sandulescu, D. Negrea, J. Dukelsky, C.W. Johnson, Phys. Rev. C [**85**]{}, 061303(R) (2012).
D. Negrea, “Proton-neutron pairing correlations in atomic nuclei”, PhD thesis, University of Bucharest and University Paris-Sud, 2013, https://tel.archives-ouvertes.fr/tel-00870588/document.
M. Sambataro and N. Sandulescu, Phys. Lett. B [**763**]{}, 151 (2016).
P. Bonche, H. Flocard, P. Heenen, Comput. Phys. Commun. [**171**]{}, 49 (2005).
E. Chabanat et al., Nucl. Phys. A [**623**]{}, 710 (1997).
D. Gambacurta and D. Lacroix, Phys. Rev. C [**91**]{}, 014308 (2015).
National Nuclear Data Center, Brookhaven National Laboratory, http://www.nndc.bnl.gov.
C.K. Law, Phys. Rev. A [**71**]{}, 034306 (2005).
N. Sandulescu and G. F. Bertsch, Phys. Rev. C [**78**]{}, 064318 (2008).
---
abstract: 'We reconstruct a controllable model of a person from a large photo collection that captures his or her [*persona*]{}, i.e., physical appearance and behavior. The ability to operate on unstructured photo collections enables modeling a huge number of people, including celebrities and other well photographed people without requiring them to be scanned. Moreover, we show the ability to drive or [*puppeteer*]{} the captured person B using any other video of a different person A. In this scenario, B acts out the role of person A, but retains his/her own personality and character. Our system is based on a novel combination of 3D face reconstruction, tracking, alignment, and multi-texture modeling, applied to the puppeteering problem. We demonstrate convincing results on a large variety of celebrities derived from Internet imagery and video.'
author:
- |
Supasorn Suwajanakorn Ira Kemelmacher-Shlizerman Steven M. Seitz\
\
University of Washington
bibliography:
- 'puppetpaper\_arxiv.bib'
title: What Makes Kevin Spacey Look Like Kevin Spacey
---
Introduction
============
Related Work
============
Experiments
===========
Discussion {#sec:discussion}
==========
---
abstract: 'We revisit the problem of point-contact tunnel junctions involving one-dimensional superconductors and present a simple scheme for computing the full current-voltage characteristics within the framework of the non-equilibrium Keldysh Green function formalism. We address the effects of different pairing symmetries combined with magnetic fields and finite temperatures at arbitrary bias voltages. We discuss extensively the importance of these results for present-day experiments. In particular, we propose ways of measuring the effects found when the two sides of the junction have dissimilar superconducting gaps and when the symmetry of the superconducting states is not the one of spin-singlet pairing. This last point is of relevance for the study of the superconducting state of certain organic materials like the Bechgaard salts and, to some extent, for ruthenium compounds.'
author:
- 'C. J. Bolech'
- 'T. Giamarchi'
date: 'June 6$^{\text{th}}$, 2004'
title: 'Keldysh study of point-contact tunneling between superconductors'
---
Introduction\[sec:intro\]
=========================
The theory of superconductivity by Bardeen, Cooper and Schrieffer (BCS) is one of the most important achievements of condensed matter theory. Some of the most striking consequences of this theory concern the tunneling to and from a superconductor. Indeed, the history of tunneling experiments and applications is strongly linked to that of superconductivity. Not only did some of the most crucial experimental verifications of the BCS theory come from tunneling experiments,[@wolf1989] but also some of the most important practical applications of superconductivity involve Josephson tunneling junctions. To describe the manifold of experimental and practical situations, two limiting cases are usually considered: planar interfaces and point contacts. As for the latter, point-contact tunneling *per se* acquired renewed relevance with the development of scanning tunneling microscopy (STM),[@quate1986; @binnig1987; @binnig1999] which is today at the forefront of the experimental techniques used to study unconventional superconductors. For STM the tip can be modeled using some idealized geometry. For example, the cases of spherical,[@tersoff1983; @tersoff1985] conical[@suderow2000; @suderow2002] and pyramidal[@cuevas1998] tip geometries were considered in the literature (the last two were used to model very close STM contacts). The point-contact approximation is therefore the simplest one for the kind of tunneling processes that take place in STM experiments. Other ways to realize point contacts include the use of break junctions and pressed crossed wires.
The simplest theoretical models used to interpret experiments involving superconducting tunneling are typically based on a simple scattering picture and go generally under the name of *semiconducting band models*.[@nicol1960; @klapwijk1982; @blonder1982; @octavio1983] A more systematic approach is the one based on the tunneling Hamiltonian.[@cohen1962; @wilkins1969; @cuevas1996] A large series of recent experiments[@scheer1997; @scheer1998; @ludoph2000; @scheer2001; @rubio2003; @hafner2004] on atomic-size contacts showed impressive agreement with the theory, some of them achieving a detailed microscopic description of the contacts. The current transport in these systems can be described as taking place through a small number of independent *conduction channels*, each of them well described by a point-contact model. In some experiments even the observation of single-channel transport was possible.
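The picture of a few independent conduction channels can be made quantitative in the normal state through the standard Landauer formula $G = G_0 \sum_n T_n$, with $G_0 = 2e^2/h$ the conductance quantum. The sketch below is illustrative only: the channel transmissions are hypothetical values, not data from the experiments cited above.

```python
# Conductance quantum 2e^2/h in siemens
G0 = 7.748091729e-5

def landauer_conductance(transmissions):
    """Normal-state conductance of a contact carried by independent
    channels: G = G0 * sum of channel transmissions T_n (0 <= T_n <= 1)."""
    assert all(0.0 <= t <= 1.0 for t in transmissions)
    return G0 * sum(transmissions)

# A hypothetical three-channel atomic-size contact:
G = landauer_conductance([0.95, 0.40, 0.05])
print(f"G = {G:.3e} S = {G / G0:.2f} G0")
```

In the superconducting state each channel instead carries current through Andreev processes, which is why resolving the individual $T_n$ from the subgap structure gives the detailed microscopic description of the contact mentioned above.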
Tunneling can thus be used as a very efficient probe of the properties of the leads. In particular, one can expect to use it to determine the symmetry of the superconducting order parameter in the leads. However, the previous theoretical analyses of point-contact tunneling, although efficient in simple cases, are too cumbersome to be easily generalized to more complex situations such as unconventional order parameters at finite temperatures and finite magnetic fields. Simplified semiclassical methods exist,[@millis1988; @cuevas2001; @kopu2004] but they suffer from their own limitations, for instance when one is dealing with anisotropic superconductors.[@kopnin2001] A general and simple microscopic theory of point-contact tunneling was thus clearly lacking, and is necessary in order to take into account some of the complications of unusual superconductivity. Providing such a theory is the purpose of this work. We use a Keldysh formalism in order to compute the full current-voltage characteristics and gain access to the effects of external magnetic fields, potential scattering barriers and finite temperatures on the transport properties of different junctions at arbitrary finite voltages. In contrast to previous implementations of this technique, which relied on the solution of difference equations,[@cuevas1996] we here obtain and diagonalize the full tunneling action for point-contact tunneling junctions involving normal-metal and superconductor leads. This allows one to easily incorporate complications such as triplet pairing in the leads, finite temperature and finite magnetic field. We explore in particular the physical properties of tunneling systems whose leads have triplet pairing order parameters.
Indeed, although the possibility of having triplet pairing was investigated[@anderson1961; @balian1963] soon after the BCS theory, and such an unconventional scenario was found at about the same time in the *p*-wave spin-triplet superfluid state of $^{3}$He,[@osheroff1972a; @osheroff1972b; @anderson1975] the quest to identify a *p*-wave charged superfluid proved much more challenging. A class of candidates for triplet pairing, though the evidence is as yet not completely conclusive, are the organic superconductors[@chemicalreview; @ishiguro2002] and the ruthenates.[@rice1995; @baskaran1996] There are also proposals of spin-triplet pairing phases for some heavy-fermion superconductors like UPt$_{3}$, but the issue remains more open in those cases.[@joynt2002] The organic compounds are the most interesting for us due to the quasi-one-dimensional nature of their normal phases, and also because there is currently considerable debate on the symmetry of their superconducting phase.[@ISCOM2003; @oh2004; @joo2004] For the ruthenates, on the other hand, the triplet pairing already seems backed up by a considerable amount of experimental evidence.[@rice2001; @mackenzie2003] To sort out this question of the symmetry of the order parameter, tunneling can thus be an invaluable tool. Recently, STM tunneling experiments were used to study the symmetry of the superconducting phase of Sr$_{2}$RuO$_{4}$ and other compounds. No such attempts have yet been made in the case of the quasi-one-dimensional organic salts, but efforts in this direction are under way.
Also recently, preliminary experiments involving junctions between two (different) Bechgaard salts were performed, and they showed a number of puzzling features, including a zero-bias conductance peak (anomaly) and zero excess current.[@ha2003] In that context, a precise theory of the particularities of point-contact tunneling involving spin-triplet superconductors, developed in the microscopic framework of the tunneling Hamiltonian models, was called for. Given the nature of these systems, it is difficult to perform phase-sensitive experiments such as the ones that were, for the cuprates, the smoking guns that fixed the symmetry of the order parameter. We show, however, that the tunneling spectrum has characteristic features, such as the magnetic field dependence, that can be used to unambiguously determine the order-parameter symmetry in these systems. A short account of part of the results of this paper was published previously.[@bolech2004]
The rest of the paper is organized as follows. In Sec. \[sec:model\] we present the model that we use to describe the point-contact junction geometry between either normal or superconducting leads. In Sec. \[sec:lap\] we work out a way of finding the tunneling characteristics using a non-equilibrium (Keldysh) formalism. This allows us to obtain the current-voltage characteristics at arbitrary voltage, temperature or magnetic field, for junctions with either normal, singlet or triplet superconducting leads. All technical details have been confined to these two sections, while the remaining sections deal with the physical consequences of our findings. Readers interested only in those can thus safely jump to Sec. \[sec:tunchar\], where the physics of such junctions is discussed in detail. Those results are put in the context of different experimental possibilities in Sec. \[sec:experiments\]. In particular, we discuss there the possibility of using tunneling experiments to probe the nature of the superconducting pairing in the organic superconductors. In Sec. \[sec:summary\] we close the paper with a general discussion of the implications of our results.
Model of the Point-contact Junction\[sec:model\]
================================================
Using a non-equilibrium Keldysh formalism we calculate the full current-voltage characteristics of different types of tunnel junctions where each side of the junction can be either a normal metal (N), a singlet (S) or a triplet (T) superconductor. We start from a tunneling Hamiltonian formulation, $$\begin{aligned}
H & =H_{1}+H_{2}+H_{\mathrm{tun}}\\
H_{\mathrm{tun}} & =\sum_{\ell,\ell^{\prime},\sigma}t_{\ell\ell^{\prime}}~\psi_{\ell\sigma}^{\dagger}\left( 0\right) \psi_{\ell^{\prime}\sigma}^{\phantom{\dagger}}\left( 0\right)\end{aligned}$$ The first two terms describe the two leads of the junction (superconducting or otherwise) and the third one models the tunneling processes in which an electron with spin $\sigma$ hops from lead $\ell^{\prime}$ into lead $\ell$. The tunneling matrix is $$t_{\ell\ell^{\prime}}=
\begin{pmatrix}
V_{1} & t^{\ast}\\
t & V_{2}
\end{pmatrix}$$ The diagonal terms, $V_{n}$, are local contact potential terms included for the sake of generality[@affleck2000] and the off-diagonal ones are the tunneling matrix elements, taken to be constant consistently with the assumption of a point contact. Since the number of particles in each lead is a conserved quantity in the absence of tunneling, we can define the current as proportional to the rate of change of the relative particle number and write[@cohen1962]$$I=\frac{e}{2}\left\langle \partial_{t}\left( N_{2}-N_{1}\right) \right\rangle =\frac{e}{2i}\left\langle \left[ H_{\mathrm{tun}},N_{1}-N_{2}\right] \right\rangle ~\text{.}\label{eq:currentdef}$$ Notice that the diagonal part of the tunneling matrix conserves particle numbers and will not contribute to the current.
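For completeness, and as a consistency check of our sign conventions (this intermediate step is not spelled out above; the remaining bookkeeping of factors is fixed later by the Keldysh averaging), the commutator in Eq. (\[eq:currentdef\]) can be evaluated directly. Using $[\psi_{2\sigma}^{\dagger}\psi_{1\sigma},N_{1}-N_{2}]=2\,\psi_{2\sigma}^{\dagger}\psi_{1\sigma}$, and noting that only the off-diagonal part of $t_{\ell\ell^{\prime}}$ contributes,

```latex
I=\frac{e}{2i}\sum_{\sigma}\left\langle 2t\,\psi_{2\sigma}^{\dagger}
\psi_{1\sigma}^{\phantom{\dagger}}-2t^{\ast}\,\psi_{1\sigma}^{\dagger}
\psi_{2\sigma}^{\phantom{\dagger}}\right\rangle
=\frac{e}{i}\sum_{\sigma}\left( t\left\langle \psi_{2\sigma}^{\dagger}
\psi_{1\sigma}^{\phantom{\dagger}}\right\rangle -t^{\ast}\left\langle
\psi_{1\sigma}^{\dagger}\psi_{2\sigma}^{\phantom{\dagger}}\right\rangle \right)
```

with all fields evaluated at $x=0$.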
To model the superconducting leads in calculations intended to capture the main features of point-contact transport in conventional superconductors, very simple models suffice to achieve even quantitative agreement with experiment. Contrary to the case of some planar junction experiments, dimensionality plays little or no role in the tunneling. Therefore one can use one-dimensional leads to carry out all the standard calculations. The situation becomes more complex in the case of unconventional superconductors, mainly because the anisotropic nature of the pair wave-function has to be taken into account when modeling the leads. The most conspicuous case is that of the cuprate compounds, for which the putative *d*-wave pairing cannot be modeled within a single-band one-dimensional lead. On the other hand, the organic superconductors that we are interested in are believed to have *p*-wave symmetry. Since both *s*-wave and *p*-wave symmetries can be modeled in single-band one-dimensional chains, we can conveniently set up a formalism that encompasses the two cases, as well as the normal state. In the following, we will consider a one-dimensional band with two Fermi points and expand the fermion fields around them in the conventional way,[@giamarchi2004] $$\psi_{\sigma}^{\phantom{\dagger}}\left( x\right) \approx e^{-ik_{F}x}\psi_{L\sigma}^{\phantom{\dagger}}\left( x\right) +e^{ik_{F}x}\psi_{R\sigma}^{\phantom{\dagger}}\left( x\right)$$ thus defining left and right moving fields (lead indexes were omitted here). Using these fields, and in the spirit of the BCS theory, we introduce the following four gap functions: $$\Delta_{a}\left( x\right) =\lambda_{a}~\left\langle \alpha~\psi_{L\bar{\alpha}}^{\phantom{\dagger}}\left( x\right) ~\sigma_{\alpha\beta}^{a}~\psi_{R\beta}^{\phantom{\dagger}}\left( x\right) \right\rangle$$ where Greek indexes are summed over, $a=0,\ldots,3$, and $\sigma_{\alpha\beta}^{0}$ is the identity matrix while the other three are the usual Pauli matrices. We use the notation $\bar{\alpha}=-\alpha$ with $\alpha\in\left( \downarrow,\uparrow\right) \equiv\left( -1,+1\right) $.[^1] The constants $\lambda_{a}$ would depend on the details of the microscopic pairing mechanism, about which we make no assumptions.[@ISCOM2003] With this definition $\Delta_{0}\left( x\right)$ is the spin-singlet order parameter, as in conventional superconductors, and the other three functions form a vector of spin-triplet order parameters,[@mackenzie2003] $\vec{\Delta}\left( x\right) =\Delta\left( x\right) \hat{d}\left( x\right)$. We use the approximation of dropping the spatial dependence of the order parameter and, directly in Fourier space, write the Hamiltonian for either of the two leads as $$K=\xi_{ck\sigma}\psi_{ck\sigma}^{\dagger}\psi_{ck\sigma}^{\phantom{\dagger}}-\left\{ \Delta_{a}\left[ \psi_{Rk\beta}^{\dagger}~\sigma_{\beta\alpha}^{a}~\alpha~\psi_{L\bar{k}\bar{\alpha}}^{\dagger}\right] +\mathrm{h.c.}\right\}$$ where $K=H-\mu N$ with $\mu$ the chemical potential of that lead. All the indexes are summed over; in particular, $c\in\left( L,R\right) \equiv\left( -1,+1\right)$ sums over the two possible chiralities and $\xi_{ck\sigma}=cv_{\mathrm{F}}k-\mu-\sigma h$ are the corresponding linear dispersions, shifted by the inclusion of the chemical potential and of a magnetic field along the $\hat{z}$-axis (for convenience we take $v_{\mathrm{F}}=1$). This is the natural extension to the triplet case of the usual pairing-approximation Hamiltonian of BCS theory. We remark that the fact that it does not conserve particle number is an artifact of the anomalous mean-field approximation behind its derivation and has no bearing on the operator definition of the current.
Local Action Approach\[sec:lap\]
================================
Within the extended-BCS framework, the Hamiltonian remains a quadratic form including ‘anomalous’ terms. To be able to write it down as a canonical quadratic form we introduce the following spinor notation: $$\Psi_{kn\sigma}^{\phantom{\dagger}}\left( \varpi\right) =
\begin{pmatrix}
\phantom{\sigma}~\psi_{Rk\sigma}^{\phantom{\dagger}}\left( \varpi\right) \\
\sigma~\psi_{L\bar{k}\bar{\sigma}}^{\dagger}\left( \bar{\varpi}\right)
\end{pmatrix}
\equiv
\begin{pmatrix}
\phantom{-}\psi_{Rk\uparrow}^{\phantom{\dagger}}\left( \varpi\right) \\
\phantom{-}\psi_{Rk\downarrow}^{\phantom{\dagger}}\left( \varpi\right) \\
\phantom{-}\psi_{L\bar{k}\downarrow}^{\dagger}\left( \bar{\varpi}\right) \\
-\psi_{L\bar{k}\uparrow}^{\dagger}\left( \bar{\varpi}\right)
\end{pmatrix}
\label{eqn:spinor}$$ (that we present directly in Fourier space). Here $k$ is the reduced momentum (after linearization was carried out) and $$\varpi=\omega-\mu\label{eqn:shiftfreq}$$ is the shifted frequency corresponding to a time evolution given by $K$ (cf. the discussion of tunneling given in Ref. \[\]); here the bars have the meaning of minus signs. Since we make an explicit distinction between chiralities, all the components of the spinor are independent. Using this basis the Hamiltonian can be written in matrix form,$$K_{\mathrm{sc}}=\Psi_{kn\sigma}^{\dagger}\left( \varpi\right)
\begin{bmatrix}
\xi_{k\sigma}\hat{\sigma}_{\sigma\tau}^{0} & -\hat{\Delta}_{\sigma\tau}^{\phantom{\dagger}}\\
-\hat{\Delta}_{\sigma\tau}^{\dagger} & -\xi_{k\bar{\sigma}}\hat{\sigma}_{\sigma\tau}^{0}
\end{bmatrix}
_{nm}\Psi_{km\tau}\left( \varpi\right)$$ Here we arranged the different components of the order parameter using the following matrix notation: $$\hat{\Delta}=
\begin{pmatrix}
\Delta_{\downarrow\uparrow} & \Delta_{\uparrow\uparrow}\\
\Delta_{\downarrow\downarrow} & \Delta_{\uparrow\downarrow}
\end{pmatrix}
\equiv\Delta_{a}\hat{\sigma}^{a}=
\begin{pmatrix}
\Delta_{0}+\Delta_{3} & \Delta_{1}-i\Delta_{2}\\
\Delta_{1}+i\Delta_{2} & \Delta_{0}-\Delta_{3}
\end{pmatrix}$$ We note that another convention, the one introduced in the work of Balian and Werthamer,[@balian1963] is related to ours via $\hat{\Delta}_{\mathrm{BW}}=\hat{\Delta}\cdot(i\hat{\sigma}^{y})$; the difference is rooted in a different definition of the spinor basis.
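The index bookkeeping in $\hat{\Delta}=\Delta_{a}\hat{\sigma}^{a}$ and the Balian-Werthamer relation are easy to verify numerically; the following is a minimal sketch (the helper name `gap_matrix` is ours, purely illustrative):

```python
import numpy as np

# Pauli matrices; sigma^0 is the 2x2 identity.
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def gap_matrix(d0, d1, d2, d3):
    """hat{Delta} = Delta_a sigma^a, summed over a = 0,...,3."""
    return d0 * s0 + d1 * s1 + d2 * s2 + d3 * s3

D = gap_matrix(0.1, 0.2, 0.3, 0.4)

# Component layout quoted in the text:
#   [[Delta_0 + Delta_3, Delta_1 - i*Delta_2],
#    [Delta_1 + i*Delta_2, Delta_0 - Delta_3]]
assert np.isclose(D[0, 0], 0.1 + 0.4)
assert np.isclose(D[0, 1], 0.2 - 0.3j)
assert np.isclose(D[1, 0], 0.2 + 0.3j)
assert np.isclose(D[1, 1], 0.1 - 0.4)

# Balian-Werthamer convention: hat{Delta}_BW = hat{Delta} . (i sigma^y).
D_bw = D @ (1j * s2)

# For a pure singlet (only Delta_0 nonzero) the BW gap matrix is
# antisymmetric, as expected for a singlet pair wave-function.
S_bw = gap_matrix(1.0, 0.0, 0.0, 0.0) @ (1j * s2)
assert np.allclose(S_bw, -S_bw.T)
```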
In the case of zero magnetic field, $K^{2}$ is block diagonal and one arrives at a closed solution for the quasiparticle excitation spectrum.[@anderson1975] In the presence of a magnetic field the calculations for a general order parameter are more involved. We adopt the convention of taking the quantization axis ($\hat{z}$) along the magnetic field direction and consider the cases of triplet order parameters parallel or perpendicular to the field. In both of these cases the Hamiltonian can be diagonalized via a canonical rotation (*i.e.* a Bogoliubov-Valatin transformation) that proceeds exactly as in the conventional *s*-wave case. Following the analogy further, the local Green functions for the leads can be written down immediately. For the case of a parallel order parameter (*i.e.* $\Delta_{1}=\Delta_{2}=0$) the non-zero matrix elements of the advanced and retarded Green functions are: $$\begin{aligned}
g_{11}^{r,a} & =g_{33}^{r,a}=\frac{2}{w}\frac{-\left( \varpi+h\pm
i\eta\right) }{\sqrt{\left\vert \Delta_{\downarrow\uparrow}\right\vert
^{2}-\left( \varpi+h\pm i\eta\right) ^{2}}}\\
g_{13}^{r,a} & =\left[ g_{31}^{r,a}\right] =\frac{2}{w}\frac{\Delta
_{\downarrow\uparrow}^{\left[ \ast\right] }}{\sqrt{\left\vert \Delta
_{\downarrow\uparrow}\right\vert ^{2}-\left( \varpi+h\pm i\eta\right) ^{2}}%
}\\
g_{22}^{r,a} & =g_{44}^{r,a}=\frac{2}{w}\frac{-\left( \varpi-h\pm
i\eta\right) }{\sqrt{\left\vert \Delta_{\uparrow\downarrow}\right\vert
^{2}-\left( \varpi-h\pm i\eta\right) ^{2}}}\\
g_{24}^{r,a} & =\left[ g_{42}^{r,a}\right] =\frac{2}{w}\frac{\Delta
_{\uparrow\downarrow}^{\left[ \ast\right] }}{\sqrt{\left\vert \Delta
_{\uparrow\downarrow}\right\vert ^{2}-\left( \varpi-h\pm i\eta\right) ^{2}}}\end{aligned}$$ with the upper (lower) sign corresponding to the retarded (advanced) ones. Here $w=4v_{\mathrm{F}}$ is an energy scale related to the Fermi velocity (or equivalently to the normal density of states at the Fermi level) and $\eta$ is a positive infinitesimal that regularizes the Green functions (sometimes kept finite to model the inelastic relaxation processes inside the leads). Analogously, for the case of a perpendicular order parameter (*i.e.* $\Delta_{0}=\Delta_{3}=0$), the non-zero matrix elements of the advanced and retarded Green functions are this time: $$\begin{aligned}
g_{11}^{r,a} & =g_{44}^{r,a}=\frac{2}{w}\frac{-\left( \varpi\pm
i\eta\right) }{\sqrt{\left\vert \Delta_{\uparrow\uparrow}\right\vert
^{2}-\left( \varpi\pm i\eta\right) ^{2}}}\\
g_{14}^{r,a} & =\left[ g_{41}^{r,a}\right] =\frac{2}{w}\frac{\Delta
_{\uparrow\uparrow}^{\left[ \ast\right] }}{\sqrt{\left\vert \Delta
_{\uparrow\uparrow}\right\vert ^{2}-\left( \varpi\pm i\eta\right) ^{2}}}\\
g_{22}^{r,a} & =g_{33}^{r,a}=\frac{2}{w}\frac{-\left( \varpi\pm
i\eta\right) }{\sqrt{\left\vert \Delta_{\downarrow\downarrow}\right\vert
^{2}-\left( \varpi\pm i\eta\right) ^{2}}}\\
g_{23}^{r,a} & =\left[ g_{32}^{r,a}\right] =\frac{2}{w}\frac{\Delta
_{\downarrow\downarrow}^{\left[ \ast\right] }}{\sqrt{\left\vert
\Delta_{\downarrow\downarrow}\right\vert ^{2}-\left( \varpi\pm i\eta\right)
^{2}}}\end{aligned}$$ The non-equilibrium formalism that we seek to implement in order to access the full I-V characteristics at arbitrary finite voltages requires the introduction of one more linearly independent Green function. From the expressions for the retarded and advanced functions, and using the assumption of thermal equilibrium of the leads, we can immediately construct the so-called Keldysh component[@keldysh1965] of the local lead Green function: $g_{ij}^{k}=\left( g_{ij}^{r}-g_{ij}^{a}\right) \tanh\left( \varpi/2T\right)$.
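As an illustrative numerical sketch of these local Green functions (zero field, one spin block; the helper names `g_ra` and `g_keldysh` are ours), one can check that $-\operatorname{Im}g^{r}$ reproduces the BCS quasiparticle density of states above the gap and build the Keldysh component from the thermal-equilibrium relation just quoted:

```python
import numpy as np

w = 4.0      # energy scale w = 4 v_F (with v_F = 1)
delta = 1.0  # gap magnitude
eta = 1e-9   # positive infinitesimal regulator

def g_ra(varpi, sign):
    """Diagonal local Green function; retarded for sign=+1, advanced for -1."""
    z = varpi + sign * 1j * eta
    return (2.0 / w) * (-z) / np.sqrt(delta**2 - z**2)

def g_keldysh(varpi, T):
    """Keldysh component from thermal equilibrium of the lead."""
    return (g_ra(varpi, +1) - g_ra(varpi, -1)) * np.tanh(varpi / (2.0 * T))

# Above the gap, -Im g^r gives the BCS density-of-states factor
# (2/w) * varpi / sqrt(varpi^2 - delta^2).
varpi = 2.0
bcs = (2.0 / w) * varpi / np.sqrt(varpi**2 - delta**2)
assert np.isclose(-g_ra(varpi, +1).imag, bcs, atol=1e-4)

# Inside the gap the spectral weight vanishes (up to the regulator eta).
assert abs(g_ra(0.5, +1).imag) < 1e-6

# g^r - g^a is purely imaginary, so g^k is too (here checked at T = 0.5).
assert abs(g_keldysh(2.0, 0.5).real) < 1e-6
```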
Before proceeding, we stop to comment on the case of two normal leads. The corresponding Green functions are obtained by using either of the two sets above and taking the limit $\Delta_{a}\rightarrow0$ $\forall a$. In this case it is a simple exercise to derive from Eq. (\[eq:currentdef\]) the well-known expression for the conductance of an N-N junction:$$G_{\mathrm{NN}}=\frac{e^{2}}{\pi\hbar}\alpha\quad\text{with\quad}\alpha=\frac{4t^{2}}{\left( 1+t^{2}\right) ^{2}}~\text{,}$$ where we reintroduced Planck’s constant and measured $t$ in units of $w$. This expression was first derived by Landauer and later extended and generalized in the works of Büttiker, Imry and others.[@imry1999] The constant $\alpha$ is called the *channel transparency* and takes values in the interval $\left[ 0,1\right]$. We now return to the case where at least one of the two leads is superconducting.
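The transparency formula is easy to check numerically. The sketch below (with illustrative helper names) also verifies that it coincides with the barrier parametrization $\alpha=1/(1+Z^{2})$ under the identification $Z=(1-t^{2})/2t$; this identity is our inference, not written in the text, but it reproduces exactly the $t\leftrightarrow(\alpha,Z)$ correspondences quoted later in the paper:

```python
import math

def transparency(t):
    """Channel transparency alpha = 4 t^2 / (1 + t^2)^2, with t in units of w."""
    return 4.0 * t**2 / (1.0 + t**2) ** 2

def conductance_NN(t, e=1.0, hbar=1.0):
    """G_NN = (e^2 / pi hbar) * alpha."""
    return (e**2 / (math.pi * hbar)) * transparency(t)

# alpha lies in [0, 1] and reaches the ballistic value 1 at t = 1.
assert transparency(0.0) == 0.0
assert math.isclose(transparency(1.0), 1.0)

# Parameter values used later in the paper:
#   t = 0.2 -> alpha ~ 0.15 (Z = 2.4),  t = 0.5 -> alpha = 0.64 (Z = 0.75).
assert math.isclose(transparency(0.2), 0.1479, abs_tol=5e-4)
assert math.isclose(transparency(0.5), 0.64)

# Equivalent barrier parametrization (our inference): alpha = 1/(1 + Z^2)
# with Z = (1 - t^2) / (2 t).
for t in (0.2, 0.5):
    Z = (1.0 - t**2) / (2.0 * t)
    assert math.isclose(transparency(t), 1.0 / (1.0 + Z**2))
```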
Given the local Green functions and the tunneling Hamiltonian, the simplest way to compute the characteristics of a junction is to use linear response and perturbation theory.[@wilkins1969; @mahan2000] A more rigorous approach makes use of non-equilibrium Green functions and treats the tunneling term to all orders; this is necessary in order to calculate the full I-V curve and give a quantitative account of its sub-gap structure even in the ballistic limit (*i.e.* for $\alpha\rightarrow1$). One past implementation of this program made clever use of the non-equilibrium Dyson equations and reduced the problem to the solution of a set of linear recursion relations.[@cuevas1996] Since we do not want to restrict ourselves to the *s*-wave case and to zero temperature and fields, we shall take a different route: we treat the local action and Green functions directly as matrices, gaining the convenience of a simpler implementation of multiband, multicomponent spinors, and deal with them numerically.
We notice that the lead Green functions can be inverted in closed analytical form to obtain the corresponding local lead actions. This procedure implies the assumption of fast relaxation rates,[@wolf1989] which is consistent with the point-contact geometry of the junction. Namely, working in a Keldysh-extended Nambu-Eliashberg spinor basis of symmetric and antisymmetric combinations of forward and backward time paths (see Ref. \[\]), the local action for a single lead can be written as $$S_{\ell}=\int\frac{d\omega}{2\pi}\Psi_{\kappa n,\ell}^{\dagger}\left[ \mathcal{A}_{\ell}\right] _{\kappa n,\kappa^{\prime}n^{\prime}}\Psi_{\kappa^{\prime}n^{\prime},\ell}^{\phantom{\dagger}}$$ where $n$ labels the different components of the four-spinors as introduced in Eq. (\[eqn:spinor\]) and $\kappa$ is the index for the two (symmetric and antisymmetric) Keldysh components. The matrix representation of the spinorial action density is given by $$\mathcal{A}_{\ell}\equiv
\begin{pmatrix}
\hat{0} & \hat{g}^{a}\\
\hat{g}^{r} & \hat{g}^{k}
\end{pmatrix}
^{-1}=
\begin{pmatrix}
-\left[ \hat{g}^{r}\right] ^{-1}\hat{g}^{k}\left[ \hat{g}^{a}\right] ^{-1} & \left[ \hat{g}^{r}\right] ^{-1}\\
\left[ \hat{g}^{a}\right] ^{-1} & \hat{0}
\end{pmatrix}$$ where $\hat{g}^{r,a,k}$ are the matrices in the four-spinor basis whose nonzero components were given above (for the two orientations of the order parameter that we consider, the inverses of $\hat{g}^{r,a}$ are easy to calculate in closed form). By combining these actions with the spinorial matrix representation of the tunneling Hamiltonian ($\mathcal{H}_{\mathrm{tun}}$), written in a two-lead Keldysh-extended Nambu-Eliashberg spinor basis, one can assemble the full non-equilibrium action matrix density for the junction$$\mathcal{A}=\left[ \mathcal{A}_{\ell=1}\oplus\mathcal{A}_{\ell=2}\right]
-\mathcal{H}_{\mathrm{tun}}~\text{.}\label{eqn:fullaction}$$ While carrying out this construction, special attention must be paid to the fact that the shifted frequencies \[see Eq. (\[eqn:shiftfreq\])\] will have different reference levels when there is a relative bias applied to the leads. Positively and negatively shifted frequencies in each lead are related by the coherent pairing processes in the superconductors; this is reflected in the choice of frequency pairs in the spinor basis. When two superconductors with different chemical potentials are put into contact, the tunneling Hamiltonian connects real (*i.e.* *unshifted*) frequencies. Thus, at finite voltages, pairing and tunneling together create an infinite set of related frequencies that is at the heart of the multiparticle tunneling processes mediated by the so-called Andreev reflections; this is illustrated in Fig. \[andreev\].
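The closed-form block inverse quoted above for $\mathcal{A}_{\ell}$ can be verified numerically with random invertible blocks; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4  # four-spinor dimension

def rand_c(n):
    """A random complex n x n matrix (invertible with probability one)."""
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

ga, gr, gk = rand_c(n), rand_c(n), rand_c(n)

# Keldysh-space Green function and the claimed closed-form inverse.
G = np.block([[np.zeros((n, n)), ga],
              [gr, gk]])
gr_inv, ga_inv = np.linalg.inv(gr), np.linalg.inv(ga)
A = np.block([[-gr_inv @ gk @ ga_inv, gr_inv],
              [ga_inv, np.zeros((n, n))]])

# A is indeed the two-sided inverse of G.
assert np.allclose(A @ G, np.eye(2 * n))
assert np.allclose(G @ A, np.eye(2 * n))
```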
[Figure \[andreev\]: andreev.eps]
To each value in the frequency window defined by the chemical potentials in the two leads, one such set of ‘entangled’ frequencies can be assigned. These sets are independent and the action is block diagonal between different ones. Discretizing the frequencies in this window automatically defines a discretization of the ‘whole’ frequency space. We proceed in this way and deal with one such set of frequencies at a time. Since these sets are infinite, we truncate their hierarchies at some distance from the central frequency window. This is equivalent to introducing a *soft* limit in the number of allowed Andreev reflections: up to some fixed number ($N_{\mathrm{A}}$) they are fully taken into account and then they are gradually suppressed until twice that number is reached. This is a natural and consistent cut-off scheme at any finite voltage, since the presence of a growing frequency denominator makes the higher contributions less and less important regardless of the value of $\alpha$. It is also clear what the limitations of the approach are: as the difference in chemical potentials decreases, the denominators grow more and more slowly and a larger number of Andreev reflections is required in order to achieve the same accuracy.
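The truncation scheme can be sketched as follows; the uniform ladder spacing and the linear suppression profile between $N_{\mathrm{A}}$ and $2N_{\mathrm{A}}$ are our illustrative choices (the text does not specify the exact bookkeeping or profile):

```python
def frequency_ladder(omega, e_v, n_andreev):
    """One set of 'entangled' frequencies attached to omega, with the soft
    cutoff described in the text: contributions are kept fully up to
    n_andreev shifts and then suppressed until 2*n_andreev is reached.
    The uniform spacing e_v and the linear fade are illustrative choices."""
    ladder = []
    for n in range(-2 * n_andreev, 2 * n_andreev + 1):
        if abs(n) <= n_andreev:
            weight = 1.0
        else:
            weight = 2.0 - abs(n) / n_andreev  # linear fade to 0 at 2*N_A
        ladder.append((omega + n * e_v, weight))
    return ladder

ladder = frequency_ladder(0.1, 0.5, n_andreev=3)
assert len(ladder) == 4 * 3 + 1          # rungs from -2*N_A to +2*N_A
assert ladder[0][1] == 0.0               # fully suppressed at the hard edge

# Smaller e_v means the weights decay over a narrower frequency range,
# so more rungs (larger n_andreev) are needed for the same accuracy.
```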
The implementation of the described scheme introduces one more complication. In the spinor basis we adopted, only chirality-conserving tunneling processes can be written in matrix form. To overcome this problem we introduce a second *mirrored* spinor basis with the chiralities inverted (as in Eq. (\[eqn:spinor\]) but interchanging $R\leftrightarrow L$). Using two copies of the spinor space, the full tunneling Hamiltonian (that is, including chirality non-conserving processes) can be written as a matrix and the *whole* frequency space for *both* chiral species is considered (let us stress that no Hilbert space doubling takes place). Inverting the action matrix density thus constructed \[Eq. (\[eqn:fullaction\])\] using standard numerical methods, we can obtain frequency densities for the different current harmonics (constructed out of the Keldysh components of lead-mixing Green functions). Here we will concentrate on the dc component. Finally, the current is computed by integrating its density over the full frequency axis,$$I=\frac{et}{2i}\sum_{\sigma}\int\frac{d\omega}{2\pi}\left\langle \psi_{2,\sigma}^{\dagger}\psi_{1,\sigma}^{\phantom{\dagger}}-\psi_{1,\sigma}^{\dagger}\psi_{2,\sigma}^{\phantom{\dagger}}\right\rangle _{\mathrm{kel}}~\text{.}$$
The practical implementation of this numerical scheme is straightforward and allows one to consider the (combined) effects of finite temperature, applied magnetic fields, contact potentials in the junction, and spin-flip tunneling or spin-flip scattering processes[@grimaldi1997] in the leads. It is also possible to compute the ac response. Here our primary interest is in comparing singlet and triplet superconductor junctions and how they respond differently to an external field or to temperature; additional complications will be discussed elsewhere. Our numerical scheme is not well suited for studying the limit $V\rightarrow0$ (in particular, the combination $\alpha\thicksim1$ with $V\thicksim0$ is computationally the most expensive one), but that limit can nevertheless be approached analytically. All other regimes can be solved with modest computational effort, and the algorithm is easily parallelizable. Moreover, we shall show that the finite-bias features are the ones that might provide useful signatures to further establish the spin-triplet scenario.
Tunneling Characteristics\[sec:tunchar\]
========================================
To discuss and compare the I-V characteristics for different types of junctions, we choose a convenient set of parameters that clearly displays the different features. For the tunneling overlap integral we choose the values $t=0.2$ and $t=0.5$ (corresponding, in the notation of Ref. \[\], to $\alpha\simeq0.15$ or $Z=2.4$ and to $\alpha=0.64$ or $Z=0.75$, respectively), and when there is a magnetic field we fix its value to $h=0.2$ in units of $\Delta$ (by $\Delta$ we mean the magnitude of the singlet gap, $\Delta_{0}$, or of the triplet vector order parameter, depending on the case; notice that we absorbed Bohr’s magneton and the gyromagnetic factor in the definition of the magnetic field). These values are larger than, for instance, those in the most typical STM tunneling experiments, except for the ones engineered expressly to seek large values of $\alpha$,[@scheer1998] but they have the virtue of making the different features evident, including the Andreev gap structure (see below). Except when indicated, we show curves for the dc response in the limit of vanishing temperatures. For the truncation procedure we have taken $N_{\mathrm{A}}=3$ and $N_{\mathrm{A}}=5$ for the cases of $t=0.2$ and $t=0.5$, respectively (and verified that larger values produce, for the chosen set of parameters, identical curves). The discretization used on the horizontal axis is finer than $\delta V=0.025$ $\Delta/e$ in all cases.
[Figure \[nsgraph2\]: NSgraph2.eps]
We now review the different pairing-symmetry scenarios. Let us start with the case of normal-metal–superconductor junctions. We show in Fig. \[nsgraph2\]-(a) typical curves for an N-S junction (*i.e.* a point-contact junction between a normal metal and a conventional singlet-pairing superconductor). The diagonal straight line is the N-N characteristic, given as a reference. The solid lines correspond to the N-S junction in zero field and the dashed line is for one of the junctions (the less transparent one) in the presence of a magnetic field. The effect of the magnetic field is to produce what would be seen as a Zeeman splitting of the differential conductance peak (*i.e.* the peak in the curve of $dI/dV$ *vs*. $V$). Notice the sub-gap shoulder on the I-V curve when $eV<\Delta$ (for instance in the zero-field case); its origin is in the coherent Andreev processes that take place at the junction contact. Next we show in Fig. \[nsgraph2\]-(b) a typical curve for what we call an N-T junction (*i.e.* a junction between a normal metal and an unconventional triplet-pairing superconductor). The solid lines correspond to the N-T junction in zero field and the dashed line is for the $t=0.2$ junction in the presence of a magnetic field aligned with the vector order parameter $\vec{\Delta}$. If one considers a magnetic field that is perpendicular to the order parameter ($\vec{h}\perp\vec{\Delta}$), one finds it has no effect on the I-V characteristic, which remains identical to the one for the zero-field case (note that in the case of the N-S junction the orientation of the field was immaterial). Notice also the absence of a sub-gap shoulder on the I-V curve. This absence is caused by the odd real-space symmetry of the superconductor (*p*-wave pairing): Andreev processes with opposite chiralities interfere destructively and exactly cancel each other. As a result, the curves are exactly identical to those computed with a semiconducting band model that ignores Andreev scattering (to be contrasted with the non-trivial results in this respect that will be shown momentarily for junctions involving two different-symmetry superconductors).
[Figure \[stgraph2\]: STgraph2.eps]
Let us now examine the case of junctions in which both sides are superconducting. In Fig. \[stgraph2\] we display typical curves for S-S junctions (both sides are conventional spin-singlet superconductors) and S-T junctions (one of the sides is a spin-triplet superconductor). The straight dotted line is the N-N characteristic, taken as a reference as before. The remaining dotted lines are the I-V curves of S-S junctions, which show all the standard features already well documented in the literature.[@octavio1983; @cuevas1996] For the purpose of later comparison, we note here the sizeable currents for voltages $eV>2\Delta$ (the value of the gap is taken to be the same on both sides of the junction), and the ‘sub-gap’ shoulder with Andreev steps at $eV=2\Delta/n$ (with $n=1,2,3,\ldots$). We also remind the reader that this curve is, when orbital effects can be ignored, not sensitive to applied magnetic fields. The remaining curves (dashed and solid lines) correspond to S-T junctions with different tunneling matrix element strengths, with and without magnetic field (respectively). The solid lines are insensitive to the orientation of the vector order parameter on the triplet-pairing side of the junctions, and the current amplitude is found to be systematically smaller than in the case of the respective S-S junctions. Remarkably, the ‘sub-gap’ structure shows only two steps (at voltages given by $n=1,2$) and the current is zero when $eV<\Delta$ (if the magnitudes of the gaps on the spin-singlet and spin-triplet sides of the junction are different, then the zero-current condition is $eV<\Delta_{\mathrm{Triplet}}$, where $\Delta_{\mathrm{Triplet}}$ is the magnitude of the vector order parameter on the spin-triplet side of the junction). Concerning the effects of an applied magnetic field, the curves remain unchanged if the field is applied parallel to the direction of the vector order parameter, but show instead a Zeeman effect if the field is perpendicular to it (dashed line). This is in contrast with the case of N-T junctions, for which the Zeeman effect is expected for fields $\vec{h}\parallel\vec{\Delta}$.
In the figure inset we display curves for S-S junctions at finite temperatures. In order to render the different features simultaneously visible, we ‘push’ the temperature up to be equal to $\Delta$ (with $k_{\mathrm{B}}=1$). Let us denote by $\Delta$ the average gap value ($2\Delta=\Delta_{1}+\Delta_{2}$), which we keep using to define the unit in which we measure the voltage, normalized current, etc. The dashed line is the finite-temperature I-V for the $t=0.2$ junction between two identical superconductors ($\Delta_{2}=\Delta_{1}$), whereas the solid lines correspond to similar junctions with $\Delta_{2}=3~\Delta_{1}$ or $\Delta_{2}=37/3~\Delta_{1}$ and the same transparency (during this discussion we assume $\Delta_{2}\geq\Delta_{1}$). Besides the standard quasiparticle tunneling threshold ($eV=2\Delta$) one clearly sees in both solid curves the step at $eV=\Delta_{2}$ corresponding to the first term of the so-called *even series* ($eV=2\Delta_{\ell}/m$ with $\ell=1,2$ and $m=2,4,\ldots$). The steps corresponding to the *odd series* ($eV=2\Delta/m$ with $m=3,5,\ldots$) are not visible in this plot, but we verified that we observe them at low temperatures in junctions with $\Delta_{2}\gtrsim\Delta_{1}$. This current step structure lacks temperature or magnetic field dependence (in accordance with experimental observations). The other prominent feature of the solid curves is the rounded cusp at $eV\approx\Delta_{2}-\Delta_{1}$. This is a thermally activated feature that appears when the upper gap edges at both sides of the junction are aligned. The rounding of the cusp has a similar origin as the rounding of the quasiparticle threshold: both are due to higher order multiparticle processes. One new observation that we made is that the position of the thermal cusp is not exactly $\Delta_{2}-\Delta_{1}$, but is shifted towards lower voltages. This is again a result of taking into account higher order Andreev processes and is made more evident by our choice of parameters; for lower temperatures and less transparent junctions this correction is typically very small. We also find that the steps of both the even and the odd series that fall to the left of the thermal cusp are usually washed away by its tail; we find this to be consistent with available experimental results. We expect the different detailed features corresponding to dissimilar gaps and finite temperatures to be accessible to current state-of-the-art experiments; we will comment on this in the next section.
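For quick reference, the feature positions quoted above can be collected programmatically. The helper below is a hypothetical convenience sketch (not part of the original calculation), with gaps and voltages in units where $e=1$:

```python
def sgs_features(delta1, delta2, m_max=9):
    """Catalog the sub-gap structure (SGS) voltage positions for an
    S-S junction with gaps delta1 <= delta2 (voltages in units of e=1)."""
    dbar = 0.5 * (delta1 + delta2)  # average gap: 2*Delta = delta1 + delta2
    return {
        # standard quasiparticle tunneling threshold, eV = 2*Delta
        "quasiparticle_threshold": delta1 + delta2,
        # thermally activated cusp near eV = delta2 - delta1
        "thermal_cusp": delta2 - delta1,
        # even series: eV = 2*Delta_l/m with l = 1,2 and m = 2,4,...
        "even_series": sorted({2.0 * d / m for d in (delta1, delta2)
                               for m in range(2, m_max + 1, 2)}, reverse=True),
        # odd series: eV = 2*Delta/m with m = 3,5,... (Delta = average gap)
        "odd_series": [2.0 * dbar / m for m in range(3, m_max + 1, 2)],
    }
```

For identical gaps ($\Delta_{1}=\Delta_{2}=\Delta$) the even and odd series merge into the familiar $eV=2\Delta/n$ sequence of Andreev steps mentioned for the S-S curves.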
As an aside, let us comment on the effect of local contact potential terms in the tunneling matrix. They uniformly suppress the dc current amplitude but have no other effect on the shape of the I-V characteristics. This is true regardless of the pairing symmetry of the superconductors forming the junction; in particular, they do not cause the appearance of mid-gap states in the case of triplet pairing (neither for the N-T and S-T junctions nor for the T-T junctions discussed below).
![Fig. \[ttgraph2\]: I-V characteristics of T-T junctions.](TTgraph2.eps)
Finally, the only remaining case to consider is that of junctions in which both sides are spin-triplet superconductors. Such a case is exemplified in the curves of Fig. \[ttgraph2\]. Andreev processes with triplet pairing symmetry, for tunneling through a single-mode contact, interfere destructively and the current remains zero up to voltages larger than $eV=2\Delta$, when quasiparticle tunneling becomes allowed. The lower solid line corresponds to $t=0.5$ and the upper one to $t=0.2$; the inversion of the order is due to the fact that, for this range of parameters, the current at fixed voltage grows more slowly than in the case of normal junctions (N-N). As in the S-S case, both sides of the junction react identically to applied magnetic fields and no net effects are therefore visible in the current-voltage characteristics.
The curves in the figure inset are the same as in the inset of the previous figure but for the case of spin-triplet pairing symmetry (on both sides of the junction). Both the quasiparticle tunneling threshold and the thermal cusp remain sharp since higher order processes interfere destructively and no rounding takes place. Another consequence of this is that the position of the thermal cusp is exactly $eV=\Delta_{2}-\Delta_{1}$ and no shift is observed. The sub-gap part of the curves is smooth, and its height and shape are governed by thermal excitations; the step structure of the even and odd series is absent. In this case, too, magnetic fields have no direct effects.
Experimental consequences\[sec:experiments\]
============================================
A first set of applications concerns atomic contacts. For the case of normal or singlet superconducting leads with identical gaps, and zero temperature and magnetic field, our results are in full agreement with previous studies of such systems.[@cuevas1996] For the case of two different gaps shown in the inset of Fig. \[stgraph2\], our theory correctly reproduces the different steps as discussed in the previous section. Another prominent feature of such curves is the thermally activated rounded cusp at $eV\approx\Delta_{2}-\Delta_{1}$. The rounding is due to higher order multiparticle processes, and a new prediction is that the position of the thermal cusp is not exactly $\Delta_{2}-\Delta_{1}$, but is shifted towards lower voltages. Such a shift from the naive density-of-states answer could in principle be checked directly in atomic contacts. We also find that the steps of both the even and the odd series that fall to the left of the thermal cusp are usually washed away by its tail; we find this to be consistent with available experimental results.[@wolf1989] Such features corresponding to dissimilar gaps and finite temperatures should be accessible to current state-of-the-art experiments. For instance, the experiments of Ref. \[\] could be attempted using an STM tip as before but to probe into a -doped sample. will act as a magnetic impurity and decrease the value of the gap; such a setup would correspond to the situation of similar but not identical gap parameters with a doping-controllable difference. It would be a way to try to observe the ‘splitting’ of the even series in single-point contacts. Other setups could also be envisaged based on STM techniques or on pressed crossed wires.
The main application of our results, however, concerns the use of tunneling with triplet superconductors.[@bolech2004] In that case the most direct experimental realization is in organic superconductors.[@chemicalreview] The experiments show that for magnetic fields along the direction of the conducting chains ($\mathbf{a}$ crystalline axis) the upper critical field is paramagnetically limited. If such systems are indeed triplet superconductors, this would correspond, following our notation, to a vector order parameter aligned with the field ($\vec{h}\parallel\vec{\Delta}$).[@lebed2000] With this geometry a Zeeman splitting of the differential conductance peak, similar to that in conventional superconductors, should be observed in a tunneling experiment. As the field is rotated the splitting would be suppressed, and for a magnetic field oriented parallel to the $\mathbf{b}^{\prime}$ crystalline axis there should be no Zeeman effect (accompanied by the possibility of applying large fields that are not paramagnetically limited). The disappearance of the splitting even as the field is being increased would constitute a clear signature of spin-triplet superconductivity. The main difficulties of such an experiment would be the setup of point contacts and the resolution required to observe the Zeeman effect. On the first point, one possibility would be to use STM setups with ‘thin tips’. On the second point, since the critical temperature of these organic salts is relatively low, the experiment could be done with moderate fields that would produce splittings that are a substantial fraction of the superconducting gap. The linearity of the magnetic field dependence of these splittings, a signature of the Zeeman effect, could be accurately established using Fourier analysis techniques.[@hertel1985]
Similarly to the case of N-T junctions, we can envisage using the Zeeman response of S-T junctions as a direct probe of spin-triplet order. If, for instance, a magnetic field is applied along the $\mathbf{b}^{\prime}$ crystalline axis of $_{2}$$_{6}$, we predict a Zeeman splitting of the main differential conductance peak. This would also constitute a clear sign of unconventional superconductivity since such an effect does not take place for standard BCS superconductors. The $\mathbf{b}^{\prime}$ direction is the one along which the upper critical field is not paramagnetically limited, so relatively large fields could be applied in order to obtain a clear signal (as the field alignment changes the splitting disappears). To afford large fields one would need to use on the ‘conventional’ side of the junction a compound with a relatively high critical temperature (as compared with that of $_{2}$$_{6}$). In this respect one has the bonus that, since the required setup should be a point contact, superconductivity might survive in the contact-neck region up to fields much in excess of the bulk value of $H_{\mathrm{c2}}$ (rather approaching $H_{\mathrm{p}}^{\mathrm{BCS}}$ for that material).[@clogston1962; @suderow2002] Another advantage of the two-superconductor setup is that the levels of noise are usually smaller,[@eskildsen2003] allowing a better definition of the differential conductance signal from which the Zeeman splitting is going to be read off.
Similar considerations could also be made for those layered compounds that are believed to be triplet superconductors.[@mineev2002; @ishiguro2002] In that case the critical magnetic field is not paramagnetically limited when the applied field is oriented parallel to the superconducting planes. Among these compounds $_{2}$$_{4}$ is the best studied one so far, but only a few tunneling experiments were performed,[@jin2000; @laube2000; @mao2001; @upward2002; @kugler2003] and none so far with high resolution and in the presence of an applied external magnetic field (see, though, Refs. \[\]). One of the conspicuous features observed in some of these experiments is the presence of a ‘zero bias anomaly’ (ZBA) in the differential conductance. Its explanation is still a matter of debate, but it seems to require extended contact interfaces and momentum-dependent order parameters (to include the effect of ‘zero energy states’ at the interfaces, extensions to our scheme would be required, possibly incorporating certain aspects of the calculations already done for planar junctions[@yamashiro1997; @honerkamp1998; @asano2003]). Our general findings about the effect of magnetic fields should, however, apply, since they refer to effects to be measured at voltages of the order of the superconducting gap. It is intriguing to notice that in the ‘point-contact’ experiment of Ref. \[\] two types of spectra are measured: with and without a ZBA in the differential resistance. One might speculate that what changes between the different samples should be nothing other than the effective size, potential barrier, and geometry of the contact (experiments in other compounds indicate that that might be enough to give rise to zero-voltage features; cf. Ref. \[\]).
In particular, if that is the case, at least those $dV/dI$ curves with no ZBA should, according to our calculations, show no Zeeman splitting when a field is applied parallel to the planes; in contrast to what is expected for BCS superconductors. Two different groups reported that further point contact and STM tunneling experiments on $_{2}$$_{4}$ in a magnetic field are underway.[@laube2000; @kambara2003]
Summary and Closing Remarks\[sec:summary\]
==========================================
Summarizing, we have shown how the full I-V characteristics of point-contact junctions can be accurately studied using a local action approach in the context of the Keldysh formalism. Our formalism allows one to treat both normal and superconducting (singlet and triplet) leads, and to take into account the effects of finite magnetic field and temperature. In particular, we have shown that point-contact tunneling involving unconventional superconductors with spin-triplet pairing displays interesting characteristic features. Unlike the case of conventional superconductors, these show quite different characteristics depending on whether the junctions are planar[@sengupta2001] or point-contact like.[@bolech2004] The Zeeman response to an external magnetic field is such that it allows for the identification of triplet phases and might be relevant for future experiments. The prediction of a truncated sub-gap structure in point-contact S-T junctions is also very interesting, but experiments to test this are much harder to carry out. That kind of detailed experiment is, however, possible for conventional superconductors, and we believe it could be exploited to look for some as yet poorly tested predictions of the theory. For instance, the experiments of Ref. \[\] could be attempted using an STM tip as before but doping the sample with ; it would be a way to try to observe the ‘splitting’ of the even series in single-point contacts.
Besides the different additional effects on the tunneling characteristics that we discussed here (the effects of fields, temperature, and contact potentials), there are others that can also easily be taken into account, such as spin-flip tunneling processes or temperature gradients. These effects will be relevant in the study of junctions involving ferromagnets (of possible relevance in the context of spintronics; cf. Ref. \[\]) or in precision studies related to the renewed interest in the use of microjunctions for *on-chip* thermometry and eventually cryogenics.[@pekola2004] A remaining challenge, however, is how to extend our formalism to include the physics responsible for producing ZBAs, in order to further our understanding of tunneling experiments in layered unconventional superconductors and planar junctions. Such extensions should seek to describe not only the point-contact limit but also the planar interface case. This is one of the ingredients necessary for a realistic description of layered materials like the ruthenates. In that respect, the interplay with the advances already made using semiclassical methods will constitute not only a check of the approximations made in the latter but also a way of finding efficient computational schemes for more complicated scenarios that might incorporate, for instance, the two-band nature of certain compounds. On the other hand, in the context of the quasi-one-dimensional organic conductors, such developments should help to go beyond the one-dimensional approximation.
We would like to thank Ø. Fischer, M. Eskildsen, M. Kugler and G. Levy for discussions about tunneling and STM. We also thank Y. Maeno and M. Sigrist for discussions about tunneling into triplet superconductors and ruthenates in particular. This work was done under the auspices of the Swiss National Science Foundation and its MaNEP program.
[^1]: Notice that the object $\alpha\psi_{L\bar{\alpha}}$ transforms as the complex conjugate representation of the fundamental representation of . In other words, it has the same transformation properties as $\psi_{L\alpha}^{\dagger}$.
---
abstract: 'Self-replicating systems based on information-coding polymers are of crucial importance in biology. They also recently emerged as a paradigm in material design on nano- and micro-scales. We present a general theoretical and numerical analysis of the problem of spontaneous emergence of autocatalysis for heteropolymers capable of template-assisted ligation driven by cyclic changes in the environment. Our central result is the existence of a first-order transition between the regime dominated by free monomers and one with a self-sustaining population of sufficiently long chains. We provide a simple, mathematically tractable model supported by numerical simulations, which predicts the distribution of chain lengths and the onset of autocatalysis in terms of the overall monomer concentration and two fundamental rate constants. Another key result of our study is the emergence of a kinetically-limited optimal overlap length between a template and each of its two substrates. Template-assisted ligation allows for heritable transmission of the information encoded in chain sequences, thus opening up the possibility of long-term memory and evolvability in such systems.'
author:
- 'Alexei V. Tkachenko'
- Sergei Maslov
title: 'Spontaneous emergence of autocatalytic information-coding polymers'
---
Introduction
============
Life as we know it today depends on the self-replication of information-coding polymers. Their emergence from non-living matter is one of the greatest mysteries of fundamental science. In addition, the design of artificial self-replicating nano- and micro-scale systems is an exciting field with potential engineering applications [@Chaikin; @Brenner]. The central challenge in both of these fields is to come up with a simple physically-realizable system obeying the laws of thermodynamics, yet ultimately capable of Darwinian evolution when exposed to non-equilibrium driving forces. Chemical networks of molecules engaged in mutual catalysis are a popular candidate for such a system[@Eigen; @Dyson; @Kauffman; @Jain]. In fact, the most successful experimental realization of an autonomous self-replicating system involves information-coding heteropolymers: a set of mutually catalyzing RNA-based enzymes (ribozymes)[@Szostak_1989] that show evolution-like behavior[@Joyce]. This is viewed as major evidence for the RNA-world hypothesis (see, e.g., Refs. ).
The ribozyme activity requires relatively long polymers made of hundreds of nucleotides with carefully designed sequences. Polymers of sufficient length can be generated e.g. by traditional reversible step-growth polymerization that combines random concatenation and fragmentation of polymer chains. Furthermore, the polymer length in this type of process can be drastically increased in non-equilibrium settings such as temperature gradients [@Braun]. However, even when long chains are formed, the probability of the spontaneous emergence of a sequence with an enzymatic activity remains vanishingly small, due to the exponentially large number of possible sequences.
Thus there is a strong need for a mechanism that combines the emergence of long chains with a dramatic reduction of the informational entropy of the sequence population. A promising candidate for such a mechanism is provided by template-assisted ligation. In this process pairs of polymers are brought together by hybridization with a complementary polymer chain serving as the template and are eventually ligated to form a longer chain. Unlike the non-templated reversible step-growth polymerization used in Ref. , this mechanism naturally involves information transmission from a template to the newly-ligated chain, thus opening an exciting possibility of long-term memory and evolvability. An early model involving template-assisted polymerization was proposed by P. W. Anderson and colleagues[@Anderson_Stein; @Anderson]. It has also been the subject of more recent experimental and theoretical studies[@Szostak_1996; @Derr]. In particular, in Ref. it has been demonstrated that, for a specific choice of parameters, a combination of non-templated and template-assisted ligation can lead to the emergence of long (around 100 monomers) oligonucleotides.
In this work we carried out a theoretical and numerical analysis of a generic system in which polymerization is driven solely by template-assisted ligation. Unlike in models with a significant contribution of non-templated concatenation, the emergence of long chains in our system represents a non-trivial chicken-and-egg problem. Indeed, the formation of long chains depends on the presence of other chains serving as templates. In our model the “primordial soup” of monomers is driven out of equilibrium by cyclic changes in physical conditions such as temperature, salt concentration, pH, etc. (see Fig. 1ab). Polymerization occurs during the “night” phase of each cycle when the existing heteropolymers serve as templates for the formation of progressively longer chains. During the “day” phase of each cycle all multi-chain structures separate and the system returns to the state of dispersed individual polymers.
We consider a general case of information-coding heteropolymers composed of $z$ types of monomers capable of making $z/2$ mutually complementary pairs. For example, RNA is made of $z=4$ monomers forming $2$ complementary pairs $A-U$ and $C-G$ responsible for the double-stranded RNA structure. Similarly, we assume that hybridization between complementary segments of our generalized polymers results in the formation of a double-stranded structure. During the night phase of each cycle chains form a variety of hybridized complexes. Ligation takes place in a special type of such complexes shown in Fig. 1b. The end groups of two “substrate” chains $S_1$ and $S_2$ are positioned next to each other by virtue of hybridization with a third, “template” polymer $T$. Once the substrates are properly positioned, the new covalent bond joining them together is formed at a constant rate. We further assume that each of the intra-polymer bonds can spontaneously break at a constant rate, making the overall fragmentation rate of a chain proportional to its length.
![*[ **The schematic representation of fundamental processes in our system.** ]{}a) The “day” phase during which all hybridized complexes between heteropolymers dissociate and ligation completely stops, while fragmentation continues in all phases of the cycle. b) The “night” phase during which some polymer chains hybridize and then undergo template-assisted ligation. The ends of substrates $S_1$ (green) and $S_2$ (red) hybridized with a template $T$ (purple) are ligated at a constant rate with the newly formed bond shown in blue. c) If the “night” phase is sufficiently long heteropolymers enter the aggregation regime in which ligation effectively stops.* []{data-label="fig1"}](fig1.eps){width="1\columnwidth"}
If one were to leave a mixture of polymers in the night phase long enough, hybridization of multiple chains would result in the formation of a gel-like aggregate shown in Fig. 1c, effectively stopping ligation. During the day phase of the cycle (Fig. 1a) all structures of hybridized polymers dissociate while keeping their stronger internal bonds intact. Thus the day phase plays the role of a “reset”, returning the system to a mixture of free polymers ready for the next night phase.
One of the major assumptions used in our study is the Random Sequence Approximation (RSA), according to which each monomer in every chain can be of any type with equal probability $1/z$. On the one hand, the RSA greatly simplifies the problem and allows us to get a concise analytical solution. On the other hand, in order to understand the transmission of sequence-encoded information and the long-term memory in our system this approximation needs to be relaxed in future studies.
Results
=======
Optimal overlap length $k_0$
----------------------------
In general the interaction strength between any two chain segments increases with the overlap length $k$ of the region over which they are complementary to each other. Here we assume a simple linear relationship in which the binding free energy is given by $\Delta G_0+k \cdot \Delta G$, where $\Delta G$ is the (negative) binding free energy between two complementary monomers, while $\Delta G_0$ is the initiation free energy.
The equilibrium hybridization probability emerges out of the competition between two opposing kinetic processes of association and dissociation. On the one hand, the association rate decreases exponentially with the overlap length $k$, since the probability of finding a pair of polymers with complementary sequences of length $k$ is proportional to $1/z^k$. On the other hand, the dissociation rate between a substrate and its template also decreases exponentially with $k$, as $\exp(k \cdot \Delta G/k_{B}T)$, due to the greater thermodynamic stability of longer complementary duplexes. The net result is that the hybridization probability is proportional to $\exp(k \cdot \epsilon)$, where $$\epsilon=-\Delta G/k_{B}T-\log(z)
\label{eq_epsilon}$$ is the effective parameter combining thermodynamic and combinatorial factors. Template-assisted ligation happens at appreciable rates only for $\epsilon>0$, i.e. when $\Delta G<-k_{B}T\log(z)$. For a finite time window $t$ only the duplexes with short overlaps will reach this equilibrium. Duplexes with longer overlaps have lifetimes much longer than $t$. Thus for them the hybridization probability is limited by the association rate alone, $\sim 1/z^k$. Therefore, the overall hybridization probability as a function of $k$ is strongly peaked (see Fig. 2 and Appendix A for details). As time $t$ increases, this peak slowly (logarithmically in $t$) shifts towards larger values of $k$, with its final value $k_0$ set by either the end of the night phase or (in the case of long nights) by the onset of the aggregation regime (Fig. 1c).
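This crossover can be illustrated with a toy two-state kinetic model (our own simplified sketch; the rate prefactor `c` and the parameter values below are arbitrary assumptions, not fitted to anything). Each overlap length $k$ relaxes as $dP/dt = a_k(1-P) - d_k P$, with association $a_k \propto z^{-k}$ and dissociation $d_k \propto e^{-k(\epsilon + \log z)}$, which reproduces a peaked profile whose maximum drifts with $\log t$:

```python
import math

def hybridization_profile(t, eps=1.0, z=4, c=1e-3, kmax=30):
    """Toy kinetics: occupancy P_k(t) for each overlap length k relaxing as
    dP/dt = a_k*(1 - P) - d_k*P, where a_k ~ c/z**k (complementary partners
    are exponentially rare) and d_k ~ exp(-k*(eps + log z)) (longer duplexes
    are exponentially more stable)."""
    profile = []
    for k in range(1, kmax + 1):
        a = c / z**k
        d = math.exp(-k * (eps + math.log(z)))
        p_eq = a / (a + d)                         # equilibrium occupancy
        p = p_eq * (1.0 - math.exp(-(a + d) * t))  # finite-time relaxation
        profile.append(p)
    return profile

def k_peak(t, **kwargs):
    """Location of the peak, playing the role of the cutoff k_0 ~ log t."""
    prof = hybridization_profile(t, **kwargs)
    return 1 + prof.index(max(prof))
```

With these illustrative numbers, short overlaps are equilibrated (probability growing as $e^{k\epsilon}$) while long overlaps are association-limited (probability falling as $t/z^k$), and the peak position increases slowly as the night phase lengthens, mirroring the qualitative behavior of Fig. 2.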
![*[**Time evolution of the hybridization probability.**]{} The probability that a segment of length $k$ is hybridized to its complementary partner (Eq. (\[S8\]) in Appendix A) is strongly peaked at $k=k_0 \sim \log t$ (see Eq. (\[S9\]) in Appendix A). Different colors from red to violet correspond to linearly increasing times $t$ since the beginning of the night phase of the cycle.* []{data-label="fig2"}](fig2.eps){width="1\columnwidth"}
Major parameters of the model
-----------------------------
In what follows we focus on slow dynamical processes taking place over multiple day/night cycles. The main input parameter from the intra-night kinetics to the multi-cycle dynamics is the hybridization cutoff length $k_0$ discussed above. The multi-cycle dynamics can be described in terms of time-averaged ligation and fragmentation rates, $\lambda$ and $\beta$, respectively. We define $\lambda$ as the rate of bond formation provided that the ends of the two substrates are already properly positioned next to each other due to their hybridization with the template. We further assume that the characteristic fragmentation time $1/\beta$ is much longer than the duration of the day-night cycle ensuring the separation between short and long timescales in the problem. Both $\lambda$ and $\beta$ are averaged over the duration of the day-night cycle with the understanding that fragmentation happens continuously throughout the cycle (possibly with different day and night rates), while the ligation only occurs during the night phase. Thus $\lambda$ implicitly depends on relative durations of night and day phases.
Let $C$ be the overall monomer concentration including both free monomers and those bound in all chains. In the case of random sequence composition, the population of heteropolymers is fully characterized by their length distribution $f_l$, defined in such a way that $C \cdot f_l$ is the concentration of all polymers of length $l$. By this definition $f_l$ is subject to the normalization condition $\sum_{l=1}^{\infty} l f_l=1$. The fraction of polymers with a specific sequence is then given by $z^{-l}\cdot f_l$.
Detailed balance ansatz
-----------------------
For template-assisted ligation the effective two-polymer merger rate $\mu$ is given by the ligation rate $\lambda$ multiplied by the probability of hybridization of a template $T$ with two substrates $S_1$ and $S_2$, bringing them into the end-to-end configuration shown in Fig. 1b. The major step in constructing an approximate analytical solution of the problem is the [*assumption of a detailed balance*]{} between template-assisted ligation and fragmentation in the steady state of the system: $$\beta f_{l+m}=\mu f_l \cdot f_m \qquad .
\label{eq1}$$ Here, the left-hand side describes the rate at which a chain of length $l+m$ breaks into two pieces of lengths $l$ and $m$ correspondingly. Conversely, the right-hand side is the effective merger rate (hybridization + ligation) at which polymers of lengths $l$ and $m$ are joined to form a longer chain of length $l+m$. Note that according to this description the rate at which a polymer breaks into two arbitrary pieces is proportional to its length, or rather to its number of intra-polymer bonds.
The detailed balance approximation is not a priori justified in driven, non-equilibrium systems such as ours. However, for chains longer than the optimal overlap length $k_0$ the probability of hybridization and thus the effective merger rate $\mu$ saturate (see Methods for derivation and details). Once both $\mu$ and $\beta$ are independent of the polymers’ lengths, our system becomes mathematically equivalent to the well known reversible step-growth polymerization process, for which the detailed balance approximation holds true by virtue of the laws of equilibrium thermodynamics. In spite of this superficial similarity, our system remains intrinsically non-equilibrium since the effective merger rate $\mu$ depends on hybridization between templates and substrates cycled through day and night phases as shown in Fig. 1ab. In addition, Eq. (\[eq1\]) is expected to break down for chains shorter than $k_0$.
To validate our mathematical insights, the analytic solution shown below was followed by numerical simulations of the system carried out [*without the detailed balance approximation*]{}. The agreement between our analytical and numerical results for polymers longer than $k_0$ confirms the validity of our approach. The Eq. (\[eq1\]) is satisfied by the exponential length distribution: $$f_l=(\beta/\mu)\exp(-l/\bar{L}) \quad ,
\label{eq_fl}$$ where the characteristic chain length, $\bar{L}$, is determined by the normalization condition $\sum_{l=1}^{\infty} l f_l=1$, or $(\beta/\mu) \bar{L}^2 =1$. This result was obtained by replacing the discrete sum with an integral, an approximation valid in the limit $\bar{L} \gg 1$ (see Eq. \[SX\] in Appendix B for the exact formula in which this approximation is relaxed). Hence, the characteristic chain length in the steady-state exponential distribution is given by $$\bar{L}=\sqrt{\frac{\mu}{\beta}}
\label{eq_Lbar}$$
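The detailed-balance argument above is easy to check numerically. The following short script (with illustrative rates, not values from the paper) verifies that the exponential ansatz satisfies Eq. (\[eq1\]) exactly and the normalization $\sum_l l f_l = 1$ up to $O(1/\bar{L})$ corrections:

```python
# Sanity check of the exponential steady state: f_l = (beta/mu)*exp(-l/Lbar)
# with Lbar = sqrt(mu/beta) satisfies beta*f_{l+m} = mu*f_l*f_m term by term,
# and sum_l l*f_l approaches 1 in the continuum limit Lbar >> 1.
import math

beta, mu = 1.0, 1.0e4          # illustrative rates, so Lbar = 100
Lbar = math.sqrt(mu / beta)

def f(l):
    return (beta / mu) * math.exp(-l / Lbar)

# detailed balance holds exactly for the exponential ansatz
for l, m in [(1, 1), (5, 17), (40, 90)]:
    assert abs(beta * f(l + m) - mu * f(l) * f(m)) < 1e-15

# normalization sum_l l*f_l -> (beta/mu)*Lbar**2 = 1 for Lbar >> 1
norm = sum(l * f(l) for l in range(1, 50 * int(Lbar)))
print(norm)
```

The check is independent of the chosen rates: only the ratio $\mu/\beta$ enters through $\bar{L}$.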
Onset of autocatalysis
----------------------
$\mu$ is an effective two-polymer merger rate proportional to the probability of finding two terminal ends attached to a template, followed by ligation. This probability depends on (a) the overall concentration $C$ and the length distribution of potential templates, and (b) the strength and kinetics of interactions between the complementary segments on a template and its two substrates.
For short overlaps $k \leq k_0$ the hybridization probability follows the equilibrium formula: $\sim \exp(k \cdot \epsilon)$. This increase is followed by an abrupt drop for $k>k_0$ (see Fig. 2). By neglecting the contribution of overlap lengths longer than $k_0$ one gets $$\begin{aligned}
\mu&=&\lambda \left(\frac{C}{C_0}\right)^2
\sum_{k_1=1}^{k_0}\exp(k_1 \cdot \epsilon) \sum_{k_2=1}^{k_0}\exp(k_2 \cdot \epsilon) \cdot \nonumber \\
&\cdot &\sum_{l=k_1+k_2}^{\infty}(l-k_1-k_2+1)f_l \quad .
\label{eq_lambda} \end{aligned}$$ Here $\lambda$ is the bare ligation rate, and $k_1$ and $k_2$ are the overlap lengths between the template and each of the two substrates. We also introduced the reference concentration $C_0=\exp[-\Delta G_0/k_{B}T]$ (in molar) absorbing the initiation free energy. The term $(C/C_0)^2$ reflects the fact that the template-assisted ligation is a three-body interaction involving two substrates and one template. The last sum on the r.h.s. of Eq. (\[eq\_lambda\]) is equal to the probability of finding a template region of length $k_1+k_2$ within a longer heteropolymer. It takes into account that a chain of length $l \geq k_1+k_2$ has $l-k_1-k_2+1$ sub-sequences of length $k_1+k_2$. Requirements of sequence complementarity between the template and each of the two substrates were absorbed into the definition of $\epsilon$ within the RSA.
Substituting the exponential distribution $f_l$ given by Eq. (\[eq\_fl\]), performing the triple summation in Eq. (\[eq\_lambda\]), and neglecting the terms $\sim 1/\bar{L}$ (but not $\sim k_0/\bar{L}$) within the exponents approximately gives $\mu=\lambda (C/C_0)^2\exp(2k_0 \cdot (\epsilon -1/\bar{L}) )/\left[1-\exp(-\epsilon)\right]^2$. Substituting this expression into Eq. (\[eq\_Lbar\]) results in the self-consistency equation for $\bar{L}$: $$\bar{L}\exp\left(\frac{k_0}{\bar{L}}\right)=\frac{C}{C_0} \cdot \sqrt{\frac{\lambda}{\beta}}
\cdot \frac{\exp\left(k_0 \epsilon \right)}{1-\exp(-\epsilon)} \quad ,
\label{eq_sc_SI}$$ (see Eq. (\[eq\_sc\]) in Appendix B for a more precise expression derived without the large-$\bar{L}$ approximation). The l.h.s. of this equation reaches its minimal value of $e \cdot k_0$ at $\bar{L}=k_0$. As a result, the equation has solutions only for concentrations $C$ above a certain threshold value given by $$C_{down}=k_0C_0\sqrt{\frac{\beta}{\lambda}}\exp(1-k_0 \epsilon)\cdot (1-\exp(-\epsilon))
\label{eq_c_down}$$ For $C$ significantly larger than this threshold, one can neglect the exponential term on the l.h.s. of Eq. (\[eq\_sc\_SI\]), so that the characteristic polymer length $\bar{L}$ increases linearly with the concentration as $$\bar{L}=\frac{C}{C_0} \cdot \sqrt{\frac{\lambda}{\beta}} \cdot \frac{\exp(k_0 \epsilon)}
{1-\exp(-\epsilon)} \quad .
\label{eq_lbar_vs_C}$$ For monomer concentrations $C$ below the threshold we do not expect long heteropolymers to form. This suggests a first-order transition between a regime dominated by free monomers and one with a self-sustaining population of long heteropolymeric chains.
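The structure of the self-consistency equation can be made concrete with a short numerical sketch (illustrative parameter values; concentrations in units of $C_0$). Since the l.h.s. $\bar{L}\,e^{k_0/\bar{L}}$ is monotonically increasing on the physical branch $\bar{L} \geq k_0$, a unique solution exists above the threshold of Eq. (\[eq\_c\_down\]) and none below it:

```python
# Bisection solve of Lbar*exp(k0/Lbar) = (C/C0)*sqrt(lambda/beta)*exp(k0*eps)/(1-exp(-eps))
# on the increasing branch Lbar >= k0. Parameters are illustrative.
import math

k0, eps = 3, 1.0
lam_over_beta = 1.0e4          # ratio lambda/beta

def rhs(c):                    # r.h.s. of the self-consistency equation, c = C/C0
    return c * math.sqrt(lam_over_beta) * math.exp(k0 * eps) / (1.0 - math.exp(-eps))

def lhs(L):
    return L * math.exp(k0 / L)

def solve_Lbar(c):
    target = rhs(c)
    if target < lhs(k0):       # below threshold: only the monomer phase exists
        return None
    lo, hi = float(k0), 1e12
    for _ in range(200):       # bisection on the monotone branch
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

# threshold concentration of Eq. (eq_c_down), in units of C0
c_down = k0 * math.sqrt(1.0 / lam_over_beta) * math.exp(1 - k0 * eps) * (1 - math.exp(-eps))

print(solve_Lbar(0.5 * c_down))    # None: no long chains below the threshold
print(solve_Lbar(2.0 * c_down))    # finite characteristic length
print(solve_Lbar(100 * c_down))    # approaches the linear regime in C
```

For $C \gg C_{down}$ the solution tracks the r.h.s., reproducing the linear growth of $\bar{L}$ with concentration.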
To verify and refine our predictions we approach this transition from below, starting with the state dominated by monomers, i.e., $f_1 \simeq 1$. We explore the stability of the monomer mixture with respect to the formation of dimers. In this limit, the dimer fraction $f_2$ obeys the following kinetic equation: $$\frac{df_2}{dt}=-\beta f_2+\lambda \left(\frac{C}{C_0}\right)^2 \exp(2\epsilon) f_1^2f_2 \qquad ,
\label{eq_dimer}$$ where the second term in the r.h.s. reflects the fact that a dimer can be formed out of two monomers and this process needs to be catalyzed by a complementary dimer. The critical concentration $C_{up}$ above which dimers would exponentially self-amplify is given by $$C_{up}=C_0 \sqrt{\frac{\beta}{\lambda}}\exp(-\epsilon) \qquad .
\label{eq_c_up}$$ Thus we confirm the existence of an instability in a mixture of monomers with respect to template-assisted formation of longer chains. Note that the instability threshold $C_{up}$ (Eq. (\[eq\_c\_up\])) approached from below exceeds the threshold $C_{down}$ (Eq. (\[eq\_c\_down\])) approached from above. Thus, as expected for a first-order phase transition, the system is hysteretic for $C_{down}<C<C_{up}$.
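The width of the hysteresis window follows directly from Eqs. (\[eq\_c\_down\]) and (\[eq\_c\_up\]); the common factor $C_0\sqrt{\beta/\lambda}$ cancels in the ratio. A minimal numerical check (for the illustrative choice $k_0=3$, $\epsilon=1$ used in the figures):

```python
# Thresholds in units of C0*sqrt(beta/lambda); their ratio depends only on k0, eps.
import math

k0, eps = 3, 1.0
c_down = k0 * math.exp(1 - k0 * eps) * (1 - math.exp(-eps))
c_up = math.exp(-eps)
print(c_down, c_up, c_up / c_down)   # ratio > 1: a finite hysteresis window
```

For these parameters the ratio $C_{up}/C_{down} = \exp((k_0-1)\epsilon - 1)/[k_0(1-e^{-\epsilon})] \approx 1.4$, so both phases coexist over a finite concentration range.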
Numerical results
-----------------
To check our calculations we carried out detailed numerical simulations of our system. Specifically, we numerically solved a system of coupled kinetic equations describing the template-assisted ligation and fragmentation processes and calculated the steady-state distribution $f_l$: $$\begin{aligned}
\frac{1}{2\beta}\dot{f}_{n} &=& -\left[ \frac{n}{2}+\Gamma^{2}\sum\limits_{m}%
\mu_{n,m}f_{m}\right] f_{n}+\sum\limits_{m>n} f_{m}+\nonumber \\
&&+\Gamma^{2}\sum\limits_{m<n}\frac{\left(
1+\delta_{n-m,m}\right) }{2}\mu_{m,n-m}f_{m}f_{n-m} \qquad .
\label{kin2_detailed}%\end{aligned}$$ Here $\Gamma$ is the dimensionless control parameter of the model proportional to the monomer concentration: $$\Gamma=\left(\frac{C}{C_{0}}\right)\sqrt{\frac{\lambda}{\beta}} \label{nu}%$$ and $\mu_{nm}$ is the merger matrix, which itself linearly depends on the distribution $f_l$ as described by Eqs. (\[S17\]) and (\[S1\]) in Appendices A and C. Note that these simulations (unlike our analytical theory) allow for the overlap-length dependence of the merger rates and do not use the detailed balance ansatz.
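A compact self-contained sketch of such an integration is given below. This is not the authors' code: it uses the standard mass-conserving Smoluchowski convention for the merger combinatorics, and it takes the fragmentation loss proportional to the exact bond count $n-1$ (the $n/2$ term in Eq. (\[kin2\_detailed\]) is its large-$n$ form combined with the $1/(2\beta)$ scaling). The merger matrix is built from Eqs. (\[S17\]) and (\[S1\]); all parameter values are illustrative, with $\Gamma$ chosen below the transition so the seeded chains relax back toward monomers:

```python
# Explicit Euler integration of the ligation-fragmentation kinetics,
# without the detailed-balance ansatz. Mergers producing chains longer
# than the cutoff N are excluded, so total monomer mass is conserved.
import math

N = 30                 # truncation: longest chain length tracked
k0, eps = 3, 1.0
beta = 1.0             # fragmentation rate per intra-polymer bond
Gamma = 0.2            # control parameter (C/C0)*sqrt(lambda/beta), below threshold
W = {k: math.exp(eps * k) for k in range(2, 2 * k0 + 1)}

def P(f, k):           # Eq. (S1): P_k = sum_{l >= k} (l + 1 - k) f_l
    return sum((l + 1 - k) * f[l] for l in range(k, N + 1))

def mu_table(f):       # Eq. (S17); mu_{nm} depends only on min(n,k0), min(m,k0)
    Pk = {k: P(f, k) for k in range(2, 2 * k0 + 1)}
    return {(a, b): sum(Pk[k1 + k2] * W[k1 + k2]
                        for k1 in range(1, a + 1) for k2 in range(1, b + 1))
            for a in range(1, k0 + 1) for b in range(1, k0 + 1)}

def step(f, dt):
    t = mu_table(f)
    mu = lambda n, m: t[(min(n, k0), min(m, k0))]
    df = [0.0] * (N + 1)
    for n in range(1, N + 1):
        # fragmentation: loss from the n-1 bonds, gain from every longer chain
        df[n] += -beta * (n - 1) * f[n] + 2.0 * beta * sum(f[m] for m in range(n + 1, N + 1))
        # template-assisted mergers, restricted to products that fit below N
        df[n] -= 2.0 * beta * Gamma**2 * f[n] * sum(mu(n, m) * f[m] for m in range(1, N - n + 1))
        df[n] += beta * Gamma**2 * sum(mu(m, n - m) * f[m] * f[n - m] for m in range(1, n))
    return [f[n] + dt * df[n] for n in range(N + 1)]

f = [0.0] * (N + 1)
f[1], f[2], f[4] = 0.90, 0.02, 0.015   # seeded mixture with sum_l l*f_l = 1
for _ in range(400):
    f = step(f, 0.02)

mass = sum(n * f[n] for n in range(1, N + 1))
print(mass, f[1])   # mass stays 1 to rounding; monomers win below threshold
```

Because only the templates themselves supply the catalytic $P_k$ factors, a pure-monomer initial condition is an exact fixed point; seeding a few short chains is necessary to probe the stability of the monomer phase.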
The results of these numerical simulations are in excellent agreement with our analytical calculations. For high enough concentrations $C$ the length distribution $f_l$ has a long exponential tail covering the region $l>k_0$. Chains shorter than $k_0$, which do not obey detailed balance, exhibit a much faster decay as a function of $l$ (see Fig. 3).
![*[**Chain length distributions.**]{} A set of chain length distributions $f_l$ plotted for different values of the control parameter $\Gamma=\frac{C}{C_0}\sqrt{\frac{\lambda}{\beta}}$ as found by numerical simulations with $k_0=3$ and $\epsilon=1$. Distributions in the autocatalytic regime are characterized by long exponentially distributed tails for chains with $l>k_0$. Note a sharp transition between monomer-dominated and autocatalytic regimes.*[]{data-label="fig3"}](fig3.eps){width="1\columnwidth"}
Our simulations also confirmed the existence of a first-order transition to a regime dominated by monomers as the concentration $C$ is reduced (the red line in Fig. 4). The decay length $\bar{L}$ of the exponential tail of $f_l$ for $l \geq k_0$ plays the role of the order parameter in this transition. When plotted as a function of concentration $C$ in Fig. 4, it exhibits sharp discontinuities and hysteretic behavior. Our analytical results given by Eq. (\[eq\_sc\]) (black dashed line in Fig. 4) are in good agreement with our numerical simulations.
![*[**A hysteretic first order transition between the monomer-dominated and autocatalytic regimes.**]{} Different lines/symbols show the characteristic length $\bar{L}$ in our numerical simulations with $k_0=3$ for increasing (diamonds) and decreasing (circles) concentration $C$, respectively. The dashed line is the prediction of our simplified model given by Eq. (\[eq\_sc\]). Arrows indicate $C_{up}$ and $C_{down}$ given by Eqs. (\[eq\_c\_up\]) and (\[eq\_c\_down\]), respectively.* []{data-label="fig4"}](fig4.eps){width="1\columnwidth"}
The transitions from monomers to long-chained polymers and back in our numerical simulations occur at concentrations somewhat higher than their theoretically predicted values $C_{up}$ (Eq. \[eq\_c\_up\]) and $C_{down}$ (Eq. \[eq\_c\_down\]) marked in Fig. 4 by the blue and red arrows respectively.
Long-night limit
----------------
Our model assumes cyclic changes between “day” and “night” phases. At the beginning of each night phase all polymers are unhybridized, but as time progresses they start forming duplexes of progressively longer lengths. The probability of finding any given segment in a duplex remains low at the early stage of this process. However, if the duration of the night phase is long enough, there is a time point at which individual polymers on average have around one hybridized partner. Note that a single polymer may simultaneously have more than one hybridized partner as long as the duplexes with different partners do not overlap with each other. Around this time most polymers in our pool become immobilized in a gel-like structure schematically depicted in Fig. 1c. At this point the formation of new hybridized complexes effectively stops and the value of $k_0$ stops growing. Indirect experimental evidence for such an aggregation phase was recently reported by Bellini [*et al.*]{}[@Bellini].
According to our results, the characteristic chain length $\bar{L}$ given by Eq. (\[eq\_lbar\_vs\_C\]) increases exponentially with $k_0$. In the presence of aggregation this growth is eventually arrested. The upper bound on $\bar{L}$ reached in this case can be determined self-consistently by requiring that individual polymers on average have around one hybridized partner. A chain of length $\bar{L} \gg k_0$ contains $\bar{L}-k_0+1 \simeq \bar{L}$ segments of length $k_0$. The probability for each of these segments to be hybridized at any particular time is $(C/C_0) \cdot \exp(k_0 \cdot \epsilon)$. Thus the transition to the aggregated state is expected when $$\bar{L}\frac{C}{C_0} \exp(k_0\epsilon) \simeq 1 \quad .
\label{eq_gel}$$ Combining this expression with the Eq. (\[eq\_lbar\_vs\_C\]) and ignoring the factors of order of 1 one gets the upper bound $\bar{L}_{max}$ on the characteristic polymer length that could, in principle, be reached by increasing the duration of the night phase: $$\bar{L}_{max} \simeq
\left(\frac{\lambda}{\beta}\right)^{\frac{1}{4}}
\qquad .
\label{eq_Lmax}$$
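The elimination behind Eq. (\[eq\_Lmax\]) can be cross-checked numerically (illustrative parameters): locating the concentration at which the growing $\bar{L}(C)$ of Eq. (\[eq\_lbar\_vs\_C\]) first hits the aggregation condition of Eq. (\[eq\_gel\]) gives a length scaling as $(\lambda/\beta)^{1/4}$, independently of $k_0$; here the $O(1)$ prefactor $(1-e^{-\epsilon})^{-1/2}$, which the estimate above drops, is retained:

```python
# Find the aggregation onset by geometric bisection and confirm the
# quarter-power scaling of the maximal characteristic length.
import math

def Lbar(c, k0, eps, lob):        # Eq. (eq_lbar_vs_C); c = C/C0, lob = lambda/beta
    return c * math.sqrt(lob) * math.exp(k0 * eps) / (1.0 - math.exp(-eps))

def c_aggregation(k0, eps, lob):  # solve Eq. (eq_gel): Lbar * c * exp(k0*eps) = 1
    lo, hi = 1e-30, 1.0
    for _ in range(300):
        mid = math.sqrt(lo * hi)  # geometric bisection over many decades
        if Lbar(mid, k0, eps, lob) * mid * math.exp(k0 * eps) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lob, eps = 1.0e8, 1.0
for k0 in (3, 6, 9):
    L = Lbar(c_aggregation(k0, eps, lob), k0, eps, lob)
    print(k0, L / lob**0.25)      # the same O(1) number for every k0
```

The ratio printed in the last line equals $(1-e^{-\epsilon})^{-1/2}$ for all $k_0$, confirming that the bound depends only on $\lambda/\beta$ up to $O(1)$ factors.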
Discussion
==========
To summarize, above we considered the general case of random heteropolymers capable of template-assisted ligation. As such, our model is applicable both to nucleic acids at the dawn of life and to artificial self-replicating nano- or micro-structures[@Chaikin; @Brenner]. The major conclusions of our study are as follows. We demonstrated that a population of long chains can be maintained by mutual catalysis operating exclusively through template-assisted ligation. This state is separated from the monomer-dominated one by a hysteretic first-order phase transition (Eqs. (\[eq\_c\_down\],\[eq\_c\_up\])) as a function of the concentration. We also demonstrated that the template-assisted ligation in our system is dominated by contributions from template-substrate pairs complementary over a well-defined length $k_0$, which is kinetically determined. The average length of heteropolymers increases exponentially with $k_0$, with the upper bound given by a very simple expression, Eq. (\[eq\_Lmax\]), depending only on the ratio between the ligation and breakage rates.
The spontaneous emergence of long polymers demonstrated in our study is of conceptual importance to the long-standing problem of the origin of life. Indeed, we offer a physically plausible path leading from the primordial soup dominated by monomers to a population of sufficiently long self-replicating chains. This transition is one of the least understood processes in the RNA-world hypothesis. It is known that functional RNA-based enzymes (ribozymes) need to be sufficiently long, which makes their spontaneous formation prohibitively unlikely. According to our analysis, both the characteristic chain length and the minimal monomer concentration required for autocatalysis depend on the ratio of ligation and breakage rates. Large values of this ratio, $\lambda/\beta \gg 1$, would allow long chains to form at physically possible concentrations $C \ll 1$M. One of the reasons that such spontaneous emergence of long-chained polymers has never been observed is that in the experimental systems studied so far the ratio $\lambda/\beta$ remained low due to a very slow ligation process[@Szostak_1996]. Note that ligation and breakage processes in our system are not direct opposites of each other. Indeed, the ligation of e.g. nucleic acids requires activated terminal bases carrying free energy sufficient to form a new intra-polymer bond. To achieve the conditions necessary for our autocatalytic regime one needs either to use heteropolymers chemically different from modern nucleic acids or to develop new activation pathways different from those used in experiments so far. The ligation can be further assisted, e.g., by the adsorption of polymers onto properly selected crystalline interfaces. The present study was limited to the simplest version of the problem, in which the sequences of all heteropolymers were assumed to be completely random. It provides a useful analytically solvable null model against which future variants can be benchmarked.
Even though the informational entropy of the pool of polymers in our model is at its maximal value, the template-assisted ligation provides a mechanism for faithful transmission of information to the next generation. We demonstrated that the spontaneous emergence of long chains is possible even in the limit where direct (non-templated) bond formation is negligible. This is especially important since non-templated polymerization is a regular equilibrium phenomenon and as such has a short memory. In contrast, heritable transmission of sequence information via template-assisted ligation opens up an exciting possibility of long-term memory effects and ultimately of Darwinian evolution in the space of polymer sequences. Incorporation of sequence effects is the logical next step in the development of our model, and we are currently working on it. There are several conceptually distinct yet not mutually exclusive scenarios giving rise to the over-representation of certain sequences in the pool of heteropolymers. The first one is driven by the sequence dependence of model parameters such as hybridization free energies, fragmentation and ligation rates, and the monomer composition of the primordial soup. The other scenario is spontaneous symmetry breaking in the sequence space[@Anderson; @Goldenfeld]. Specifically, our results obtained within the Random Sequence Approximation need to be checked for local and global stability. The local stability analysis deals with small deviations from a state in which the populations of all sequences are equal to each other, while the global one perturbs the system by strongly over-representing a small subset of sequences. These can be interpreted as the weak and strong selection limits, respectively. Evidence of local or global instability would signal a symmetry breaking and would provide a scenario for the dramatic decrease in the informational entropy of the population of polymers.
This is analogous to the replica symmetry breaking suggested by P. W. Anderson [@Anderson], leading to a population dominated by a relatively small subset of mutually catalyzing sequences.
Acknowledgements {#acknowledgements .unnumbered}
================
This research used resources of the Center for Functional Nanomaterials, which is a U.S. DOE Office of Science User Facility, at Brookhaven National Laboratory under Contract No. DE-SC0012704. Work at the Biosciences Department was supported by the US Department of Energy, Office of Biological Research, Grant PM-031. We would like to thank Prof. Mark Lukin of Stony Brook University for valuable discussions.
Appendix A: $k$-mers and their hybridization dynamics {#appendix-a-k-mers-and-their-hybridization-dynamics .unnumbered}
=====================================================
To describe the hybridization dynamics during the night phase we introduce the concept of a $k$-mer, defined as a segment of $k$ monomers with a specific sequence $\sigma$ within a longer chain of length $l \geq k$. Let $C \cdot p_{k}^{\left(\sigma\right)}$ be the concentration of $k$-mers with a particular sequence $\sigma$, and let $C \cdot P_{k}$ be the concentration of all $k$-mers, regardless of their sequences. By definition, $P_{k}=\sum_{\sigma}p_{k}^{\left( \sigma\right)}$. If all the sequences are completely random, $p_{k}^{\left( \sigma\right) }=P_{k}z^{-k}$. Each chain of length $l$ contains $\left( l+1-k\right)$ $k$-mers, therefore $$P_{k}=\sum\limits_{l=k}^{\infty}\left(l+1-k\right) f_{l}
\label{S1}$$ Note that $P_{k}$ has the maximum value of $1$ which is approached in the limit when all chains are much longer than $k$.
We consider the hybridization of polymers starting from the beginning of the night phase of the cycle, when none of them are hybridized. To describe the hybridization kinetics we use the fractions of fully hybridized $k$-mers, $1 \geq \varphi_{k}^{\left( \sigma\right) }\left( t\right) \geq 0$, as our dynamic variables. By definition, the concentration of such pairs of bound $k$-mers is $C\cdot p_{k}^{\left( \sigma\right) }\varphi
_{k}^{\left( \sigma\right) }\left( t\right) $. We note that hybridization states of different $k$-mers are not independent from each other since some of them overlap. To account for this, we introduce one more variable $\psi_{k}^{\left(\sigma\right)} \leq 1-\varphi_{k}^{\left(\sigma\right)}$ which is the fraction of all $k$-mers with a given sequence $\sigma$ that are available for hybridization. Now the binding kinetics of all $k-$mers can be described by the following set of coupled kinetic equations:$$\tau \dot{\varphi}_{k}^{\left( \sigma\right) }=C\cdot p_{k}^{\left(
\sigma'\right) }\psi_{k}^{\left( \sigma'\right) }\psi
_{k}^{\left( \sigma\right) }-\exp\left( \frac{\Delta G_{\sigma}%
}{k_{B}T}\right) \varphi_{k}^{\left( \sigma\right) } \label{kin}%$$ Here $1/\tau$ is the hybridization rate, $\Delta G_{\sigma}$ is the hybridization free energy for a given sequence $\sigma$, and $ \sigma'$ is the sequence complementary to $\sigma$. For simplicity, we consider a symmetric case where mutually complementary $k$-mers have the same fraction, $p_{k}^{\left( \sigma\right) }=p_{k}^{\left( \sigma'\right)
}$. In order to solve these equations, one needs to specify a relationship between the fraction of available $k$-mers, $\psi_{k}^{\left( \sigma\right) }$, and the hybridization probabilities, $\varphi_{k}^{\left( \sigma \right) }$, that would take into account the mutual overlap of the sequences. However, at early stages the hybridization probability remains sufficiently low, and one can therefore assume $\psi_{k}^{\left( \sigma\right) }=\psi
_{k}^{\left( \sigma'\right) }\approx 1$ in Eq. (\[kin\]). This results in a set of decoupled equations $$\tau\dot{\varphi}_{k}^{\left( \sigma\right) }=C\cdot p_{k}^{\left(
\sigma\right) }-\exp\left( \frac{\Delta G_{\sigma}}{k_{B}T}\right) \varphi_{k}^{\left( \sigma\right) }$$ The solution is the exponential relaxation of hybridization variables $\varphi_{k}^{\left( \sigma\right) }$ towards their equilibrium values: $$\varphi_{k}^{\left( \sigma\right) }\left( t\right) =K_{k}^{\left(
\sigma\right) }p_{k}^{\left( \sigma\right) }\left( 1-\exp\left(
-\frac{t}{\tau_{k}^{\left( \sigma\right) }}\right) \right)$$ In this expression $$K_{k}^{\left( \sigma\right) }=C\exp\left( -\frac{\Delta
G_{\sigma}}{k_{B}T}\right)$$$$\tau_{k}^{\left( \sigma\right) }=\tau\exp\left( - \frac{\Delta
G_{\sigma}}{k_{B}T}\right) \qquad .$$
The single most important factor that determines the hybridization free energy $\Delta G_{\sigma}$ is the sequence length $k$. For simplicity of the analysis we will replace $K_{k}^{\left( \sigma\right) }$ with its sequence-averaged value:$$K_{k}^{\left( \sigma\right) }\approx K_{k}=C\exp\left(
-\frac{\Delta G_{0}+k \Delta G}{k_{B}T}\right)$$ This leads to the following result:$$\begin{aligned}
\varphi_{k}\left( t\right) =C P_{k}z^{-k}\exp\left( -
\frac{\Delta G_{0}+k \Delta G}{k_{B}T}\right) \cdot \nonumber \\
\cdot \left( 1-\exp\left[
-\frac{t}{\tau}\exp\left(\frac{\Delta G_{0}+k \Delta G}{k_{B}T}\right) \right] \right)
\label{S8}\end{aligned}$$
As shown in Fig. 2, at any given time $t$ this expression is strongly peaked at a single value of $k$, which depends weakly (logarithmically) on time:$$k\approx k_{0}\left( t\right) \simeq -\frac{k_{B}T}{\Delta G}\log\left( \frac{t}{\tau}\right)
\label{S9}$$$$\varphi_{k_{0}}\simeq CP_{k_{0}}\exp\left( -\frac{\Delta G_{0}}%
{k_{B}T}+\varepsilon k_{0}\right)$$
Appendix B: Evaluating the effects of a finite $\bar{L}$. {#appendix-b-evaluating-the-effects-of-a-finite-barl. .unnumbered}
==========================================================
Eqs. (\[eq\_fl\]) and (\[eq\_sc\_SI\]) in the main text were derived in the limit $\bar{L} \gg 1$. Below we relax this approximation and derive exact formulas valid for arbitrary $\bar{L}$.
In deriving Eq. (\[eq\_fl\]) in the main text we replaced the discrete summation with an integral. This approximation can be avoided by performing an explicit summation of the discrete geometric progression: $$\sum_{l=1}^{\infty} l \cdot \exp(-\frac{l}{\bar{L}})=
\frac{\exp\left(-\frac{1}{\bar{L}}\right)}
{\left[1-\exp\left(-\frac{1}{\bar{L}}\right)\right]^2}=
\frac{1}{4\sinh^{2}\left(\frac{1}{2\bar{L}}\right)} \quad .$$
This amounts to replacing $\bar{L}$ in Eq. (\[eq\_Lbar\]) with $\frac{1}{2\sinh\left(\frac{1}{2\bar{L}}\right)}$: $$\frac{1}{2\sinh\left(\frac{1}{2\bar{L}}\right)}=\sqrt{\frac{\mu}{\beta}} \qquad .
\label{SX}$$
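The summation identity above can be verified directly (independently of any model parameters):

```python
# Check sum_{l>=1} l*exp(-l/Lbar) = exp(-1/Lbar)/(1-exp(-1/Lbar))**2
#                                 = 1/(4*sinh(1/(2*Lbar))**2)
import math

for Lbar in (1.5, 4.0, 25.0):
    x = 1.0 / Lbar
    direct = sum(l * math.exp(-l * x) for l in range(1, 10000))
    closed = 1.0 / (4.0 * math.sinh(0.5 * x) ** 2)
    assert abs(direct - closed) < 1e-9 * closed
print("identity verified")
```

The two forms agree because $1 - e^{-x} = 2 e^{-x/2}\sinh(x/2)$, which converts the geometric-series result into the $\sinh$ form used in Eq. (\[SX\]).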
The exact triple summation of the Eq. (\[eq\_lambda\]) in the main text $$\begin{aligned}
\mu=\lambda \left(\frac{C}{C_0}\right)^2
\sum_{k_1=1}^{k_0}\exp(k_1 \cdot \epsilon) \sum_{k_2=1}^{k_0}\exp(k_2 \cdot \epsilon) \cdot \nonumber \\
\cdot \sum_{l=k_1+k_2}^{\infty}(l-k_1-k_2+1)f_l \quad . \end{aligned}$$ for $f_l \sim \exp(-l/\bar{L})$ can be carried out in two steps. First, the sum over $l$ combined with the normalization $\sum_{l} l \cdot f_l=1$ gives rise to $$\begin{aligned}
\mu=\lambda \left(\frac{C}{C_0}\right)^2 \exp(1/\bar{L})
\sum_{k_1=1}^{k_0}\exp \left[k_1 \cdot (\epsilon-1/\bar{L})\right] \cdot \nonumber \\
\cdot \sum_{k_2=1}^{k_0}\exp \left[k_2 \cdot (\epsilon-1/\bar{L}) \right] \quad .\end{aligned}$$ The discrete summation over $k_1$ and $k_2$ results in $$\mu=\lambda \left(\frac{C}{C_0}\right)^2 \exp(1/\bar{L})
\left( \frac{\exp[k_0(\epsilon -1/\bar{L})] -1}{1-\exp(-\epsilon+1/\bar{L})}\right)^2 \qquad .
\label{eq_mu1}$$ The Eq. (\[eq\_sc\_SI\]) then becomes $$\begin{aligned}
\frac{1}{2\sinh\left(\frac{1}{2\bar{L}}\right)}\exp\left(\frac{k_0-1/2}{\bar{L}}\right)= \nonumber \\
=\frac{C}{C_0} \cdot \sqrt{\frac{\lambda}{\beta}}
\cdot \frac{\exp\left(k_0 \epsilon \right)-\exp\left(k_0/\bar{L} \right)}{1-\exp(-\epsilon+1/\bar{L})} \quad .
\label{eq_sc}\end{aligned}$$ Here we neglected the exponentially small term in the numerator of the r.h.s. of Eq. (\[eq\_mu1\]). The dashed line in Fig. 4 shows $\bar{L}$ defined by this equation plotted as a function of $C$.
Appendix C: Ligation-fragmentation kinetics {#appendix-c-ligation-fragmentation-kinetics .unnumbered}
===========================================
Eq. (\[eq\_lambda\]) gives the effective merger rate $\mu$ when the lengths $n$ and $m$ of the two substrate chains hybridized to a template are both longer than $k_{0}$. In the more general case one needs to introduce a length-dependent effective merger rate $\mu_{nm}$. Under the RSA this rate is given by: $$\begin{aligned}
&&\mu_{nm}=\lambda C^2\sum\limits_{k_{1}=1}^{\min\left(
n,k_{0}\right) }\sum\limits_{k_{2}=1}^{\min\left( m,k_{0}\right) }%
\frac{P_{k_{1}+k_{2}}}{z^{k_{1}+k_{2}}} \cdot \nonumber \\
&&\cdot \exp\left( -\frac{2\Delta G_{0}+\left(k_{1}+k_{2}\right) \cdot \Delta G}{k_{B}T} \right) = \nonumber \\
&&=\lambda \left (\frac{C}{C_0}\right)^2 \cdot \sum\limits_{k_{1}=1}^{\min\left(
n,k_{0}\right) }
\sum\limits_{k_{2}=1}^{\min\left( m,k_{0}\right) }%
P_{k_{1}+k_{2}} \cdot \nonumber \\
&& \cdot \exp \left( \left( k_{1}+k_{2}\right)\cdot \epsilon \right)\end{aligned}$$ Here $\mu_{nm}$ corresponds to a particular order in which chains $n$ and $m$ merge into a longer chain. Note that for directed chains such as nucleic acids there are two ways of merging chains, while for undirected polymers there are four.
For nucleic acids, when two chain segments are bound to the same template and are directly adjacent to each other (Fig. 1ab) there is an additional gain in free energy $\Delta G_{st}$ due to stacking. It is straightforward to incorporate $\Delta G_{st}$ into our formalism by redefining $C_{0}$ as $C_0=\exp[-(\Delta G_0+\Delta G_{st}/2)/k_{B}T]$ (in molar).
For directed polymers the resulting set of kinetic equations can be written as:$$\begin{aligned}
&&\frac{1}{2\beta}\dot{f}_{n} = -\left[ \frac{n}{2}+\Gamma^{2}\sum\limits_{m}%
\mu_{n,m}f_{m}\right] f_{n}+\sum\limits_{m>n} f_{m}+\nonumber \\
&&+\Gamma^{2}\sum\limits_{m<n}\frac{\left(
1+\delta_{n-m,m}\right) }{2}\mu_{m,n-m}f_{m}f_{n-m} \qquad .
%\nonumber \\
%&&+\sum\limits_{m>n} f_{m}
\label{kin2_SI}%\end{aligned}$$ Here $\Gamma$ is the dimensionless control parameter of the model, which is proportional to the monomer density:$$\Gamma=\left(\frac{C}{C_{0}}\right)\sqrt{\frac{\lambda}{\beta}} \label{nu_SI}%$$ and $\mu_{nm}$ is the “$k$-mer”-dependent ligation matrix:$$\mu_{nm}=\sum\limits_{k_{1}=1}^{\min\left( n,k_{0}\right) }%
\sum\limits_{k_{2}=1}^{\min\left( m,k_{0}\right) }P_{k_{1}+k_{2}}\exp\left(
\epsilon \cdot \left( k_{1}+k_{2}\right) \right) \qquad .
\label{S17}$$ This set of kinetic equations gives a complete description of the system in question and was numerically integrated to compare with our analytical results.
[99]{} T. Wang, R. Sha, R. Dreyfus, M. E. Leunissen, C. Maass, D. J. Pine, P. M. Chaikin, and N. C. Seeman, Nature [**478**]{}, 225 (2011). Z. Zeravcic, and M. P. Brenner, Proc. Natl. Acad. Sci. USA [**111**]{}, 1748 (2014). M. Eigen and P. Schuster, Naturwissenschaften [**64**]{}, 541 (1977). F. J. Dyson, Journal of Molecular Evolution [**18**]{}, 344 (1982). S.A. Kauffman, Journal of Theoretical Biology [**119**]{}, 1 (1986). S. Jain and S. Krishna, Phys. Rev. Lett. [**81**]{}, 5684 (1998). J. A. Doudna, J. W. Szostak, Nature [**339**]{}, 519 (1989). T. A. Lincoln and G. F. Joyce, Science [**323**]{}, 1229 (2009). W. Gilbert, Nature [**319**]{}, 618 (1986). L. E. Orgel, Critical Reviews in Biochemistry and Molecular Biology [**39**]{}, 99 (2004). M. P. Robertson and G. F. Joyce, Cold Spring Harbor Perspectives in Biology [**4**]{}, a003608 (2012). C. B. Mast, S. Schink, U. Gerland, and D. Braun, Proc. Natl. Acad. Sci. USA [**110**]{}, 8030 (2013). P.W. Anderson, Proc. Natl. Acad. Sci. USA [**80**]{}, 3386 (1983). P. W. Anderson, and D. L. Stein, in [*Self-Organizing Systems*]{} edited by F. E. Yates, A. Garfinkel, D. O. Walter, and G. B. Yates, (Springer US, 1987) pp. 445-457. J. Derr, M. L. Manapat, S. Rajamani, K. Leu, R. Xulvi-Brunet, I. Joseph, M. A. Nowak, and I. A. Chen, Nucleic Acids Research [**40**]{}, 4711 (2012). T. Bellini, G. Zanchetta, T. P. Fraccia, R. Cerbino, E. Tsai, G. P. Smith, M. J. Moran, D. M. Walba, and N. A. Clark, Proc. Natl. Acad. Sci. USA [**109**]{}, 1110 (2012). R. Rohatgi, D.P. Bartel, J. W. Szostak, J. Am. Chem. Soc. [**118**]{}, 3332 (1996). K. Vetsigian and N. Goldenfeld, Proc. Natl. Acad. Sci. USA [**106**]{}, 215 (2009).
|
---
abstract: 'The simulation of radioactive decays is a common task in Monte-Carlo systems such as Geant4. Usually, a system either uses an approach focusing on the simulation of every individual decay or an approach which simulates a large number of decays with a focus on correct overall statistics. The radioactive decay package presented in this work permits, for the first time, the use of both methods within the same simulation framework, Geant4. The accuracy of the statistical approach in our new package, RDM-extended, and that of the existing Geant4 per-decay implementation (original RDM), which has also been refactored, are verified against the ENSDF database. The new verified package is beneficial for a wide range of experimental scenarios, as it enables researchers to choose the most appropriate approach for their Geant4-based application.'
author:
- 'Steffen Hauf, Markus Kuster, Matej Batič, Zane W. Bell, Dieter H.H. Hoffmann, Philipp M. Lang, Stephan Neff, Maria Grazia Pia, Georg Weidenspointner and Andreas Zoglauer[^1][^2][^3][^4][^5][^6][^7]'
bibliography:
- 'IEEEabrv.bib'
- 'all.bib'
title: Radioactive Decays in Geant4
---
Geant4, Radioactive Decay, Monte-Carlo Simulation, Validation, ENSDF.
Introduction
============
Radioactive decays and the resulting radiation play an important role in many experiments, whether as an observable, as a background source, or even as a potential hazard, since they can cause radiation-induced damage to hardware and human beings. Detailed knowledge of the radiation inside and around an experiment and its detectors is thus required for a successful outcome of the experiment and to guarantee the safety of the operator. The increasing complexity of experiments often makes it prohibitively expensive, if not impossible, to completely determine the radiation characteristics and response of an experiment from measurements alone. In order to circumvent these limitations, it has become increasingly important to estimate an experiment’s radiation and response characteristics with the help of computer simulations.
General-purpose Monte-Carlo simulation codes either focus on the correct simulation of individual decays (e.g., Geant4 [@Agostinelli2003250; @2006ITNS...53..270A], see Section \[sec:problem\_domain\]) or the statistical outcome of many decays (e.g., MCNP [@MCNP; @MCNPX] and FLUKA [@2001amcr.conf..955F]).
Whereas the first approach may be inefficient if the individual decay is not of interest, the latter approach does not allow for the physically correct simulation of an individual decay and its associated effects. General purpose Monte Carlo codes would benefit from the capability of providing both approaches in the same environment, in response to the simulation requirements of different experimental scenarios.
In the following, a software package for the simulation of radioactive decay that realizes both approaches within Geant4 is presented. This package includes a refactored implementation of the existing Geant4 per-decay approach [@TruscottG4], and extends the functionality of Geant4 radioactive decay simulation with a novel implementation based on a statistical approach. It is based on the ENSDF (Evaluated Nuclear Structure Data File) data library [@ENSDF], which was chosen due to its widespread usage in the nuclear science community.
This paper reports on the verification of both implemented approaches against a large set of evaluated data. To the best of the authors’ knowledge, such a thorough verification of Geant4 radioactive decay simulation has not yet been documented in the literature. The experimental validation of the software for both approaches is reported in a separate paper [@RadDecay2012_2].
Radioactive Decay Physics {#sec:physics}
=========================
Radioactive decay is a physical process in which the nucleus of an unstable atom transitions to a lower energy state by spontaneous emission of ionizing radiation. The process does not require external interactions to occur: it results either from nucleus-internal processes or from interactions of the nucleus with (inner) shell electrons. A brief overview of the main physics of radioactive decay is given here to facilitate the comprehension of the functionality of the software described in this paper.
Different types of radioactive decay are commonly identified according to the type of emitted particles.
- During an $\mathrm{\alpha}$-decay a He–nucleus is emitted from the parent nucleus. This results in a daughter nucleus with two fewer protons and two fewer neutrons than the parent nucleus.
- The $\mathrm{\beta^{-}}$-decay is a weak process during which a neutron is converted into a proton. An electron and an anti-neutrino are emitted by the parent nucleus: consequently, the atomic number of the daughter nucleus increases by one and the atomic mass number stays constant. The electron and anti-neutrino share the energy released during the decay. Since neither particle is bound in its final state, their energy distribution follows a continuous spectrum.
- During a $\mathrm{\beta^{+}}$-decay a bound proton of a nucleus is converted into a neutron. A positron and a neutrino are emitted by the parent nucleus; the atomic number decreases by one and the atomic mass number stays constant. As in $\mathrm{\beta^{-}}$-decay, neither particle is bound in its final state, and accordingly their energy distribution follows a continuous spectrum.
- If a daughter nucleus is left in an excited state after a transmutation by one of the previously mentioned decay types, it can deexcite by emitting $\mathrm{\gamma}$-radiation. If the excited daughter state is long-lived (metastable), its deexcitation is called an isomeric transition, which also results in $\mathrm{\gamma}$-radiation. In both cases the atomic number and atomic mass number remain unchanged.
- During an electron capture, the parent nucleus absorbs an inner shell electron (usually K- and L-shell electrons) and simultaneously emits a neutrino. During this process, which is also called inverse $\mathrm{\beta}$-decay, a proton is transmuted into a neutron, thus the atomic number decreases by one and the atomic mass number stays constant. In contrast to a $\mathrm{\beta}$-decay, an electron capture is a two-body decay, resulting in a discrete neutrino energy.
- As an alternative process to $\mathrm{\gamma}$-emission, an excited nucleus can return to its ground state by transferring its excitation energy to one of the lower shell electrons of the atom. This process is called internal conversion and results in the emission of an electron by the atom, leaving the atom in an excited state. The electron carries a discrete fraction of the decay energy and is thereby distinct from $\mathrm{\beta}$-particles with their continuous energy spectra. As with $\mathrm{\gamma}$-decays, no transmutation of the nucleus takes place, and both the atomic number and atomic mass number remain unchanged.
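The $Z$/$A$ bookkeeping implied by the decay types above can be summarized in a small lookup table. The following sketch encodes the changes in atomic number and mass number described in the list; all names are illustrative and not taken from any library:

```python
# Change (dZ, dA) of the nucleus for each decay type listed above.
# Gamma emission, isomeric transition and internal conversion leave
# Z and A unchanged; electron capture acts like beta+ for the nucleus.
DECAY_DELTA = {
    "alpha":               (-2, -4),
    "beta-":               (+1,  0),
    "beta+":               (-1,  0),
    "electron_capture":    (-1,  0),
    "gamma":               ( 0,  0),
    "isomeric_transition": ( 0,  0),
    "internal_conversion": ( 0,  0),
}

def daughter(z, a, decay_type):
    """Return (Z, A) of the daughter nucleus after the given decay."""
    dz, da = DECAY_DELTA[decay_type]
    return z + dz, a + da

# Example: Co-60 (Z=27, A=60) beta- decays to Ni-60.
assert daughter(27, 60, "beta-") == (28, 60)
```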
Radioactive decay is a stochastic process. The time at which a given unstable atom decays is not predetermined; instead decays occur with a certain probability. In consequence, experiments will measure statistical observables such as the amount of ionizing radiation of a certain type and energy emitted within a given time period.
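As a minimal illustration of this stochastic behaviour (a generic sketch, not code from the package described here), the decay times of a single species follow an exponential distribution with decay constant $\lambda = \ln 2 / T_{1/2}$ and can be drawn by inverse-transform sampling:

```python
import math
import random

def sample_decay_time(half_life, rng=random.random):
    """Draw one decay time from the exponential distribution with
    decay constant lambda = ln(2) / half_life (inverse transform)."""
    lam = math.log(2.0) / half_life
    return -math.log(rng()) / lam

random.seed(42)
times = [sample_decay_time(half_life=10.0) for _ in range(100000)]
mean = sum(times) / len(times)
# The sample mean converges to the mean lifetime tau = T_1/2 / ln(2).
assert abs(mean - 10.0 / math.log(2.0)) < 0.2
```

Individual decay times are unpredictable, but the statistical observable (here the mean lifetime) is well defined, which is precisely the distinction exploited by the two sampling approaches of this paper.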
Since it is practically impossible to calculate all relevant parameters from theory, the simulation of radioactive decay physics for a large number of decays or decay chains relies on the use of empirical or pre-calculated data.
Experimental Scenarios {#sec:exp_req}
======================
Many experiments measure the time-accumulated statistical distributions of observables such as energy, type, momentum and timing of radiation resulting from radioactive decays. Often the radioactive decay products are not the intended observable but contribute to the experiment’s or application’s measured data as background.
Radioactive decay physics modeling plays an important role in various fields; the overview summarized here is not intended to be exhaustive.
Measurement and analysis of the properties of decay chains of naturally abundant radioactive isotopes is commonly performed in material sciences, radiation safety and nuclear proliferation monitoring. Radioactive decay modeling is also of interest to experimental scenarios involving the measurement of material properties after irradiation: for instance, of a sample irradiated by a neutron beam.
The study of activation and build-up of radioactive nuclei is relevant to various application domains: nuclear reactors (fission and fusion), particle accelerators and intense light sources, where the statistical effect of many decays and activations is relevant. It also concerns space-borne X-ray and $\mathrm{\gamma}$-ray instruments: in these scenarios meta-stable states usually must be accounted for, the statistical effect of many decays and activations matters, and individual decays additionally contribute to the prompt instrument background. For space-based detectors, estimating the in-orbit cosmic-ray induced background is important to distinguish individual radioactive emission from the intended observables.
Low background astro-particle physics experiments are concerned with the influence of the cosmic-ray induced background and natural radioactivity: here accurate simulation of the individual decays’ spatial and temporal distribution can be important.
Foundations for a Radioactive Decay Simulation
==============================================
Requirements {#sec:problem_domain}
------------
The simulation of radioactive decays consists of the task of decaying an unstable nucleus and generating the resulting products.
The decay of the parent nucleus should proceed according to the physical parameters governing it: decay type, initial excitation and half-life time. A daughter nucleus should be produced, with the physical properties resulting from the decay: atomic number and mass determined by the decay type, excitation energy and kinetics determined by parent kinetics and decay kinetics. Secondary particles and radiation associated with the decay (e.g. $\mathrm{\beta}$-, $\mathrm{\gamma}$- and $\mathrm{\alpha}$-emission, neutrinos and conversion electrons) should be generated.
The software should handle the deexcitation of the daughter atom, involving the production of $\mathrm{\gamma}$- and X-rays, and of Auger-electrons.
Theoretical calculations of the required parameters are not practically feasible in the course of the simulation; therefore the algorithm must use empirical or pre-calculated data.
Radioactive decays of nuclei are often associated with prior activation of stable nuclei of a given material; examples of such applications include shielding analysis and radiation safety analysis. This scenario requires the ability to deal with activation, and thus replenishment of nuclei, within the simulation. Meta-stable states and isomeric transitions should be taken into account.
In addition to these functional [@6146379], physics-induced requirements, the software should take into account non-functional ones, which derive from the experimental context of the simulation: the possibility to efficiently simulate a large number of decays, when the physical accuracy of the individual decay is of less importance, but overall statistics are relevant; the possibility of efficiently simulating decay chains, when intermediate products in a chain are of lesser interest; the provision of user input.
If radioactive decay simulation occurs in the context of a more general Monte Carlo simulation system, the software responsible for the radioactive decay process should interact with other components of the system.
Problem Domain Analysis {#sec:probl-doma-analys}
-----------------------
Software objects with specialized responsibilities collaborate to satisfy the requirements mentioned in Section \[sec:problem\_domain\]. For the code presented in this work the division of responsibilities is as follows:
- data management
- sampling of the (discrete) emission resulting from the individual decay and generation of the daughter nucleus
- calculation of the $\mathrm{\beta}$-emission spectrum,
- calculation of the number of nuclei within a decay chain — which may include activation
- the user interface
- the interface with the Monte-Carlo code
The ENSDF Data
==============
The sampling of radioactive decays will usually rely on the use of empirical or pre-calculated data. The Evaluated Nuclear Structure Datafile (ENSDF) [@ENSDF] is one such collection of data, which for instance contains information on half-life times, decay types, branching ratios, emission energies and transition types. It is maintained by the National Nuclear Data Center (NNDC) and distributed, amongst others, by the International Atomic Energy Agency (IAEA).
ENSDF is an evaluated library, i.e. it contains data from experiments and theoretical calculations which are recommended for use as a reference after a critical analysis of uncertainties, inter- and extrapolation methods and underlying models has been performed. IAEA defines it as the master library for evaluated experimental structure and decay data [@ensdf_iaea]. Other specialized libraries and bibliographic databases, such as NuDAT [@sonzogni:574], CINDA [@cinda] and NSR [@Pritychenko2011213], exist as well, but are commonly derived from or related to ENSDF. Therefore ENSDF is frequently considered the de-facto standard for nuclear structure and radiation data.
For $\mathrm{\gamma}$-ray intensities and energies the ENSDF data usually consists of evaluated measurements. Conversion electron intensities and energies are derived from theory.
However, the data present in ENSDF are not sufficient to calculate all atomic deexcitation emissions. Specifically, data on electron binding energies, fluorescence and Auger-electron yields are necessary to calculate the respective intensities. To mitigate this problem, analysis programs distributed with ENSDF, such as [RADLST]{} [@radlist], are supplied with the necessary information taken from data of Bearden and Burr [@RevModPhys.39.125] and Bambynek *et al.* [@1972RvMP...44..716B]. In order to stay consistent with these ENSDF-related programs, the aforementioned data are also used to derive quantities for the radioactive decay database of the RDM-extended package and the verification data used in this work.
Radioactive Decay in Monte Carlo Codes {#sec:MCCodes}
======================================
Models for the simulation of radioactive decays exist in most general-purpose Monte-Carlo codes.
MCNP(X) [@MCNP; @MCNPX] does not include a full radioactive decay simulation by default, except for the generation of delayed $\mathrm{\gamma}$-rays resulting from decays sampled from MCNP's photon data library (phtlib). Instead, MCNP can be linked via scripts to specialized codes such as ORIGEN2 [@Croff:Origen2] or CINDER90 [@CINDER90; @2005AIPC..769..195G], both of which use their own data libraries. These codes are generally used to model reactor fuel cycles and accelerator-induced transmutations, but also provide functionality for simulating radioactive decays and decay chains. Due to the codes' nature, they include replenishment through activation. The $\mathrm{\alpha}$- and $\mathrm{\beta}$-emissions need to be generated from tabulated user input. As a result, MCNP is specialized in the simulation of many decays and the resulting statistics.
FLUKA [@2001amcr.conf..955F] generates and transports $\mathrm{\beta}$- and $\mathrm{\gamma}$-radiation, but only started including $\mathrm{\alpha}$-radiation in its latest release. Decay chains are possible and include replenishment through activation. FLUKA uses its own data libraries, largely based on NNDC (National Nuclear Data Center) data and thus ENSDF. Similar to MCNP, the emphasis lies on the simulation of a large number of decays.
In Geant4 [@Agostinelli2003250; @2006ITNS...53..270A] radioactive decays are treated on a per-decay level, based on data taken from ENSDF [@ENSDF]. Decay chains including activation are possible and produce the associated decay emission. The $\mathrm{\alpha}$- and $\mathrm{\beta}$-emission are sampled from the decay database. Deexcitation radiation of the daughter nucleus is not produced by the radioactive decay simulation itself, but by other physics processes included in Geant4, which use their respective databases. The emphasis lies on the per-decay simulation, not on the sampling of a large number of decays.
None of the above-mentioned Monte-Carlo codes allows both the simulation of individual decays and a statistical treatment of many decays within the same software environment.
Radioactive Decay in Geant4 {#sec:geant4_decay}
===========================
A package for the simulation of radioactive decays [@TruscottG4], [@896281] has been available in Geant4 since version 2.0, where it was named [*radiative\_decay*]{}. Since Geant4 version 6.0 it has been named [*radioactive\_decay*]{}, although it is conventionally known as the Geant4 RDM (Radioactive Decay Module). This code was originally developed by P. Truscott and F. Lei; it implements per-decay sampling.
The following discussion is based on the radioactive decay code of Geant4 9.4p04, but is also pertinent to subsequent versions 9.5 and 9.6, the latter one being current at the time of submission of this paper. In these latter versions the problem of not producing fluorescence emission in case of decays other than electron capture has been addressed and the handling of forbidden $\mathrm{\beta}$-decays has been added, but all other features and problems of the implementation mentioned in this work are still valid.
A Unified Modeling Language (UML)[@Rumbaugh2004] class diagram of the RDM code in Geant4 9.4p04 is shown in Fig. \[fig:class\_old\]. It shows the cooperation between the different classes in the code. This diagram is supplemented by the activity diagram shown in Fig. \[fig:activity\_old\]. The two diagrams highlight problems [@Fowler:424198] inherent in the code’s design:
- Since each decay type is defined by distinctive physics, it would make sense to implement the individual types as separate objects. Instead, in the original Geant4 RDM package all decay types are implemented together in the [*G4NuclearDecayChannel*]{} class. The decay type classes merely provide an interface to this class with decay type-dependent initialization parameters. This complicates unit tests of individual components. Additionally, a much larger amount of code has to be checked if, for instance, an error is found for one decay type.
- Whereas the decay physics for each type is distinct, the interface to each type is similar: all decay type objects should have a method to produce decay emission and a daughter nucleus. Such an interface could be provided by a common base class. In the existing design the [*G4RadioactiveDecay*]{} class needs to know the implementation and interface details of each individual decay type.
- Objects should have one specific responsibility, e.g. the interface to the data library. This is not the case: the [*G4RadioactiveDecay*]{} class is responsible for initializing and loading values from the data libraries, initializing the decay types and the variance reduction and decay chain handling. Such a design again enlarges the fraction of code which needs to be checked for a specific error or maintained for the update of a specific responsibility. Additionally, due to inadequate domain decomposition, two distinct responsibilities (the simulation of radioactive decay and event biasing) are mixed in the same class.
- In case of the $\mathrm{\beta}$-Fermi-function implementation the interface is implementation-dependent. Changing the algorithm may thus also involve changing the interface — and as a result all other code parts depending on this class.
A new package, named RDM-extended, has been developed to address existing issues of Geant4 RDM, and to extend and improve the capabilities of radioactive decay simulation in a Geant4-based environment.
The software design of the RDM-extended package follows an object-oriented programming approach with clear responsibility definitions. For this design the relevant entities and requirements for radioactive decay physics have been identified. Decay type-dependent approaches have been considered alongside common tasks for all decay types. The design also takes into account that two sampling methods are to be handled within a common framework, and that the external classes the code depends on may be subject to interface changes.
The UML diagrams in Fig. \[fig:class\_new\] and \[fig:activity\_new\] document that the responsibilities of objects are clearly defined and that functionality is neither duplicated nor aggregated into non-specialized classes.
An example for implementing functionality only once is the class [*G4RandomDirection*]{}, which provides functionality for sampling random particle momentum vectors to the classes modeling the individual decay types and the statistical sampling.
The [*G4RadioactiveDecay*]{} class of the original RDM (Fig. \[fig:class\_old\]) is an example of a non-specialized class: it is responsible for physics simulation, data preparation and decay chain handling. In contrast the [*G4RadioactiveDecay*]{} class of the RDM-extended is a pure management class, which coordinates the interaction of specialized classes for the aforementioned tasks: the different emission classes (physics simulation), [*G4RadioactiveController*]{} (data-management) and [*G4DecayChainSolver*]{} (decay chains).
The RDM-extended package also respects encapsulation rules. Furthermore, physics functionality implemented in the different classes can be combined as needed, thus allowing both sampling approaches to use a common code base for functionality required by both.
The addition of a statistical sampling approach is reflected in the activity diagram: the initial configuration and library data access are common for both design approaches. They differ after a decay channel is chosen: in the per-decay approach fluorescence emission is sampled for electron capture decays by delegating responsibility to the [*G4AtomicDeexcitation*]{} class. At a later point the deexcitation emission of the nucleus is sampled by using the [*G4PhotonEvaporation*]{} class. If the statistical approach is chosen instead, all photon emission and discrete electron emission processes are sampled simultaneously at the end of handling decay physics.
Per-Decay Sampling {#sec:per_decay}
==================
Per-decay sampling is already present in Geant4. This original RDM code provides the required functionality: therefore the general approaches to physical treatment and data handling were preserved, but refactored before their inclusion into the RDM-extended in order to conform to the design discussed in the previous section. This refactored per-decay sampling is consistent with the original RDM. Additionally, small errors in the physical treatment were addressed. As a result the structure of the code is different from the original one, but the inherent functionality is conserved.
Per-decay sampling is based on reprocessed ENSDF data both in the original RDM and the RDM-extended package. For the RDM-extended we reprocessed the data using a parser, which implements the ENSDF format as given in the ENSDF manual [@ENSDF]. The Geant4 classes handling nuclear and atomic deexcitation in the per-decay approach use their own libraries. For the [*G4PhotonEvaporation*]{} class (nuclear deexcitation) this is ENSDF-based, with conversion electron probabilities compiled from [@Band1976433; @Roesel197891; @Hager19681]. The [*G4AtomicDeexcitation*]{} class (atomic deexcitation), which has been previously validated in [@Pia4237413] and [@pia2009validation], uses EADL [@Perkins:236347] data. Concerns about the accuracy of the EADL data have been reported in [@pia2011evaluation].
The electron capture probabilities included in both radioactive decay libraries are not given in ENSDF directly. They need to be calculated using an additional data source, which gives fluorescence yields and binding energy information. For consistency reasons the atomic data file distributed with ENSDF is used for the RDM-extended package, which is based on data by Bearden and Burr and Bambynek [*et al.*]{} [@RevModPhys.39.125; @1972RvMP...44..716B]. This also facilitates comparisons with ENSDF-based online-databases such as NuDat, which use the same data.
Further refactoring is foreseen to use a package for atomic data management exploiting the results of [@2012JPhCS.396b2039B] and an improved package for the simulation of atomic relaxation, which are currently under development. These packages are intended to satisfy requirements common to the simulation of radioactive decay and of electromagnetic interactions.
The radioactive decay database of both codes is supplied in plain text on a per isotope basis.
Energy Conservation
-------------------
For a physically correct outcome, energy needs to be conserved in the decay itself, in all decay emissions, and in the deexcitation of the daughter nucleus. Nuclear deexcitation is handled by the [*G4PhotonEvaporation*]{} class, which uses a reprocessed ENSDF-based data library different from that of the radioactive decay class. Since it is not guaranteed that the level energies in the two data libraries are exactly the same, a possible deviation has to be taken into account. This is done in the original Geant4 RDM on three levels:
- If the level energy passed by the radioactive decay code does not correspond to a tabulated level in the [*G4PhotonEvaporation*]{} library, the nearest level present in the [*G4PhotonEvaporation*]{} database is used to retrieve the possible transitions. The initial deexcitation step occurs from the energy passed by the decay code and will transition to an energy tabulated in the deexcitation library. The energy of the first $\mathrm{\gamma}$-ray will thus not be in accordance with the tabulated evaporation data, but deviate by the difference between the tabulated radioactive decay and evaporation library values. Further transitions will then result in emission at energies in accordance with the [*G4PhotonEvaporation*]{} data.
- For all transitions along a deexcitation chain it is checked if the level resulting from the transition is within a tolerance of $1\,\mathrm{keV}$ of the ground state. If this is the case, the photon energy is set to the energy of the excited level and the nucleus is deexcited to the ground state. Again energy is conserved, but the energy of the emitted photon may not be in accordance with the tabulated data.
- The final state energy is passed back to the radioactive decay code. In case this energy is not $0$, but below $1\,\mathrm{keV}$, the daughter’s excitation energy is set to $0\,\mathrm{keV}$. Here energy is not conserved. Otherwise the decay code outputs an excited (possibly meta-stable) daughter nucleus as defined by the data library.
The first two conservation treatments are necessary if slightly divergent data libraries exist. They are handled internally by the [*G4PhotonEvaporation*]{} class for all processes which delegate to it, and have thus not been altered as part of the radioactive decay code development. Rectifying these divergences would require a consolidation of the Geant4 data libraries, which exceeds the scope of this work. The last approach to energy conservation is physically unjustifiable and has accordingly been corrected in the refactored implementation: in case of an energy mismatch, the energy of an excited nucleus is deposited in the geometrical volume it is located in. Whereas this does not address the library inconsistency which leads to the energy mismatch, it does conserve energy and is accordingly seen as the preferable solution.
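The tolerance handling described above can be sketched as follows. The level energies and the $1\,\mathrm{keV}$ threshold mirror the text, while all function and variable names are illustrative and not taken from the Geant4 source:

```python
TOLERANCE_KEV = 1.0  # tolerance used in the text for near-ground-state levels

def match_level(energy_kev, tabulated_levels_kev):
    """Map an excitation energy from the decay library onto the nearest
    level of the (possibly slightly divergent) evaporation library."""
    return min(tabulated_levels_kev, key=lambda e: abs(e - energy_kev))

def finalize_state(residual_kev):
    """RDM-extended behaviour: a residual excitation below tolerance is
    deposited in the local volume instead of being silently discarded."""
    if 0.0 < residual_kev < TOLERANCE_KEV:
        return {"excitation": 0.0, "deposited_locally": residual_kev}
    return {"excitation": residual_kev, "deposited_locally": 0.0}

# A 1173.4 keV library value is matched to the nearest tabulated level.
assert match_level(1173.4, [0.0, 58.6, 1173.2, 2505.7]) == 1173.2
# A 0.2 keV mismatch is deposited locally, conserving energy.
assert finalize_state(0.2) == {"excitation": 0.0, "deposited_locally": 0.2}
```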
Momentum Conservation and Recoil of Nucleus
-------------------------------------------
Like energy, momentum should be conserved throughout the complete decay simulation. In the per-decay approach nuclear deexcitation is delegated to [*G4PhotonEvaporation*]{}, which also generates the momenta of the $\mathrm{\gamma}$-rays and conversion electrons in a deexcitation cascade. The radioactive decay simulation is then responsible for accounting for the resulting recoil of the daughter nucleus. Similarly, the momenta of any fluorescence and Auger emission are sampled by the [*G4AtomicDeexcitation*]{} class and should also be taken into account as vectorial quantities. This is not the case in the original Geant4 RDM code: there, the difference between the summed kinetic energies of all produced emissions and the binding energy of the innermost vacated shell is added to the kinetic energy of the daughter nucleus. Whereas this computation of scalar values might result in a marginal performance increase, it neglects that the emission is not unidirectional, but isotropic. A full vectorial treatment is the physically accurate solution and is thus implemented in the RDM-extended package.
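The vectorial treatment can be sketched in a few lines: in the parent rest frame, the (non-relativistic) recoil momentum of the daughter is the negative vector sum of all emitted momenta, in contrast to scalar energy bookkeeping. This is an illustrative sketch, not the package's code:

```python
def recoil_momentum(emission_momenta):
    """Momentum conservation: in the parent rest frame the daughter
    nucleus recoils with the negative vector sum of all emitted momenta
    (gammas, conversion electrons, fluorescence, Auger electrons, ...)."""
    px = -sum(p[0] for p in emission_momenta)
    py = -sum(p[1] for p in emission_momenta)
    pz = -sum(p[2] for p in emission_momenta)
    return (px, py, pz)

# Two emissions back-to-back leave the daughter at rest;
# a single emission gives an equal and opposite recoil.
assert recoil_momentum([(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]) == (0.0, 0.0, 0.0)
assert recoil_momentum([(0.0, 0.5, 0.0)]) == (0.0, -0.5, 0.0)
```

A scalar treatment would assign the same recoil energy regardless of emission directions, which is exactly the shortcoming of the original RDM noted above.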
Statistical Sampling {#sec:stat}
====================
The statistical sampling approach discussed in this section is a novelty for Geant4-based radioactive decay simulation. The necessity for such an approach has become evident as a consequence of the experimental requirements identified in Section \[sec:exp\_req\]. Statistical sampling in this context means that the full simulation of an individual decay is considered of lesser importance as long as the emission and nuclei produced by many decays on average lead to a physically correct result.
Because all decay emissions are treated as independent of each other and the intensity of each emission is known, sampling generally reduces to the problem of repeated random number generation within the intensity range of $0\ldots 1$. Additionally, one has to take into account that intensities $I_\mathrm{{em}}>1$ may exist, if both nuclear and atomic deexcitation result in radiation at the same energy. In the latter case at least $\lfloor I_\mathrm{{em}}\rfloor$ emissions will be generated for every decay.
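This sampling rule can be sketched as follows: the integer part of the per-decay intensity is emitted deterministically, and the fractional part is decided by a Bernoulli trial. The sketch is illustrative; names are not taken from the package:

```python
import math
import random

def sample_emission_count(intensity, rng=random.random):
    """Number of emissions generated for one decay, given a tabulated
    per-decay intensity (which may exceed 1 if nuclear and atomic
    deexcitation contribute at the same energy)."""
    base = math.floor(intensity)
    return base + (1 if rng() < intensity - base else 0)

random.seed(1)
# An intensity of 1.3 yields at least one emission per decay and,
# averaged over many decays, 1.3 emissions per decay.
counts = [sample_emission_count(1.3) for _ in range(200000)]
assert min(counts) == 1
assert abs(sum(counts) / len(counts) - 1.3) < 0.01
```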
This approach does not require taking allowed or forbidden level transitions into account during nuclear deexcitation, since the order of occurrence of transitions is considered irrelevant as long as, for a large enough number of transitions, each occurs with the correct probability and thus yields the correct intensity.
These intensities are included in ENSDF alongside normalization information and as such all required decay information may be retrieved from a single consistent library. Accordingly, energy mismatches between libraries do not occur and no interdependencies between different physics classes exist. As a consequence, an alteration of tabulated library values will manifest itself in a straight-forward and immediate fashion in the simulated decay intensities and energies.
The conservation of energy and momentum is simplified, as the momenta and energies of all decay emissions are known to the radioactive decay simulation. Accordingly, they can be taken into account in a physically meaningful fashion. In consequence, statistical sampling is a more straightforward, ENSDF-consistent and performant approach to simulating radioactive decays for the majority of experimental applications which do not require knowledge of individual decays.
Sampling-Method-Independent Functionality in the RDM-Extended Radioactive Decay Code
====================================================================================
In the previous two sections a distinction between the per-decay and statistical sampling approach was made. For a complete solution to radioactive decay physics, in accordance with the requirements, sampling schemes for $\mathrm{\beta}$-emission and decay chains are necessary. These are approach-independent as they occur in both sampling scenarios.
$\mathrm{\beta}$-Fermi-Function - Sampling of the Continuous $\mathrm{\beta}$-Spectrum {#sec:bf_em}
--------------------------------------------------------------------------------------
$\mathrm{\beta}$-decay is an unbound three-body decay and the emitted radiation will thus have a continuous spectrum. This spectrum can be sampled using the $\mathrm{\beta}$-Fermi-function, which also takes corrections resulting from the interaction of the charged nucleus with the $\mathrm{\beta}$-particle into account.
The parameters passed to the $\mathrm{\beta}$-Fermi function should be physics-relevant, but code-independent. In particular this means that, instead of passing a binning-scheme-dependent energy, as is done in the original Geant4 RDM implementation, the physical parameters $Z$ and endpoint energy $E_0$, as well as the decay type ($\mathrm{\beta^{-}}$ or $\mathrm{\beta^{+}}$) and optionally the forbiddenness, are passed in the RDM-extended implementation. This parameter set is sufficient for many Fermi-function approximations, like those summarized by Venkataramaiah [*et al.*]{} in [@Venkataramaiah]. Currently, the computationally most performant approximation given therein is used in the RDM-extended package for calculating the Fermi correction factor $F(Z,E)$ for an isotope with atomic number $Z$ and total energy $E$ (given in MeV): $$F(Z,E) = [A+B/(E-1)]^{\frac{1}{2}}.$$ The constants $A$ and $B$ were determined by Venkataramaiah [*et al.*]{} through linear regression using the data from Rose [@rose1955], and were found to satisfy:
$$A = \begin{cases}
1+a_{0}\,\mathrm{exp}(b_{0}Z) & \text{for } Z\ge16\\
7.3\times 10^{-2}\,Z+9.4\times 10^{-1} & \text{for } Z<16
\end{cases}$$
with $$\begin{aligned}
\nonumber a_{0} & = & 404.56\times 10^{-3} \\
\nonumber b_{0} & = & 73.184\times 10^{-3} \end{aligned}$$ and $$B = a\,Z\,\mathrm{exp}(bZ)$$ with
$$a = \begin{cases}
5.5465\times 10^{-3} & \text{for } Z\le56\\
1.2277\times 10^{-3} & \text{for } Z>56
\end{cases}$$
$$b = \begin{cases}
76.929\times 10^{-3} & \text{for } Z\le56\\
101.22\times 10^{-3} & \text{for } Z>56.
\end{cases}$$
The correction factor $F(Z,E)$ is then input into the $\mathrm{\beta}$-Fermi function $$N(p)\,dp=F(Z,E)\, p^2 \,(E_0-E)^2 \,dp,$$ where $E_0$ is the end-point energy of the $\mathrm{\beta}$-spectrum, obtained from the tabulated radioactive decay data library, and $E$ is the total energy of a $\mathrm{\beta}$-particle with momentum $p$. If a given parameter set is computed for the first time, a tabulated energy distribution is calculated, from which the $\mathrm{\beta}$-particle energy is drawn. Future occurrences of the same parameter set draw particle energies directly from this distribution, thereby minimizing processing time. Venkataramaiah [*et al.*]{} found this approximation to be accurate within a one-percent margin of error when compared to the tabulated values of Rose [@rose1955]. Because the interface of the $\mathrm{\beta}$-Fermi-function class is independent of the model implemented therein, the user can easily replace the RDM-extended's $\mathrm{\beta}$-Fermi-function approximation with one that has better accuracy for a certain isotope.
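The approximation can be transcribed directly; the constants and case boundaries below follow the equations in the text, while the function names and the simple sanity checks are illustrative additions, not the package's implementation:

```python
import math

def fermi_correction(z, e_total):
    """Fermi correction factor F(Z, E) using the Venkataramaiah et al.
    approximation transcribed from the equations in the text."""
    if z >= 16:
        a0, b0 = 404.56e-3, 73.184e-3
        big_a = 1.0 + a0 * math.exp(b0 * z)
    else:
        big_a = 7.3e-2 * z + 9.4e-1
    a = 5.5465e-3 if z <= 56 else 1.2277e-3
    b = 76.929e-3 if z <= 56 else 101.22e-3
    big_b = a * z * math.exp(b * z)
    return math.sqrt(big_a + big_b / (e_total - 1.0))

def beta_spectrum(z, e_total, p, e_endpoint):
    """Unnormalized beta spectrum N(p) = F(Z,E) * p^2 * (E0 - E)^2."""
    return fermi_correction(z, e_total) * p**2 * (e_endpoint - e_total)**2

# F grows with Z at fixed energy (stronger Coulomb attraction),
# and the spectrum vanishes at the endpoint E = E0.
assert fermi_correction(60, 1.5) > fermi_correction(20, 1.5)
assert beta_spectrum(27, 2.0, 1.7, 2.0) == 0.0
```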
Decay Chains and Activation {#sec:bateman}
---------------------------
Isotopes resulting from a radioactive decay are often unstable themselves. This leads to chains of subsequent decays until a stable daughter product is reached. Often it is desirable to only fully simulate those isotopes in a chain which are of interest as observables, either due to their half life or due to the radiation emitted when they decay. In such a case a full Monte-Carlo simulation of every decay is very inefficient. The sampling of interesting isotopes only requires determining the number of nuclei of a given species present in a chain at a specific time.
Additionally, radioactive isotopes may be created by nuclear activation, often as a result of proton or neutron collisions with a nucleus. While the collisions and the resulting activation are handled by the hadronic processes of Geant4 [@2006AIPC..867..479W], bookkeeping of the activation buildup and the decay of the created unstable nuclei are tasks for the radioactive decay simulation. In the RDM-extended the activation simulation is also capable of altering the material composition on a per-volume basis to the isotope composition present at discrete user-defined time steps.
For a system of $n$ nuclei, where the $i$th nucleus decays into the $(i+1)$th nucleus of the chain, this calculation can be done by solving a system of coupled differential equations with the general form:
$$\begin{aligned}
\label{eqn:dgl2}
\frac{dN_{1}(t)}{dt} & = & -k_{1} N_{1}(t) \nonumber \\
\frac{dN_{2}(t)}{dt} & = & \lambda_{1}N_{1}(t)-k_{2} N_{2}(t) \nonumber \\
& \vdots & \nonumber \\
\frac{dN_{n}(t)}{dt} & = & \lambda_{n-1}N_{n-1}(t)-k_{n} N_{n}(t)\end{aligned}$$
with $N_{i}$ being the quantity of the $i$th nucleus at time $t$ and $k_{i}=\lambda_{i}+\alpha_{i}$ containing information on the decay constant $\lambda_{i}$ and a constant particle number dependent activation rate $\alpha_{i}$.
A general approach for solving equation (\[eqn:dgl2\]) for any number of products was first derived by Bateman [@Bateman1910] and the above equations have as such become known as the Bateman equations.
One of the main disadvantages of the original Bateman solution is that it does not consider branching, i.e. the case where a parent nucleus can decay to different daughter nuclei via different decay types. Many computational algorithms using Bateman’s approach or a derivative thereof exist, with the one implemented in the original radioactive decay code [@TruscottG4] being just one example which overcomes the branching limitation and includes activation. These algorithms are generally computationally expensive, in the sense that they require recursive loops for each nucleus in a decay chain, but may achieve good overall performance if the calculations can be reused.
An alternative is the algebraic approach derived by M. Amaku [*et al.*]{} in [@2010CoPhC.181...21A], building upon work by R. J. Onega [@onega:1019], D. Pressyanov [@2002AmJPh..70..444P], L. Moral and A. F. Pacheco [@2003AmJPh..71..684M], as well as T. M. Semkow [@2004AmJPh..72..410S] and D. Yuan and W. Kernan [@2007JAP...101i4907Y]. In this approach the properties governing the decay chain are written into a matrix in Hessenberg form $$\Lambda = \begin{bmatrix}
-k_{1} & 0 & 0 & 0 \\
\lambda_{1} & -k_{2} & 0 & 0 \\
0 & \ddots & \ddots & 0 \\
0 & 0 & \lambda_{n-1} & -k_{n} \\
\end{bmatrix}$$ and a vector $\mathbf{N}(t)$ with $$\mathbf{N}(t) = \begin{bmatrix}
N_{1}(t) \\
N_{2}(t) \\
\vdots \\
N_{i}(t) \\
\end{bmatrix}.$$ The system of equations (\[eqn:dgl2\]) may then be written as $$\frac{d\mathbf{N}}{dt} = \Lambda\mathbf{N}.$$ This original algebraic approach was introduced by Onega in 1969 and extended by Yuan and Kernan as well as Semkow to include branching by modifying $\Lambda$ to $$\Lambda = \begin{bmatrix}
\Lambda_{11} & & & & \\
\Lambda_{21} & \Lambda_{22} & & & \\
\vdots & \vdots & & \hspace{-1.5cm} \ddots & & \\
\Lambda_{i1} & \Lambda_{i2} &\dots & \Lambda_{i,i} \\
\vdots & \vdots & & \vdots & \hspace{-1.cm}\ddots\\
\Lambda_{n1} & \Lambda_{n2} & \dots &\Lambda_{ni} & \Lambda_{nn} \\
\end{bmatrix}$$ with $$\label{eqn:lambdBranch}
\Lambda_{ij} = \lambda_{j}b_{ij}$$ for $i>j$ and $$\Lambda_{ii} = -k_{i}.$$ In equation (\[eqn:lambdBranch\]) $b_{ij}$ denotes the branching ratio from the $j$th to the $i$th component of the decay chain with $\sum^{n}_{i=j+1} b_{ij} = 1$. In computational practice $\Lambda$ can easily be constructed by iterating through the decay chain and taking activation rates into account, if necessary. Because $\Lambda$ is independent of the actual nuclei numbers, it need only be constructed once for each set of nuclei and activations characterizing a given chain. Calculations using different nuclei numbers can reuse these initial matrices as needed. Using the above definitions, the number of nuclei of each species in the decay chain is then given by $$\mathbf{N}(t) = e^{\Lambda t}\mathbf{N}(0)$$ which can be rewritten to $$\label{eqn:sol}
\mathbf{N}(t) = C\,e^{\Lambda_{d}t}\,C^{-1}\,\mathbf{N}(0)$$ as described by Onega. In equation (\[eqn:sol\]) $C$ is a square matrix with the $n$th column consisting of the $n$th eigenvector of $\Lambda$, so that $C =
[\mathbf{c}_{1},\mathbf{c}_{2},\dots,\mathbf{c}_{n}]$. $C^{-1}$ is its inverse and $\Lambda_{d}$ a diagonal matrix with the elements $\Lambda_{d,nn}$ being the $n$th eigenvalue of $\Lambda$.
M. Amaku [*et al.*]{} then derive an algorithmic approach for calculating the matrices $C$, $C^{-1}$ and $\Lambda_{d}$, which is computationally less expensive and more accurate than a general approach of numerically calculating the matrix elements. The elements of $C=[c_{ij}]$ can be calculated with the recurrence expression $$\label{eqn:ccalc}
c_{ij} = \frac{\sum^{i-1}_{k=j} \Lambda_{ik}c_{kj}}{\Lambda_{jj} - \Lambda_{ii}}$$ for $i=2,\dots,n$, $j=1,\dots,i-1$ and $c_{jj}=1$. Similarly the elements of $C^{-1}=[c^{-1}_{ij}]$ are given by $$\label{eqn:cinvcalc}
c^{-1}_{ij} = \sum^{i-1}_{k=j} c_{ik}b_{kj}$$ for $i>j$ and $b_{jj}=1$. Using $C$ and $C^{-1}$, $\Lambda_{d}$ is given by $$\label{eqn:lambda}
\Lambda_{d} = C^{-1}\,\Lambda\,C$$ resulting in a general form for equation (\[eqn:sol\]) of $$\label{eqn:batemanfinal}
\mathbf{N}(t) = C \begin{bmatrix}
e^{\Lambda_{d,11}t} & & & \\
& e^{\Lambda_{d,22}t} & & \\
& & \ddots & \\
& & & e^{\Lambda_{d,nn}t}
\end{bmatrix} C^{-1} \mathbf{N}(0).$$ In the RDM-extended code the replenishment and generation of unstable isotopes through activation is calculated using the above equations on a per-volume level. In a first simulation run, a bookkeeping class [*G4ActivationBookkeeping*]{} keeps track of all activations occurring in a given volume and calculates the new material and isotope composition of the volume at a given time. In a second run, the material properties are updated within the geometry, and the volumes are set as radioactive background sources with the calculated activity, via the [*G4RadDecayVolumeBookkeeping*]{} class. This class samples the radioactive background emission, which is generated alongside a primary particle and pushed onto the event stack. In this way minimal user intervention is required.
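As an illustration, the construction of $\Lambda$, the recurrence (\[eqn:ccalc\]) for $C$, and the evaluation of equation (\[eqn:batemanfinal\]) can be sketched in a few lines. This is a simplified sketch, not the RDM-extended implementation: it handles a linear chain without branching or activation (so $k_i = \lambda_i$ and $b = 1$) and assumes pairwise distinct rates.

```python
import numpy as np

def chain_population(k, n0, t):
    """Population of each species in a linear decay chain at time t,
    via N(t) = C exp(Lambda_d t) C^{-1} N(0).  k[i] is the total
    removal rate of species i; feed-down uses the full rate
    (no activation, k_i = lambda_i) and rates must be distinct."""
    n = len(k)
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = -k[i]
        if i > 0:
            L[i, i - 1] = k[i - 1]       # branching ratio b = 1
    # Recurrence for the eigenvector matrix C (Amaku et al.):
    C = np.identity(n)
    for i in range(1, n):
        for j in range(i):
            C[i, j] = np.dot(L[i, j:i], C[j:i, j]) / (L[j, j] - L[i, i])
    Cinv = np.linalg.inv(C)
    lam_d = np.diag(L)                   # eigenvalues of triangular L
    return C @ (np.exp(lam_d * t) * (Cinv @ np.asarray(n0, float)))
```

For a two-species chain this reproduces the classical Bateman solution $N_2(t) = N_1(0)\,\lambda_1/(\lambda_2-\lambda_1)\,(e^{-\lambda_1 t}-e^{-\lambda_2 t})$, and since $\Lambda$ and $C$ depend only on the rates, they can be reused across calls, as the text describes.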
Verification of Sampling Methods {#sec:verification}
================================
The radioactive code implementations have been verified for consistency with ENSDF data. Since ENSDF is established in the experimental community as an authoritative reference, comparison of Monte Carlo models against it provides valuable information for the users of these codes. The results presented here are chosen to highlight the physics-performance improvements of the statistical sampling in the RDM-extended package when compared to the per-decay sampling of the original Geant4 RDM code. Accordingly, the refactored per-decay code, which has been verified to produce equivalent results to the approach used in the original Geant4 RDM, is not detailed further.
In addition to the $\mathrm{\gamma}$-ray and conversion electron emission directly given in ENSDF, the verification includes a comparison of the intensities and energies of fluorescence and Auger emission against the Bambynek [*et al.*]{} data [@1972RvMP...44..716B].
Simulated Data Production {#sec:prep}
-------------------------
The Geant4 simulations consisted of $10^{6}$ decays of an unstable nucleus in an otherwise empty geometry. Any resulting unstable daughter nuclei were not decayed further. The kinetic energy of radiation and particles resulting from these decays was recorded separately for each radiation type ($\mathrm{\alpha}$, $\mathrm{\beta}$, $\mathrm{\gamma}$, non-$\mathrm{\beta}$ electrons) into two histograms: the first ranging from $0$ to $30\,\mathrm{keV}$ with a bin width of $0.05\,\mathrm{keV}$, the second ranging from $0$ to $30\,\mathrm{MeV}$ with a bin width of $0.2\,\mathrm{keV}$. The two binning schemes were chosen in order to properly distinguish discrete radiation in the X-ray and $\mathrm{\gamma}$-energy regimes.
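The double-binning scheme can be sketched as follows; this is a minimal NumPy illustration, and the function and variable names are not from the RDM-extended code.

```python
import numpy as np

# Two binnings as described: fine bins below 30 keV to resolve X-ray
# lines, coarser bins up to 30 MeV for the full spectrum (all in keV).
edges_low = np.linspace(0.0, 30.0, 601)         # 0.05 keV bin width
edges_high = np.linspace(0.0, 30_000.0, 150_001)  # 0.2 keV bin width

def record(energies_kev):
    """Histogram the kinetic energies of one radiation type into
    both binning schemes; energies outside a range are dropped."""
    h_low, _ = np.histogram(energies_kev, bins=edges_low)
    h_high, _ = np.histogram(energies_kev, bins=edges_high)
    return h_low, h_high
```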
Using the above approach, data for $2910$ parent excitation levels of isotopes were simulated using the original RDM code with radioactive decay database version 3.3 and the per-decay simulation. The statistical approach simulations included $3040$ parent isotopes and excitation levels, i.e. $130$ more than the sample used for the production with per-decay sampling. The numbers differ due to the different versions of ENSDF data and the different parsers used for reprocessing. For both sets of simulations $\mathrm{\beta}$-emission was distinguished from Auger- and conversion electron emission.
Evaluated Data Preparation
--------------------------
The verification data were extracted from ENSDF into a tabulated form suitable for further analysis using the RADLIST [@radlist] program. By using this ENSDF-supplied parser it was assured that no errors were introduced into the evaluated data during the automated extraction.
Data analysis
-------------
The automated comparative analysis takes the level and particle energies from the evaluated data as an initial energy estimate $E_{\mathrm{0,level}}$ or $E_{\mathrm{0,particle}}$ for where emission may be present in the simulated data. The simulated data histograms are then scanned for emission events in an interval of $E_{0}\pm0.5\,\mathrm{keV}$ at energies below $30\,\mathrm{keV}$ and $E_{0}\pm5.0\,\mathrm{keV}$ at higher energies. Data outside these windows were considered not to belong to the emission energy currently being compared.
In the case of discrete emission contained in a single bin, the intensity of the emission is given by $$\mathrm{Intensity} = \frac{\mathrm{Events\;in\;bin}}{\mathrm{Number\;of\;simulated\;decays}}.$$ The energy uncertainty is determined by the bin size. The intensity uncertainty is $\Delta{I}=\sqrt{{N}}/N_{\mathrm{sim}}$ with the event number ${N}$ per bin and the simulated number of decays $N_{\mathrm{sim}}$.
Should the emission be distributed over multiple neighboring bins, which is possible due to recoil from previous emissions transferred onto the emitting nucleus, the intensity is calculated from the total number of events in these bins. The evaluated data can indicate that emissions at a different energy, but in proximity to the energy currently being processed, may also have contributed to the number of simulated events. In this case the summed intensities of all emissions in neighboring bins are used for further comparisons. The energy position of these emissions is then set to the median energy of the events; accordingly, its uncertainty is the median deviation.
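A possible implementation of this intensity and line-position estimate, assuming a NumPy histogram and using hypothetical names, could look as follows.

```python
import numpy as np

def line_intensity(hist, edges, e0, window, n_sim):
    """Intensity, statistical uncertainty and median energy of an
    emission line.  hist: binned event counts; e0: expected line
    energy; window: the +/- search interval (e.g. 0.5 keV below
    30 keV, 5 keV above); n_sim: number of simulated decays."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    sel = np.abs(centers - e0) <= window
    n = hist[sel].sum()                 # events over all selected bins
    intensity = n / n_sim
    d_intensity = np.sqrt(n) / n_sim    # Poisson uncertainty
    # median energy of the selected events as the line position
    cum = np.cumsum(hist[sel])
    e_med = centers[sel][np.searchsorted(cum, cum[-1] / 2.0)] if n else e0
    return intensity, d_intensity, e_med
```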
![Percentage of simulated emission within a given deviation from the simulation input.[]{data-label="fig:idev_parser_comp"}](hauf05){width="3.5in"}
These steps provide the energies and intensities of the radiation from the evaluated and simulated data. Absolute and relative energy and intensity deviations can then be calculated. In this way overall comparisons (see Section \[sec:gen\_overview\]) and comparisons of individual isotopes, such as those measured in [@RadDecay2012_2], are possible.
In order to assess the consistency of the simulation model, a complete set of simulations using the statistical approach was compared to the simulations’ input data. Ideally, one would expect no energy deviations at all, and the intensity deviations should be compatible with the statistical error of the simulation. In practice the simulations’ output consists of binned spectra, in order to keep the total data volume manageable. Due to this, in rare cases emissions at nearby energies may not be distinguished properly, which in turn leads to intensity deviations. As is apparent from Fig. \[fig:idev\_parser\_comp\], $99\%$ of all simulated emissions are within an intensity deviation of $3\%$ of the simulation input.
Results: Consistency of the Radioactive Decay Codes with ENSDF {#sec:gen_overview}
==============================================================
In the following we focus upon identifying global trends and exploring the regions in which the radioactive decay simulations give reliable results. A validation with measured data can be found in [@RadDecay2012_2].
For the comparisons presented in the following, knowledge of the comparisons’ uncertainties is important. Fig. \[fig:error\_sum\] shows a compilation of the intensity deviation uncertainties, which depend on the uncertainty of the evaluated intensity for a given level and the statistical uncertainty of the simulated data. As is apparent from the figure, even at low intensities the uncertainties are in a range below $10\%$ for $99\%$ of the data points.
![Distribution of the relative error of the observed deviations between simulated and evaluated radiation intensities with respect to the level intensity. The contour levels correspond to the percentile of values at given deviation error with respect to the number of values at a given level intensity. At higher intensities values are more sparse resulting in the observed “single box” contours.[]{data-label="fig:error_sum"}](hauf06){width="3.5in"}
Intensity Deviations
--------------------
In order to help readers quickly identify the simulations’ intensity discrepancy for isotopes occurring in their own simulations, the results are displayed as nuclide charts (a colored online version is available).
Fig. \[fig:nuclide\_high\_energy\] shows a comparison of the intensity discrepancy of the original Geant4 RDM per-decay sampling alongside the discrepancy of the statistical approach of the RDM-extended when compared to ENSDF data. As is apparent from the figures, both sampling methods reproduce the $\mathrm{\gamma}$- and $\mathrm{\alpha}$-emission intensities given in ENSDF within a few percent deviation for the majority of isotopes. Specifically, the mean deviations $(I_{\mathrm{exp}}-I_{\mathrm{sim}})/I_{\mathrm{exp}}$ amount to $(8.57\pm2.22)\% $ for $\mathrm{\gamma}$-rays and $(2.47\pm2.26)\% $ for $\mathrm{\alpha}$-emission when using the per-decay code.
For the statistical approach the deviations are minimal, with $(1.85\pm2.07)\%
$ for $\mathrm{\gamma}$-rays and $(5.61\pm2.62)\% $ for $\mathrm{\alpha}$-emission. The outliers ($>50\%$ discrepancy) in the simulation using the RDM-extended package can be explained by the way the ENSDF data are parsed into the radioactive data library. In cases where multiple datasets describing the same emission exist, the implemented parser is tuned to pick experimentally determined data or, if such a distinction is not possible, use the first data set. The RADLIST program preferably uses theoretical data according to the documentation [@radlist].
It is further apparent from Fig. \[fig:nuclide\_high\_energy\] that the original per-decay code does not reproduce the ENSDF intensities of conversion electrons well. This manifests itself in a mean deviation of $(35.67\pm6.32)\%$ compared to $(9.54\pm0.98)\%$ for the statistical approach.
For the original Geant4 RDM per-decay code the $\mathrm{\gamma}$- and conversion electron intensity deviations must be attributed to the [ *G4PhotonEvaporation*]{} model or more specifically its underlying data library [@Pia4237413]. A comprehensive verification and validation of this process would be beyond the scope of this paper and was thus not undertaken.
For fluorescence and Auger emission the deviations between the evaluated ENSDF intensities and those produced by the original RDM per-decay approach are even larger, as is apparent from Fig. \[fig:nuclide\_low\_energy\]. For X-ray emission the deviation amounts to $(52.64\pm1.97)\%$; for Auger electrons to $(52.57\pm0.96)\%$. Again, the radioactive decay code alone is not responsible for these offsets, but rather its interplay with the [*G4AtomicDeexcitation*]{} class and its associated EADL data library. It is interesting to note that the deviations are largest for the isotopes on the left of the nuclide chart, i.e. isotopes which decay via electron capture. This substantiates the conclusion that the EADL data library is the source of the deviations, as it is only called by the per-decay approach if electron capture decays need to be sampled (in Geant4 9.4, changed in Geant4 9.5). Again the RDM-extended’s statistical method yields results with smaller deviations, which amount to $(4.09\pm1.78)\%$ for X-ray emission and $(1.34\pm1.16)\%$ for Auger electron emission. This is a more than 10-fold improvement in intensity consistency with respect to the per-decay approach of the original Geant4 RDM. A similar magnitude of deviations is reported in [@pia2011evaluation] for EADL data, in which different binding energy data libraries were compared with reference data. In that work EADL consistently showed the largest deviations.
It should be stressed that the observed intensity deviations for the per-decay approach result mainly from the inconsistency between the data libraries involved in the sampling of nuclear deexcitation and fluorescence emission and the ENSDF database, from which the reference data were derived. For the statistical approach the deviations are naturally smaller, as all radioactive decay data are derived from a single ENSDF library and supplementary atomic data files.
Energy deviations
-----------------
Most application scenarios depend on the energies of $\mathrm{\gamma}$- and X-ray radiation being sampled correctly, and on $\mathrm{\alpha}$-particles as well as Auger and conversion electrons being emitted at the experimentally determined energies. From the comparisons shown in Fig. \[fig:energy\_energy\] and Fig. \[fig:energy\_low\], one can conclude that at energies in the X-ray and Auger-electron range below $30\,\mathrm{keV}$, a deviation of less than $0.2\,\mathrm{keV}$ with respect to ENSDF data is to be expected for all radiation when using the original Geant4 RDM per-decay method. Statistical sampling is again more consistent with ENSDF data; here the observed energy deviation is less than $0.1\,\mathrm{keV}$ for the majority of emissions. For $\mathrm{\gamma}$-radiation, $\mathrm{\alpha}$-particles and conversion electrons at higher energies, deviations of less than $0.5\,\mathrm{keV}$ are observed with the per-decay approach. The statistical approach shows deviations at or below the bin width of $0.2\,\mathrm{keV}$.
Computational Performance {#sec:perf}
=========================
Computational performance is a significant aspect of large-scale Monte-Carlo simulations. The RDM-extended package was designed with this in mind (Section \[sec:bateman\]). The implementation reflects this performance optimization: for instance, it uses hash-maps ([*std::unordered\_map*]{}) from the [*C++11*]{}-standard [@cpp11], which offer element access times that scale as $O(1)$ with the number of elements, instead of $O(\log(n))$ as would be the case for traditional [*std::map*]{} containers (see e.g. [@2010arXiv1012.3292H]). In order to estimate the performance gain achievable with this implementation in comparison to the original RDM, the following performance tests were undertaken:
- Decay an isotope $100\,000$ times using the refactored per-decay and the statistical approach.
- Decay a chain of isotopes $10\,000$ times and retrieve all emission which has occurred from all isotopes in the chain within a sampled time period $\Delta t$, which could be, for instance, an experiment’s measurement duration.

- Decay a chain of isotopes $10\,000$ times and retrieve the emission from a single isotope within the chain occurring within a time period $\Delta t$.
Each of the above tests was repeated $200$ times to ensure that temporary CPU load from the operating system or other processes would not bias the results. For the first test the isotopes $\mathrm{^{22}Na}$($\mathrm{\beta^{+}}$), $\mathrm{^{60}Co}$($\mathrm{\beta^{-}}$), $\mathrm{^{229}Th}$($\mathrm{\alpha}$) and $\mathrm{^{133}Ba}$(EC) were decayed, constituting an example for each of the particle emitting decay types. As an example of a full decay chain, the $\mathrm{^{233}U}$ decay chain shown in Fig. \[fig:decaychain\] was simulated. In order to estimate the performance for different decay chain lengths, different initial nuclei in this chain were chosen.
![The $\mathrm{^{233}U}$ decay chain simulated for performance testing. The initial isotope of the chain was varied from $\mathrm{^{233}U}$ to $\mathrm{^{213}Bi}$ to test different chain lengths.[]{data-label="fig:decaychain"}](hauf22){width="3.in"}
All performance tests were run on a $12$-core [*XEON*]{} machine at $2.93\,\mathrm{GHz}$ under Ubuntu 10.10 [*Maverick Meerkat*]{}, using an identical application based on [Geant4 9.4p04]{} which was unaltered except for the decay code. The [*gcc 4.4.5*]{} compiler with Geant4 standard compiler flags and the extensions of the [*C++11*]{}-standard enabled was used for compilation. The radioactive decay was the only physics simulation process included in the test environment, thus guaranteeing that no other processes, which the involved particles might be subject to during further tracking, would influence the results.
![Computational performance change when using the RDM-extended package with the statistical approach in comparison to the original Geant4 RDM and the per-decay approach. Each simulation consisted of $100\,000$ decays and was repeated $200$ times. Three different sampling techniques are shown: statistical approach, classical approach and classical approach without post-decay fluorescence production. This last approach resembles the original Geant4 RDM implementation.[]{data-label="fig:performance_1"}](hauf23){width="3.5in"}
The relative performance of the RDM-extended statistical sampling method shown in Fig. \[fig:performance\_1\] demonstrates that this method is faster than the per-decay approach implemented in the original Geant4 RDM, in almost all cases by more than $20\,\mathrm{\%}$. Exceptions are isotopes with a large number of deexcitation emissions, such as $\mathrm{^{229}Th}$. Here the linear increase of sampling time with the number of emissions results in a performance penalty.
When comparing the refactored per-decay approach’s decay emission production, i.e. the delegation to [*G4PhotonEvaporation*]{} and [*G4AtomicDeexcitation*]{}, two scenarios have been distinguished. In the first case, labeled “per-decay” in Fig. \[fig:performance\_1\], the vacant shell index, which may be output by the [*G4PhotonEvaporation*]{} class at the end of deexcitation, is passed to [*G4AtomicDeexcitation*]{} for X-ray and Auger-electron production. The original Geant4 RDM code does not pass the atomic shell index, a scenario also simulated with the RDM-extended package and labeled “per-decay w/o fluor.” in Fig. \[fig:performance\_1\]. It is apparent from the figure, especially for $\mathrm{^{229}Th}$, that the inclusion of [*G4AtomicDeexcitation*]{} results in a severe performance penalty. If a full treatment of X-ray and Auger-electron emission is needed and simulation performance is critical, it is thus recommended to use statistical sampling, if the application scenario permits it.
For the decay chain performance comparison shown in Fig. \[fig:performance\_2\] one has to consider that the per-decay approach of the original Geant4 RDM does not take the duration of the time period to be sampled into account. Instead it is up to the user to filter for the relevant emission according to the application scenario. This explains the increase in computing time needed by the RDM-extended between $\mathrm{^{233}U}$ as the initial isotope and $\mathrm{^{229}Th}$. As is shown in Fig. \[fig:decaychain\], thorium has a much shorter half-life than uranium. Regardless of which isotope is taken as the initial one in the chain, the simulated time duration for which decays are sampled stays the same at $3\times10^{12}\,\mathrm{s} = 95120\,\mathrm{yr}$. During this time, much less uranium than thorium will have decayed. Accordingly, when uranium is chosen as the initial isotope, less emission has to be sampled than is the case for thorium. This is reflected in the performance increase of $\sim 55\,\mathrm{\%}$.
Similarly, the statistical sampling in the RDM-extended package is faster if only the emission from selected isotopes in the chain is of interest. In the RDM-extended code only the emissions from these isotopes are actually sampled and passed to tracking, reducing the number of particles which need to be simulated. In the original RDM the user has to filter for the emission of interest after it has been produced by the decay code and passed to tracking. This is inefficient and computing-time intensive, as can be seen from Fig. \[fig:performance\_2\]. An extreme case for such a scenario is when only the emission of the final isotope in a chain is of interest. This case is documented in Fig. \[fig:performance\_2\] by the two “end of chain” data values.
![Computational performance of the RDM-extended package and the original RDM when decaying the $\mathrm{^{233}U}$ decay chain. The chain length was varied by setting different initial nuclei. The two data points labeled end of chain are the performance values for when only the emission of the last isotope in the chain is of interest.[]{data-label="fig:performance_2"}](hauf24){width="3.5in"}
Conclusion
==========
Experimental requirements concerning radioactive decay simulation have been analyzed and evaluated against radioactive decay models and functionality available in present Monte-Carlo codes. It was found that none of the available codes offers the possibility to correctly simulate physics on the per-decay level and to correctly simulate the statistical outcome of many decays without unnecessary overhead within one framework.
A software package which addresses these requirements and allows simulations using both approaches has been designed, implemented and verified with respect to the established reference of the ENSDF evaluated data library. The RDM-extended package described in this paper reproduces the functionality of the original Geant4 RDM package implementing per-decay sampling, although with improved software design and generally faster computational performance. In addition, it encompasses functionality for statistical sampling of radioactive decays: with respect to the per-decay approach of the pre-existing Geant4 RDM, this approach has been verified to achieve better consistency with ENSDF data and better computational performance. Significant consistency improvements have been verified especially in the X-ray regime.
The RDM-extended package can be used transparently in Geant4-based simulation applications. Its experimental validation is documented in a dedicated companion paper [@RadDecay2012_2].
Acknowledgment {#acknowledgment .unnumbered}
==============
The authors would like to acknowledge financial support from the Deutsches Zentrum für Luft- und Raumfahrt (DLR) under grant numbers 50 QR 0902 and 50 QR 1102.
[^1]: Manuscript submitted January 28, 2012. This work has been supported by Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR) under grants 50 QR 0902 and 50 QR 1102.
[^2]: S. Hauf and M. Kuster are with European XFEL GmbH, Hamburg, Germany (e-mail: steffen.hauf@xfel.eu)
[^3]: D.H.H. Hoffmann, P.-M. Lang and S. Neff are with the Institute for Nuclear Sciences, TU Darmstadt, Darmstadt, Germany
[^4]: M.G. Pia and M. Batič are with the INFN Genova, Genova, Italy
[^5]: G. Weidenspointner is with the Max-Planck Halbleiter Labor, Munich, Germany and the Max-Planck Institut f[ü]{}r extraterrestrische Physik, Garching, Germany
[^6]: A. Zoglauer is with the Space Science Laboratory, University of California, Berkeley, CA, USA
[^7]: Z.W. Bell is with the Oak Ridge National Laboratory, Oak Ridge, TN, USA
---
abstract: 'In this work, we investigate the dynamics of a non-local model describing spontaneous cell polarisation. It consists of a drift-diffusion equation set in the half-space, with the coupling involving the trace value on the boundary. We characterize the following behaviours in the one-dimensional case: solutions are global if the mass is below the critical mass and they blow up in finite time above the critical mass. The higher-dimensional case is also discussed. The results are reminiscent of the classical Keller-Segel system in double the dimension. In addition, in the one-dimensional case we prove quantitative convergence results using relative entropy techniques. This work is complemented with a more realistic model that takes into account dynamical exchange of molecular content at the boundary. In the one-dimensional case we prove that blow-up is prevented. Furthermore, the density converges towards a non-trivial stationary configuration.'
author:
- 'Vincent Calvez[^1]'
- 'Rhoda J. Hawkins[^2]'
- 'Nicolas Meunier[^3]'
- 'Raphael Voituriez. [^4]'
title: Analysis of a non local model for spontaneous cell polarisation
---
Introduction
============
Cell polarisation refers generically to a process that enables a cell to switch from a spherically symmetric shape to a state with a preferred axis. Such a phenomenon is an essential step for many biological processes and is involved for instance in cell migration, division, or morphogenesis. While the precise biochemical basis of polarisation can vary greatly, in its early stages polarisation is always characterised by an inhomogeneous distribution of specific molecular markers. Cell polarisation can be driven by an external asymmetric signal, as in the example of chemotaxis, where a chemical gradient imposes the direction of migration of cells [@alberts]. Another example is given by mating yeast, for which the external signal is a pheromone gradient, which causes the cell to grow an elongation known as a shmoo in the direction of the pheromone source [@alberts]. However, observations show that some cellular systems, such as mating yeast, can also polarise spontaneously in the absence of external gradients [@Wedlich-Soldner2003]. These two distinct polarisation processes, [*driven*]{} or [*spontaneous*]{}, are necessary for cells to fulfil different biological functions. However, so far the conditions under which a cell can polarise spontaneously or only in response to an external asymmetric forcing are not well understood.
The molecular basis of cell polarisation has been much discussed in the biological literature over the past decade, and is likely to involve several processes. It is now widely recognised that the cell cytoskeleton plays a crucial role in cell polarisation. The cell cytoskeleton is a network of long semiflexible filaments made up of protein subunits (mainly actin or microtubules). These filaments act as roads along which motor proteins are able to perform a biased ballistic motion and carry various molecules, in a process which consumes the chemical energy of adenosine triphosphate ATP. It is observed that the efficiency of formation of polar caps in yeast, indicating polarisation, is reduced when actin transport is disrupted, and that the polar caps formed are unstable [@Wedlich-Soldner2003; @Wedlich-Soldner2004; @Irazoqui2005]. In the case of neurons, it has been shown that the polarisation of the growth cone is suppressed when microtubules are depolymerised [@Bouzigues2007]. To account for these observations, it is generally argued that the cytoskeleton filaments mediate an effective positive feedback in the dynamics of polarisation markers [@Wedlich-Soldner2003]. This arises from the molecular markers not only diffusing in the cell cytoplasm, but also being actively transported by molecular motors along cytoskeleton filaments, the dynamic organisation of which is regulated by the markers themselves.
From the physical point of view, achieving an inhomogeneous distribution of diffusing molecules without an external asymmetric field, as in the case of spontaneous polarisation, requires either an interaction between the molecules or a driving force that maintains the system out of equilibrium. In the case of the cell cytoskeleton, it is well known that the hydrolysis of ATP acts as a sustained energy input which drives the system out of equilibrium, and one can therefore hypothesize on general grounds that spontaneous polarisation in cells stems from non-equilibrium processes. Cell polarisation has been the subject of a few theoretical studies in recent years. Many models rely on reaction-diffusion systems in which polarisation emerges as a type of Turing instability [@Iglesias2008; @Levine2006; @Onsum2007] and some (e.g. [@Onsum2007; @Wedlich-Soldner2003]) include cytoskeleton proteins as a regulatory factor. However, the full dynamics of markers is generally not considered.
In this article, following the work of [@HBPV], we study a class of models for spontaneous cell polarisation. These models couple the evolution of molecular markers with the dynamics of the cytoskeleton. Namely the markers are assumed to diffuse in the cytoplasm and to be actively transported along the cytoskeleton. The density of molecular markers is denoted by $n(t,x)$. The advection field is denoted by ${\mathbf{u}}(t,x)$. This field is obtained through a coupling with the boundary value of $n(t,x)$.
The cell is represented by the half-space ${\mathcal{H}}= {\mathbb{R}}^{N-1}\times (0,+\infty)$. We denote the space variable by $x = (y,z)$. The time evolution of the molecular markers follows an advection-diffusion equation: $$\label{eq:2D model}
\partial_t n(t,x) = \Delta n(t,x) - \nabla\cdot\left( n(t,x) {\bf u}(t,x)\right) \, , \quad \, t>0\, , \quad x\in {\mathcal{H}}\, .$$
The one-dimensional case
------------------------
We first analyse two different models set on the half-line $(0,+\infty)$. In the simplified version, the advection field is given by ${\mathbf{u}}(t,z) = - n(t,0)$. Active transport occurs at a uniform speed, given by the value of the density at $z = 0$.
### The simplified model
The model reads as follows. $$\label{eq1D}
\partial _t n(t,z) = \partial _{zz} n(t,z) +n(t,0) \partial _z n(t,z)\, , \quad t >0\, , \, z\in (0,+\infty)\, ,$$ together with the zero-flux boundary condition at $z = 0$: $$\label{cl1D}
\partial _z n (t,0)+n(t,0)^2=0\, .$$ We have formally conservation of molecular content: $$M =\int_{z>0} n_0(z){{\, \rm d}}z = \int_{z>0} n(t,z){{\, \rm d}}z \, .$$
Solutions of (\[eq1D\]) may become unbounded in finite time (so-called blow-up). This occurs if the mass $M$ is above the critical mass: $M>1$. In the case $M< 1$, the solution converges to 0. In the critical case $M = 1$ there exists a family of stationary states parametrized by the first moment. The solution converges to the stationary state corresponding to the first moment of the initial condition $\int_{z>0} z n_0(z){{\, \rm d}}z$.
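As a purely numerical illustration of this dichotomy (not part of the analysis that follows), one can discretise (\[eq1D\])–(\[cl1D\]) in conservative form, $\partial_t n = \partial_z(\partial_z n + n(t,0)\, n)$, on a truncated domain. In the sketch below, the trace $n(t,0)$ is approximated by the first cell average, and all grid parameters and the short final time are ad hoc choices made for this experiment.

```python
import numpy as np

def evolve(M, alpha=2.0, L=10.0, J=200, T=0.01):
    """Explicit finite-volume scheme for
        d_t n = d_z( d_z n + n(t,0) n )  on (0, L),
    with zero flux at both ends (the right end stands in for z = +infinity).
    The trace n(t,0) is approximated by the first cell average.
    Initial datum: n_0(z) = M * alpha * exp(-alpha z), of mass close to M."""
    dz = L / J
    z = (np.arange(J) + 0.5) * dz              # cell centres
    n = M * alpha * np.exp(-alpha * z)
    dt = 0.4 * dz**2                           # explicit diffusive time step
    for _ in range(int(round(T / dt))):
        n0 = n[0]                              # proxy for the trace n(t,0)
        # flux d_z n + n(t,0) n at the interior interfaces
        F = (n[1:] - n[:-1]) / dz + n0 * 0.5 * (n[1:] + n[:-1])
        F = np.concatenate(([0.0], F, [0.0]))  # zero-flux boundary condition
        n = n + dt / dz * (F[1:] - F[:-1])     # conservative update
    return z, n, dz

z, n_sup, dz = evolve(M=2.0)   # super-critical: density piles up at z = 0
z, n_sub, _  = evolve(M=0.5)   # sub-critical: density spreads out
```

The scheme conserves the discrete mass exactly; in the super-critical run the boundary value grows (consistent with blow-up), while in the sub-critical run it decays.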
\[th:1D\] Assume that the initial data $n_0$ satisfies both $n_0 \in L^1(( 1 + z){{\, \rm d}}z)$ and $\int_{z>0} n_0(z) (\log n_0(z))_+ {{\, \rm d}}z< + \infty$. Assume in addition that $M\leq 1$, then there exists a global weak solution (in the sense of Definition \[def:weak\]) that satisfies the following estimates for all $T>0$, $$\begin{aligned}
\sup_{t\in (0,T)} \int_{z>0} n(t,z) (\log n(t,z))_+ {{\, \rm d}}z &<& +\infty\, ,\\
\int_0^T\int_{z>0}n(t,z) \left( \partial_z \log n(t,z) \right)^2 {{\, \rm d}}z {{\, \rm d}}t &<& +\infty\, . \end{aligned}$$ In the sub-critical case $M<1$ the solution strongly converges in $L^1$ towards the self-similar profile $G$ given by (\[eq:stat state rescaled\]) in the following sense: $$\lim_{t\to +\infty }\left\|n(t,z) - \frac{1}{\sqrt{1+ 2t}} G\left(\frac{ z}{\sqrt{1+ 2t}}\right) \right\|_{L^1} = 0\, .$$ In the critical case $M = 1$, assuming in addition that the second moment is finite $\int_{z>0} z^2n_0(z){{\, \rm d}}z<+\infty$, the solution strongly converges in $L^1$ towards a stationary state $\alpha \exp(-\alpha z)$, where $\alpha^{-1} = \int_{z>0} z n_0(z){{\, \rm d}}z$.\
\[th:1D BU\] Assume $M>1$. Any weak solution with non-increasing initial data $n_0$ blows-up in finite time.\
In the present biological context, blow-up of solutions is interpreted as polarisation of the cell. Indeed there is a strong instability driving the system towards an inhomogeneous state.
In Section \[secvariants\], we present analogous blow-up results in the case of a finite interval $z \in (0,L)$ or finite range of action.
Such a critical mass phenomenon (global existence [*versus*]{} blow-up) has been widely studied for the Keller-Segel system (also known as the Smoluchowski-Poisson system) in two dimensions of space [@BDP; @P]. The equation (\[eq1D\]) is in some sense a caricature of the classical Keller-Segel system on the half-line $(0,+\infty)$. Note that there exist other ways to mimic the two-dimensional case in one dimension [@CPS; @CieslakLaurencot].
There is a strong connection between the equation of interest here (\[eq1D\]) and the one-dimensional Stefan problem. The latter reads [@HV1]: $$\left\{\begin{array}{l}
\partial_t u(t,z) = \partial_{zz} u(t,z) \, , \quad \, t>0\, , \, z\in (-\infty,s(t))\, , \\
\lim_{z\to -\infty}\partial_zu (t,z) = 0 \, , \quad u(t,s(t)) = 0\, , \quad \partial_z u (t,s(t)) = -s'(t)\, .
\end{array}\right.$$ The temperature is initially non-negative: $u(0,z) = u_0(z)\geq 0$. By performing the change of variables $\phi(t,z) = - u(t,s(t)-z)$, we get an equation that is linked to (\[eq:1D\]) by $n(t,z) = \partial_z \phi(t,z)$. This connection provides some insights concerning the possible continuation of solutions after blow-up [@HV1]. This question has attracted considerable interest in recent years [@HV2; @V1; @V2; @DS]. It is postulated in [@HV1] that the one-dimensional Stefan problem is generically not continuable after the blow-up time.
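This correspondence can be verified by a direct formal computation (a sketch only, assuming enough regularity of $u$ and $s$):

```latex
% Set phi(t,z) = -u(t, s(t) - z) for z > 0. By the chain rule,
%   d_z phi = d_x u,   d_zz phi = -d_xx u,
%   d_t phi = -d_t u - s'(t) d_x u = d_zz phi - s'(t) d_z phi,
% using the heat equation d_t u = d_xx u. The free-boundary conditions give
%   phi(t,0) = -u(t,s(t)) = 0,   d_z phi(t,0) = d_x u(t,s(t)) = -s'(t).
% Hence n = d_z phi satisfies
\begin{align*}
\partial_t n(t,z) &= \partial_{zz} n(t,z) - s'(t)\,\partial_z n(t,z)
                   = \partial_{zz} n(t,z) + n(t,0)\,\partial_z n(t,z)\, ,\\
\partial_z n(t,0) &= \partial_{zz}\phi(t,0)
                   = \partial_t \phi(t,0) + s'(t)\,\partial_z \phi(t,0)
                   = -s'(t)^2 = -n(t,0)^2\, ,
\end{align*}
% which is exactly the system (eq:1D), the trace n(t,0) playing the
% role of the (negative) free-boundary speed -s'(t).
```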
### The model with dynamical exchange of markers at the boundary
The boundary condition (\[cl1D\]) turns out to be unrealistic from a biophysical viewpoint. This claim is emphasized by the possible occurrence of blow-up in finite time. On the way towards a more realistic model, we distinguish between the cytoplasmic content $n(t,z)$ and the concentration of trapped molecules on the boundary at $z=0$: $\mu(t)$. Then the exchange of molecules at the boundary is described by very simple kinetics: $$\frac {{{\, \rm d}}}{{{\, \rm d}}t} \mu(t)= n(t,0)- \gamma \mu(t)\, .$$ The transport speed is modified accordingly: ${\mathbf{u}}(t,z) = - \mu(t)$. The model reads: $$\left\{\begin{array}{l}
\partial _t n (t,z)=\partial _{zz} n (t,z) +\mu (t) \partial _z n (t,z) \, , \quad t >0\, , \, z\in (0,+\infty) \medskip\\
\partial _z n (t,0)+\mu(t) n(t,0) = \frac d{dt} \mu(t) \, .
\end{array}\right.$$
The flux condition on the boundary ensures the conservation of molecular content. Denoting $m(t) = \int_{z>0} n(t,z){{\, \rm d}}z$ the partial mass of cytoplasmic markers, we have: $$M = \mu_0 + m_0 = \mu(t) + m(t) \, .$$
Since the transport speed is bounded, $\mu(t)\leq M$, we clearly have global existence of solutions for any mass $M>0$. We can make the asymptotic behaviour precise in the super-critical case $M>1$. This is the purpose of the following Theorem.\
\[th:long time critical1\] Assume that the initial data $n_0$ satisfies both $n_0 \in L^1(( 1 + z){{\, \rm d}}z)$ and $\int_{z>0} n_0(z) (\log n_0(z))_+ {{\, \rm d}}z< + \infty$. Assume the mass is super-critical $M> 1$. The partial mass $m(t)$ converges to 1 and the density $n(t,z)$ strongly converges in $L^1$ towards the exponential profile $(M-1) e^{-(M-1)z}$.
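The conservation relation $M = \mu(t) + m(t)$ is easy to observe numerically. The sketch below extends the explicit finite-volume discretisation used for the simplified model with the boundary reservoir $\mu(t)$; the choice $\gamma = 1$, the initial split $\mu_0 = 0$, and the grid parameters are illustrative assumptions, not taken from the text.

```python
import numpy as np

def evolve_exchange(M, gamma=1.0, alpha=1.0, L=10.0, J=200, T=1.0):
    """Finite-volume sketch of the model with dynamical boundary exchange:
        d_t n = d_z( d_z n + mu(t) n ),   mu'(t) = n(t,0) - gamma mu(t),
    where the flux at z = 0 equals mu'(t), so that mu + int n is conserved.
    Initially mu_0 = 0 and n_0(z) = M * alpha * exp(-alpha z)."""
    dz = L / J
    z = (np.arange(J) + 0.5) * dz
    n = M * alpha * np.exp(-alpha * z)
    mu = 0.0
    dt = 0.4 * dz**2
    for _ in range(int(round(T / dt))):
        n0 = n[0]                              # proxy for the trace n(t,0)
        exchange = n0 - gamma * mu             # = mu'(t), flux into the reservoir
        F = (n[1:] - n[:-1]) / dz + mu * 0.5 * (n[1:] + n[:-1])
        F = np.concatenate(([exchange], F, [0.0]))
        n = n + dt / dz * (F[1:] - F[:-1])
        mu = mu + dt * exchange                # reservoir gains what the bulk loses
    return z, n, mu, dz

z, n, mu, dz = evolve_exchange(M=2.0)   # mass splits between mu and the bulk
```

By construction the discrete update moves mass between the reservoir and the first cell at exactly opposite rates, so the total $\mu + \sum_i n_i\,\Delta z$ is conserved to rounding error.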
The higher-dimensional case
---------------------------
In the higher-dimensional case $N\geq 2$, we only partially analyse simplified models in which the transport speed is directly computed from the trace value $n(t,y,0)$. Equation (\[eq:2D model\]) is complemented with the zero-flux boundary condition: $$\label{cl:ajout:nico}
\partial_z n (t,y,0) - n(t,y,0){\mathbf{u}}(t,y,0)\cdot {\bf e}_z =0\,, \quad y \in {\mathbb{R}}^{N-1}\, .$$ We have formally conservation of the molecular content: $$M =\int_{{\mathcal{H}}} n_0(x){{\, \rm d}}x = \int_{{\mathcal{H}}} n(t,x){{\, \rm d}}x\, .$$
Following [@HBPV] we make the distinction between two possible choices for the advection speed ${\bf u}$. In the [**transversal case**]{}, the field ${\mathbf{u}}$ is normal to the boundary: $$\label{eq:u1}
{\mathbf{u}}(t,y,z) = - n(t,y,0) {\bf e}_z\, .$$ This corresponds to a particular orientation of the cytoskeleton, modelling the microtubules. Indeed microtubules are very rigid filaments whose persistence length is larger than the typical size of yeast cells.
In the [**potential case**]{}, the field ${\mathbf{u}}$ derives from a harmonic potential. The source term of the potential is located on the boundary: $$\label{eq:u2}
{\mathbf{u}}(t,x) = \nabla c(t,x) \, , \quad \mbox{where}\quad \left\{\begin{array}{rl} -\Delta c(t,x) &= 0\, ,\medskip \\ - \partial_z c(t,y,0) &= n(t,y,0)\, . \end{array}\right.$$ This corresponds to another orientation of the cytoskeleton, modelling the actin network. Indeed the actin network is a diffuse network in which orientations are mixed. In dimension $N=1$, observe that the two choices (\[eq:u1\]) and (\[eq:u2\]) coincide.
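The coincidence of the two fields in dimension $N=1$ can be seen directly:

```latex
% In dimension N = 1 there is no tangential variable y, and the potential
% case reduces to c''(t,z) = 0 on (0,+infty) with -c'(t,0) = n(t,0).
% Selecting the solution with bounded gradient forces c' to be constant:
\begin{align*}
\partial_{zz} c(t,z) = 0
\;\Longrightarrow\;
\partial_z c(t,z) \equiv \partial_z c(t,0) = -n(t,0)
\;\Longrightarrow\;
{\mathbf{u}}(t,z) = \partial_z c(t,z) = -n(t,0)\, ,
\end{align*}
% which is exactly the transversal field of (eq:u1).
```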
In dimension $N\geq 2$, we state global existence for small initial data. The criteria are identical for the two possible choices of the advection field (\[eq:u1\]) or (\[eq:u2\]). This is a consequence of the two common features: both fields are divergence free and possess the same normal component at the boundary.\
\[thdim2\] Assume that the advection field satisfies the two following conditions: $\nabla\cdot {\mathbf{u}}\geq 0$ and ${\mathbf{u}}(t,y,0)\cdot {\mathbf{e}}_z = n(t,y,0)$. Assume that the initial data $n_0$ satisfies $n_0 \in L^1(( 1 + |x|^2){{\, \rm d}}x)$ and that $\|n_0\|_{L^N}$ is smaller than some constant $c_N$ depending only on the dimension $N$. Then there exists a global weak solution to (\[eq:2D model\]) and (\[cl:ajout:nico\]).\
Notice that both conditions $\nabla\cdot {\mathbf{u}}\geq 0$ and ${\mathbf{u}}(t,y,0)\cdot {\mathbf{e}}_z = n(t,y,0)$ are fulfilled in (\[eq:u1\]) and (\[eq:u2\]).\
\[th2dim2\] Assume that $n(t,x)$ is a strong solution to (\[eq:2D model\]) which verifies:
- $\partial_z n(t,x) \leq 0$ for all $x\in {\mathcal{H}}$ and $t>0$ when the advective field is given by (\[eq:u1\]),
- $\partial_z n(t,x) \leq 0$ and for all $x\in {\mathcal{H}}$ and $t>0$, the matrix $A(t,x) = x \, \otimes \, \partial_z \nabla_y \log n(t,x)$ satisfies $A^T + A \geq 0$ (in the matrix sense) when the advective field is given by (\[eq:u2\]).
Assume in addition that the second moment is initially small enough: there exists a constant $C_N$ depending only on the dimension such that $\int_{x\in {\mathcal{H}}} |x|^2 n_0(x){{\, \rm d}}x \leq C_N M^{\frac{N+1}{N-1}} $. Then the maximal time of existence of the solution is finite.\
#### Open questions
We end this introductory Section with some open questions that we are not able to resolve. (i) Obtain a rate for the convergence in relative entropy in Theorem \[th:1D\] for the cases $M = 1$ and $M<1$. (ii) Prove blow-up for the systems (\[eq:2D model\])–(\[eq:u2\]) with large initial data without any monotonicity assumption on the density $n(t,x)$.
The outline of the paper is as follows. In Section \[secdim1\], we analyse the one-dimensional case in full detail. In Section \[secvariants\] we study some variants of blow-up criteria in the one-dimensional case. In Section \[sec:ODE/PDE\], we study a model with a flux of markers at the boundary in the one-dimensional case. In Section \[secdimsup\], we analyse the higher-dimensional case.
Results in the one-dimensional case have been announced in the note [@CalvezMeunier].
The boundary Keller-Segel (BKS) equation in dimension $N = 1$ {#secdim1}
=============================================================
In this Section we study the following equation, $$\label{eq:1D}
\left\{\begin{array}{l}
\partial _t n(t,z) = \partial _{zz} n(t,z) +n(t,0) \partial _z n(t,z)\, , \quad t >0\, , \, z\in (0,+\infty)\, ,\medskip\\
\partial _z n (t,0)+n(t,0)^2=0\, ,
\end{array}\right.$$ and we prove Theorems \[th:1D\] and \[th:1D BU\]. More precisely, in Section \[sec:M<1\] we prove the existence of a global weak solution for $M\le 1$. Then in Section \[sec:BU\] we prove blow-up in the case $M>1$.
We begin with a proper definition of weak solutions, adapted to our context.
\[def:weak\] We say that $n(t,z)$ is a weak solution of (\[eq:1D\]) on $(0,T)$ if it satisfies: $$n\in L^\infty(0,T;L^1_+({\mathbb{R}}_+))\, , \quad \partial_z n \in L^1((0,T)\times {\mathbb{R}}_+) \, , \label{eq:flux L1}$$ and $n(t,z)$ is a solution of (\[eq:1D\]) in the sense of distributions in $\mathcal D'({\mathbb{R}}_+)$.
Since the flux $(\partial_z n(t,z) + n(t,0) n(t,z))$ belongs to $ L^1((0,T)\times {\mathbb{R}}_+)$, the solution is well-defined in the distributional sense under assumption (\[eq:flux L1\]). In fact we can write $\int_0^T n(t,0) {{\, \rm d}}t = - \int_0^T\int_{z>0} \partial_z n(t,z){{\, \rm d}}z{{\, \rm d}}t$.
Weak solutions in the sense of Definition \[def:weak\] are mass-preserving: $$M =\int_{z>0} n_0(z){{\, \rm d}}z = \int_{z>0} n(t,z){{\, \rm d}}z\, .$$ The proof closely follows the arguments of the next Lemma which is concerned with moment growth.\
\[Moment growth\] Assume $n(t,z)$ is a weak solution of (\[eq:1D\]). Assume in addition that $z n_0\in L^1({\mathbb{R}}_+)$. Then the following identity holds true: $$\label{faible11}
\int _{z>0} z n(T,z) {{\, \rm d}}z = \int _{z>0} z n_0(z) {{\, \rm d}}z + \int _0^T \left(1-\int_{z>0} n(t,z) {{\, \rm d}}z\right)n(t,0) {{\, \rm d}}t\, .$$
Consider a smooth, non-negative cutoff function $\chi(z)$ such that $\chi(z)=1$ if $0\le z\le 1$ and $\chi(z)=0$ if $z\ge 2$. Define the family of functions $(\varphi_\varepsilon )_\varepsilon$ by $\varphi_\varepsilon(z) = z \chi({\varepsilon}z )$. We recall the weak formulation: $$\begin{aligned}
\int _{z>0} n(T,z)\varphi_\varepsilon(z){{\, \rm d}}z &=& \int _{z>0} n_0(z)\varphi_\varepsilon(z){{\, \rm d}}z \\
& & - \int_0^T \int _{z>0} \left( \partial _{z} n(t,z) +n(t,0) n(t,z)\right) \varphi_\varepsilon'(z){{\, \rm d}}z {{\, \rm d}}t\, . \end{aligned}$$ The function $\varphi_{\varepsilon}(z)$ converges monotonically to $z$ as ${\varepsilon}\to 0$, hence from the monotone convergence theorem, we deduce that $z n(T,z)\in L^1$.
The function $\varphi_{\varepsilon}'(z) = \chi({\varepsilon}z) + {\varepsilon}z \chi'({\varepsilon}z)$ is bounded in $L^\infty$ uniformly in ${\varepsilon}$ and it converges to 1 a.e. Since $n(\cdot, 0) n \in L^1((0,T)\times {\mathbb{R}}_+)$ and $\partial_z n \in L^1((0,T)\times {\mathbb{R}}_+)$, from Lebesgue’s dominated convergence theorem, it follows that $$\begin{aligned}
\lim _{\varepsilon \to 0} \int_0 ^T \int _{z>0} \varphi'_{\varepsilon} (z) n(t,0)n(t,z){{\, \rm d}}z {{\, \rm d}}t & =& \int _0^T \int_{z>0} n(t,0) n(t,z) {{\, \rm d}}z{{\, \rm d}}t \,, \\
\lim _{\varepsilon \to 0} \int_0 ^T \int _{z>0} \varphi '_\varepsilon \left(z\right) \partial _z n(t,z) {{\, \rm d}}z {{\, \rm d}}t & =& \int_0 ^T \int _{z>0} \partial _z n(t,z) {{\, \rm d}}z{{\, \rm d}}t = - \int_0 ^T n(t,0) {{\, \rm d}}t\, . \end{aligned}$$
Global existence for sub-critical mass $M< 1$ {#sec:M<1}
---------------------------------------------
### A priori estimates
Our next result is concerned with the derivation of a priori bounds for solutions to (\[eq:1D\]) in the classical sense.\
\[apriori\] Let $n$ be a classical solution to (\[eq:1D\]). If $M<1$, then the following estimate holds true for some $\delta>0$ and for all $t\in (0,T)$: $$\begin{aligned}
\int _{z>0}n(t,z) (\log n (t,z))_+ {{\, \rm d}}z + \delta \int _{0}^t\int _{z>0} n(s,z)(\partial_z \log n (s,z))^2 {{\, \rm d}}z{{\, \rm d}}s \nonumber \\
\leq \int_{z>0} n_0(z)\left(\log n_0(z)\right) _+ {{\, \rm d}}z + \int_{z>0} z n_0(z) {{\, \rm d}}z + C(T)\, . \label{eq:main estimate} \end{aligned}$$
We first derive the following trace-type inequality. $$\begin{aligned}
n(t,0)^2 &=& \left( \int_{z>0} \partial_z n(t,z) {{\, \rm d}}z \right)^2 \nonumber \\
\label{ineq:trace:0}
& \le &\left(\int_{z>0}n(t,z) {{\, \rm d}}z \right) \left(\int_{z>0}n(t,z) \left( \partial_z \log n(t,z) \right)^2 {{\, \rm d}}z \right) \, .\end{aligned}$$ We compute the evolution of the entropy $$\begin{aligned}
\frac{ {{\, \rm d}}}{ {{\, \rm d}}t} \int_{z>0} n(t,z)\log n(t,z) {{\, \rm d}}z & = &\int _{z>0} \partial _t n(t,z) \log n(t,z) {{\, \rm d}}z \nonumber \\
& = &- \int _{z>0} \left(\partial_z n (t,z) + n(t,0) n(t,z)\right) \frac{\partial_z n(t,z) }{n(t,z)} {{\, \rm d}}z \nonumber
\\&
=& - \int _{z>0} n(t,z)\left( \partial_z \log n(t,z) \right)^2 {{\, \rm d}}z + n(t,0) ^2 \, .
\label{eq:entropy dissipation}\end{aligned}$$ The two contributions are competing. We estimate the balance using inequality (\[ineq:trace:0\]): $$ \frac{ {{\, \rm d}}}{ {{\, \rm d}}t} \int_{z>0} n(t,z)\log n(t,z) {{\, \rm d}}z \leq (M-1) \int _{z>0} n(t,z)\left( \partial_z \log n(t,z) \right)^2 {{\, \rm d}}z \, .$$ In contrast to the classical two-dimensional Keller-Segel equation, the dissipation of entropy directly yields the sharp criterion on the mass. There is no need to seek a free energy as in [@BDP] (and references therein). To control the negative part of the entropy, we use the following Lemma adapted from [@BDP; @Calvez.Corrias.Ebde].\
\[BDP\] For any $f\in L^1_+({\mathbb{R}}_+, (1+z){{\, \rm d}}z)$, if $\int f\log f <+\infty $, then $ f\log f $ is in $ L^1({\mathbb{R}}_+)$ and for all $\alpha >0$, the following inequality holds true: $$\label{eq:carleman}
\int_{z>0} f(z)(\log f(z))_+ {{\, \rm d}}z \leq \int_{z>0} f(z) \left( \log f(z)+ \alpha z \right) {{\, \rm d}}z+\frac{1}{\alpha e}
\, .$$
Let $\overline {f} =f \mathds{1}_{f\le 1}$ and $m =\int_{z>0} \overline{f}(z){{\, \rm d}}z$. We build up the relative entropy between $\overline f$ and $\alpha e^{-\alpha z}$. $$\int_{z>0} \overline {f}(z)\left(\log \overline {f}(z) +\alpha z\right) {{\, \rm d}}z = \int_{z>0} \frac{\overline {f}(z)}{\alpha e^{-\alpha z}} \log \left(\frac{\overline {f}(z)}{\alpha e^{-\alpha z}}\right)\alpha e^{-\alpha z} {{\, \rm d}}z+m \log \alpha \, .$$ Using Jensen’s inequality, we deduce that $$\begin{aligned}
& &\int_{z>0} \frac{\overline {f}(z)}{\alpha e^{-\alpha z}} \log \left(\frac{\overline {f}(z)}{\alpha e^{-\alpha z}}\right)\alpha e^{-\alpha z} {{\, \rm d}}z\\
& &\qquad \qquad \qquad \ge \left(\int_{z>0} \frac{\overline {f}(z)}{\alpha e^{-\alpha z}}\alpha e^{-\alpha z} {{\, \rm d}}z \right)\log \left(\int_{z>0} \frac{\overline {f}(z)}{\alpha e^{-\alpha z}}\alpha e^{-\alpha z} {{\, \rm d}}z\right)\\
& & \qquad \qquad \qquad = m \log m \, .\end{aligned}$$ Therefore, $$\int_{z>0} \overline {f}(z)\log \overline {f}(z){{\, \rm d}}z +\alpha \int_{z>0} z \overline {f}(z) {{\, \rm d}}z \ge m \log \left( \alpha m \right) \ge -\frac{1}{\alpha e } \, .$$ Using $$\int_{z>0}f(z)(\log f(z))_+ {{\, \rm d}}z = \int_{z>0} f(z) \log f(z) {{\, \rm d}}z- \int_{z>0} \overline {f}(z) \log \overline {f}(z) {{\, \rm d}}z \, ,$$ this completes the proof of Lemma \[BDP\].
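A quick numerical sanity check of the inequality (\[eq:carleman\]) is straightforward (purely illustrative; the exponential test profiles, the truncated grid, and the midpoint quadrature are arbitrary choices):

```python
import numpy as np

dz = 0.0005
z = (np.arange(160000) + 0.5) * dz      # midpoints covering (0, 80)

def entropy_gap(f, dz, z, alpha):
    """RHS minus LHS of the inequality of Lemma [BDP], by midpoint quadrature:
        int f (log f)_+ dz  <=  int f (log f + alpha z) dz + 1/(alpha e).
    A non-negative return value means the inequality holds for this f."""
    logf = np.log(f)                    # f > 0 on the grid by construction
    lhs = (f * np.maximum(logf, 0.0)).sum() * dz
    rhs = (f * (logf + alpha * z)).sum() * dz + 1.0 / (alpha * np.e)
    return rhs - lhs

# example: for f(z) = exp(-z) and alpha = 1 the integrand f(log f + z)
# vanishes pointwise, so the gap equals exactly 1/(alpha e)
gap = entropy_gap(np.exp(-z), dz, z, alpha=1.0)
```

For $f(z)=e^{-z}$ the inequality is saturated up to the additive constant, which makes the $1/(\alpha e)$ term visible in isolation.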
Let us now estimate the first moment. Recalling (\[faible11\]), we deduce that $$\begin{aligned}
\int_{z>0} z n(t,z) {{\, \rm d}}z &\le &\int_{z>0} z n_0(z) {{\, \rm d}}z+ \int _0^t n(s,0) {{\, \rm d}}s \nonumber \\
&\le &
\int_{z>0} z n_0(z) {{\, \rm d}}z+\frac{T}{4\delta'} + \delta' \int _0^t n(s,0)^2 {{\, \rm d}}s \, , \nonumber \\
&\le & \int_{z>0} z n_0(z) {{\, \rm d}}z+\frac{T}{4\delta'} \nonumber \\
& &\qquad \qquad+ \delta' \int _0^t \int_{z>0}n(s,z) \left( \partial_z \log n(s,z) \right)^2 {{\, \rm d}}z {{\, \rm d}}s \, .
\label{ineq:hyper}\end{aligned}$$ Combining (\[eq:entropy dissipation\]), (\[eq:carleman\]) and (\[ineq:hyper\]) with $\alpha = 1$ we obtain that $$\begin{aligned}
\int_{z>0} n(t,z)\left(\log n(t,z)\right) _+ {{\, \rm d}}z + (1- M - \delta')\int_0^t\int_{z>0}n(s,z) \left( \partial_z \log n(s,z) \right)^2 {{\, \rm d}}z {{\, \rm d}}s \\
\leq \int_{z>0} n_0(z)\log n_0(z) {{\, \rm d}}z + \int_{z>0} z n_0(z) {{\, \rm d}}z + \frac{1}{ e} + \frac{ T}{4\delta'} \, .\end{aligned}$$ Since $M<1$ we can choose $\delta'>0$ such that (\[eq:main estimate\]) holds.
### Regularization procedure
To prove existence of weak solutions in the sense of Definition \[def:weak\] we perform a classical regularization procedure. We carefully choose our function spaces in order to end up with minimal assumptions on the initial data. We introduce $$a^{\varepsilon}(t) = \int _{z>0} \phi_{\varepsilon}(z) n^{\varepsilon}(t,z) {{\, \rm d}}z\,,$$ where $\phi_{\varepsilon}$ is an approximation to the identity. We have formally $a^{\varepsilon}(t) \to n (t,0)$ as ${\varepsilon}\to 0$.
We consider the following regularized problem $$\label{eq:1D:reg}
\left\{\begin{array}{l}\partial _t n^{\varepsilon}(t,z) = \partial _{zz} n^{\varepsilon}(t,z) +a^{\varepsilon}(t) \partial _z n^{\varepsilon}(t,z)\, , \medskip\\
\partial _{z} n^{\varepsilon}(t,0) +a^{\varepsilon}(t) n^{\varepsilon}(t,0)=0\,.
\end{array}\right.$$ Our aim is to extend the main [*a priori*]{} estimate (\[eq:main estimate\]) to the regularized problem (\[eq:1D:reg\]). We check that $$a^{\varepsilon}(t) = - \int _{z>0} \phi_{\varepsilon}(z) \int _{y=z}^{+\infty}\partial_z n^{\varepsilon}(t,y){{\, \rm d}}y {{\, \rm d}}z \leq \int _{z>0} |\partial_z n^{\varepsilon}(t,z)|{{\, \rm d}}z \, .$$ Thus the following inequality replaces (\[ineq:trace:0\]): $$a^{\varepsilon}(t) n^{\varepsilon}(t,0) \le M \left(\int_{z>0}n^{\varepsilon}(t,z) \left( \partial_z \log n^{\varepsilon}(t,z) \right)^2 {{\, \rm d}}z \right) \, .$$ On the other hand the moment growth estimate relies only on the diffusion contribution. We have accordingly, $$\begin{aligned}
\label{eq:moment eps}
\int_{z>0} z n^{\varepsilon}(t,z) {{\, \rm d}}z &\leq & \int_{z>0} z n_0(z) {{\, \rm d}}z+\frac{T}{4\delta '} \\
& & + \delta ' \int _0^t \int_{z>0}n^{\varepsilon}(s,z) \left( \partial_z \log n^{\varepsilon}(s,z) \right)^2 {{\, \rm d}}z {{\, \rm d}}s \, .\nonumber \end{aligned}$$ It is then straightforward to justify (\[eq:main estimate\]) for the regularized solution $n^{\varepsilon}$ in the line of Proposition \[apriori\]. There exists $\delta >0$ such that $$\begin{aligned}
\int _{z>0}n^{\varepsilon}(t,z) (\log n^{\varepsilon}(t,z))_+ \, {{\, \rm d}}z + \delta \int _{0}^t\int _{z>0} n^{\varepsilon}(s,z)(\partial_z \log n^{\varepsilon}(s,z))^2 \, {{\, \rm d}}s{{\, \rm d}}z \nonumber \\
\leq \int_{z>0} n_0(z)\left(\log n_0(z)\right) _+ {{\, \rm d}}z + \int_{z>0} z n_0(z) {{\, \rm d}}z + C(T)\, . \label{eq:main estimate eps} \end{aligned}$$
### Time compactness {#sec:aubin}
Passing to the limit as ${\varepsilon}\to 0$, the main difficulty lies in the nonlinear term $a^\varepsilon(t)\partial _z n^\varepsilon(t,z)$. We need some compactness to proceed further. It is provided by the Aubin-Simon Lemma, see [@Aubin1963; @Lions1969; @Simon].\
\[AubinLions\] Let $X\subset B \subset Y$ be Banach spaces such that the embedding $X\subset B$ is compact. Assume that the set of functions $\mathcal F$ satisfies: $\mathcal F$ is bounded in $L^2(0,T;X)$ and $\partial_t f$ is uniformly bounded in $L^2(0,T;Y)$. Then $\mathcal F$ is relatively compact in $L^2(0,T;B)$.\
The natural choice for spaces in our context would be $\widetilde X = W^{1,1}({\mathbb{R}}_+) $ and $B = \mathcal{C}^{0}({\mathbb{R}}_+)$ (up to the decay problem at infinity). However, due to the possible appearance of jumps, the embedding $\widetilde X \subset B$ is not compact. Using the entropy estimate (\[eq:main estimate eps\]) we are able to modify the space $\widetilde X$ in order to make the embedding $X\subset B$ compact. The crucial point is to obtain an equi-continuity condition weaker than any Hölder condition, in the spirit of [@Adams Theorem 8.36].
\[lem:I(N)\] Assume $\mathcal F$ is a set of non-negative bounded functions in the following sense: there exists a constant $A >0$ such that for all $f\in \mathcal F$ $$\sup_{t\in (0,T)} \int_{z>0} f(t,z) (\log f(t,z))_+{{\, \rm d}}z \leq A \, , \quad \int_0^T \int_{z>0}f(t,z) \left( \partial_z \log f(t,z) \right)^2 {{\, \rm d}}z {{\, \rm d}}t \leq A\, .$$ Then there exists a continuous function $\eta:{\mathbb{R}}_+\to {\mathbb{R}}_+$ and a constant $A'$ depending on $A$ such that $\eta(0) = 0$ and for every function $f\in \mathcal F$ we have $$\int_0^T \left( \sup_{x\neq y} \frac{|f (t,y) - f(t,x)|}{\eta(y - x)}\right)^2{{\, \rm d}}t \leq A'\, . \label{eq:equicont}$$
First for $x<y$ we have that $$\begin{aligned}
|f(t,y) - f(t,x)|^2 & \leq & \left(\int_x^y |\partial_z f (t,z)|{{\, \rm d}}z\right)^2 \nonumber \\
&\leq & \left( \int_x^y f(t,z){{\, \rm d}}z\right) \left(\int_{z>0}f(t,z) \left( \partial_z \log f(t,z) \right)^2 {{\, \rm d}}z \right) \,. \label{eq:embedding}\end{aligned}$$
We use Jensen’s inequality for $x<y$: $$\begin{aligned}
\left( \frac{1}{y-x}\int_x^y f(t,z){{\, \rm d}}z \right) \log\left(\frac{1}{y-x}\int_x^y f(t,z){{\, \rm d}}z \right)_+
&\leq & \frac{1}{y-x}\int_x^y f(t,z) (\log f(t,z))_+{{\, \rm d}}z \\
& \leq &\frac{ A}{y - x}\, . \end{aligned}$$ We can invert this inequality to get: $$\frac{1}{y-x}\int_x^y f(t,z){{\, \rm d}}z \leq \Phi\left ( \frac{ A}{y - x} \right)\, , \label{eq:subholder}$$ where $\Phi:[0,+\infty) \to [1,+\infty)$ is the inverse bijection of $x \mapsto x (\log x)_+$. We define $\eta(z) = z \Phi(z^{-1}A)$. Clearly $\eta(z)\to 0$ as $z\to 0$ since $\Phi$ is sublinear. Combining (\[eq:embedding\]) and (\[eq:subholder\]), we deduce the estimate (\[eq:equicont\]).
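For intuition, $\Phi$ and $\eta$ are easy to evaluate numerically. The sketch below (with the arbitrary choice $A = 1$; the bisection solver is an implementation detail, not from the text) illustrates that $\eta(z)\to 0$ as $z\to 0$, albeit only at a logarithmic rate:

```python
import math

def Phi(y):
    """Inverse bijection of x -> x (log x)_+ from [0, +inf) to [1, +inf):
    returns the x >= 1 solving x log x = y (with Phi(0) = 1), by bisection."""
    if y <= 0.0:
        return 1.0
    lo, hi = 1.0, 2.0
    while hi * math.log(hi) < y:       # bracket the root
        hi *= 2.0
    for _ in range(200):               # bisect to machine precision
        mid = 0.5 * (lo + hi)
        if mid * math.log(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def eta(z, A=1.0):
    """Modulus of continuity eta(z) = z * Phi(A / z) of Lemma [lem:I(N)]."""
    return z * Phi(A / z)
```

Since $\Phi(y)\sim y/\log y$, one gets $\eta(z)\sim A/\log(1/z)$: the modulus vanishes at $0$ more slowly than any power of $z$, which is precisely why no Hölder space could serve as $X$ here.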
We denote $\mathcal C^{0,\eta}$ the space of functions having modulus of continuity controlled by $\eta$: $$\mathcal C^{0,\eta} = \left\{ g \in \mathcal C^0\, : \, \sup_{x\neq y} \frac{|g (y) - g(x)|}{\eta(y - x)} < +\infty \right\}\, .$$ The injection $\mathcal C^{0,\eta} \subset \mathcal C^0$ is compact on bounded intervals [@Adams]. The behaviour of functions outside bounded intervals in our context is controlled by the following estimate which is a consequence of (\[eq:embedding\]) as $y\to +\infty$: $$|f(t,x)|^2 \leq \frac{ 1}{x} \left( \int_{z>0} z f(t,z){{\, \rm d}}z\right) \left(\int_{z>0}f(t,z) \left( \partial_z \log f(t,z) \right)^2 {{\, \rm d}}z \right) \, . \label{eq:bound infty}$$
The last requirement in the Aubin-Simon Lemma consists in obtaining a very weak estimate for the time derivative $\partial_t n^{\varepsilon}$. We can write $$\partial_t n^{\varepsilon}(t,z) + \partial_z j^{\varepsilon}(t,z) = 0\, ,$$ where $j^{\varepsilon}(t,z) = \partial_z n^{\varepsilon}(t,z) + a^{\varepsilon}(t)n^{\varepsilon}(t,z) $ is uniformly bounded in $L^2(0,T;L^1({\mathbb{R}}_+))$, due to (\[eq:main estimate eps\]) and the following inequalities: $$\|a^{\varepsilon}\|^2_{L^2(0,T)}\leq \|\partial_z n^{\varepsilon}\|^2_{L^2\left(0,T;L^1({\mathbb{R}}_+)\right)} \leq M \int_0^T \int_{z>0} n^{\varepsilon}(t,z)(\partial_z \log n^{\varepsilon}(t,z))^2 \, {{\, \rm d}}t{{\, \rm d}}z\, .$$ Hence $\partial_t n^{\varepsilon}$ is uniformly bounded in $L^2\left(0,T;(W^{1,\infty}({\mathbb{R}}_+))^\prime\right)$.
We introduce some useful functional spaces, endowed with their corresponding norms: $$\begin{aligned}
X &=& \{ g \in \mathcal{C}^{0,\eta}({\mathbb{R}}_+) : \, z^{1/2}g(z) \in L^\infty({\mathbb{R}}_+) \}\, ,\\
B &=& \mathcal{C}^{0}({\mathbb{R}}_+)\cap L^\infty({\mathbb{R}}_+) \, ,\\
Y &=& \left(W^{1, \infty }({\mathbb{R}}_+)\right) ^\prime\, .\end{aligned}$$ It is straightforward to check that $X$ is compactly embedded in $B$.
Combining the above estimates (\[eq:moment eps\]–\[eq:main estimate eps\]–\[eq:equicont\]–\[eq:bound infty\]) we obtain that $n^{\varepsilon}$ is bounded in $L^2(0, T;X)$ uniformly with respect to ${\varepsilon}$. The Aubin-Simon Lemma ensures that, up to extracting a subsequence, $n^{\varepsilon}$ converges strongly in $L^2(0, T;B)$ towards some $n$. From uniform convergence of $n^{\varepsilon}$, we deduce that $a^{\varepsilon}(t) \to n(t,0)$ strongly in $L^2(0,T)$. Hence we can pass to the limit in the nonlinear term $a^{\varepsilon}(t) n^{\varepsilon}(t,z)$ in the weak formulation.
To conclude we verify that the [*a priori*]{} estimates given in Proposition \[apriori\] are valid after passing to the limit ${\varepsilon}\to 0$. From the strong convergence in $L^2(0,T;B)$ we deduce that, up to extracting a subsequence that we do not relabel, $$\lim_{{\varepsilon}\to 0} \int_{z>0} n^{\varepsilon}(t,z) \left( \log n^{\varepsilon}(t,z)\right) _+ {{\, \rm d}}z = \int_{z>0} n (t,z) \left( \log n (t,z)\right) _+ {{\, \rm d}}z \, , \quad {\rm a.e.}\; t\in (0,T)\, .$$ On the other hand, we use the convex character of the functional (see [@BDP] and the references therein) $\int_{z>0} f(z) \left( \partial _z \log f(z) \right)^2 {{\, \rm d}}z = 4 \int_{z>0} \left( \partial _z \sqrt{ f(z)} \right)^2 {{\, \rm d}}z $. We have finally,$$\liminf_{{\varepsilon}\to 0} \int_0^t \int_{z>0} n^{\varepsilon}(s,z) \left( \partial _z \log n^{\varepsilon}(s,z) \right)^2 {{\, \rm d}}z {{\, \rm d}}s \geq
\int_0^t \int_{z>0} n(s,z) \left( \partial _z \log n(s,z) \right)^2 {{\, \rm d}}z {{\, \rm d}}s \, .$$ So the [*a priori*]{} estimate (\[eq:main estimate\]) is valid a.e. $t\in (0,T)$.
Long-time behaviour for the critical and the subcritical cases {#sec:StSt}
--------------------------------------------------------------
In this Section, we investigate the long-time behaviour of solutions in the case $M\leq 1$ using entropy methods. We distinguish between the critical case (no need to rescale) and the sub-critical case (self-similar diffusive scaling).
We stress that the method for proving global existence in the critical case $M=1$ strongly relies on the entropy estimate of Proposition \[apriori\]. This is why we analyse global existence and long-time behaviour together.
### The critical case: global existence and asymptotic convergence {#sec:M=1}
The main inequality we have used so far in order to prove global existence is (\[ineq:trace:0\]). Equality occurs if $\log n(t,z)$ is linear w.r.t. $z$: there exists $\alpha (t)>0$ such that $n(t,z) = M \alpha(t) \exp(-\alpha(t) z)$. In fact the boundary condition (\[eq:1D\]) implies $M = 1$. On the other hand the stationary states to equation (\[eq:1D\]) are precisely the one-parameter family: $$h_\alpha(z)=\alpha \exp\left(-\alpha z\right)\, , \quad \alpha>0 \, .$$ This motivates the introduction of the relative entropy: $${\mathbf{H}}(t) =\int_{z>0}\frac{n(t,z)}{h_\alpha (z)} \log \left(\frac{n(t,z)}{h_\alpha (z)}\right)h_\alpha (z){{\, \rm d}}z \\
=\int_{z>0} n(t,z) \log n(t,z) {{\, \rm d}}z + \alpha {\mathbf{J}}(t) - \log \alpha \, ,$$ where ${\mathbf{J}}(t) = \int_{z>0} z\, n(t,z) {{\, \rm d}}z$ denotes the first moment of the density.
Recalling (\[faible11\]), we notice that the first moment of the density is conserved in the case $M = 1$: ${\mathbf{J}}(t) = {\mathbf{J}}(0)$. This prescribes the value for $\alpha$ provided we can pass to the limit $t\to \infty$: $\alpha^{-1} = {\mathbf{J}}(0)$. We also recall the formal computation giving the time evolution of the relative entropy (\[eq:entropy dissipation\]): $$\begin{aligned}
\frac{{{\, \rm d}}}{{{\, \rm d}}t} {\mathbf{H}}(t)&=& - \int_{z>0} n(t,z)\left( \partial_z \log n(t,z) \right)^2 {{\, \rm d}}z + n(t,0) ^2 \label{eq:dissip ent M=1} \\
&=& - \int_{z>0} n(t,z)\left(\partial_z \log n(t,z) + n(t,0)\right)^2 {{\, \rm d}}z \leq 0\, . \nonumber\end{aligned}$$ Jensen’s inequality yields ${\mathbf{H}}(t)\geq 0$, so we have $0\leq {\mathbf{H}}(t)\leq {\mathbf{H}}(0)$. We deduce from Lemma \[BDP\] that the quantity $\int_{z>0} n(t,z) (\log n(t,z))_+ {{\, \rm d}}z$ is uniformly bounded by some constant denoted by $C_0$: $$\int_{z>0} n(t,z) (\log n(t,z))_+ {{\, \rm d}}z \leq C_0 \, , \quad {\rm a.e.}\; t\in (0,+\infty)\, .$$
The method for proving convergence in relative entropy towards $h_\alpha$ is as follows. We first gain [*a priori*]{} estimates which enable us to pass to the limit after extraction as in Section \[sec:aubin\]. For this we update the estimates of Section \[sec:M<1\] with the key information that the entropy ${\mathbf{H}}$ is uniformly bounded. The identification of the limit requires more information concerning the behaviour of the density at infinity. We use the fact that the first moment drives the evolution of the second one. Finally we conclude that the entropy converges to 0 along some subsequence. Since it is non-increasing, it converges to 0 globally.
#### A-priori bound
We cannot follow the strategy developed in Section \[sec:M<1\] since we crucially used $M<1$. We need to gain some control on the dissipation (\[eq:dissip ent M=1\]), which is the competition of two opposite contributions that are nearly equally balanced. For that purpose we introduce the function $\Lambda: {\mathbb{R}}_+ \to {\mathbb{R}}_+$ such that $\Lambda(0) = 0$ and $\Lambda ' (u)= \left( \log u\right) ^{1/2}_+$. It is non-decreasing, convex and superlinear. Thus there exists $A\in {\mathbb{R}}$ such that $\Lambda(u)^2\geq 2 C_0 u^2 $ for all $u\geq A$. Adapting (\[ineq:trace:0\]) to our context we get $$\begin{aligned}
\Lambda(n(t,0))^2 &=& \left(-\int_{z>0} \partial _z \left(\Lambda (n(t,z)) \right) {{\, \rm d}}z\right) ^2 \nonumber \\
&=& \left(-\int_{z>0} \Lambda' (n(t,z)) n(t,z) \partial _z (\log n(t,z)) {{\, \rm d}}z\right) ^2 \nonumber \\
&\le & \left(\int_{z>0} n(t,z) |\Lambda' (n(t,z))| ^2 {{\, \rm d}}z\right) \left(\int_{z>0} n(t,z)\left( \partial_z \log n(t,z) \right)^2 {{\, \rm d}}z \right)\nonumber\\
& \leq &\left( \int_{z>0} n(t,z) (\log n(t,z))_+{{\, \rm d}}z \right) \left(\int_{z>0} n(t,z)\left( \partial_z \log n(t,z) \right)^2 {{\, \rm d}}z \right) \nonumber \\
& \leq &C_0 \int_{z>0} n(t,z)\left( \partial_z \log n(t,z) \right)^2 {{\, \rm d}}z \, . \label{eq:trace Lambda:ajout:nico} \end{aligned}$$ From (\[eq:dissip ent M=1\]) and (\[eq:trace Lambda:ajout:nico\]), we deduce that $$\frac{ {{\, \rm d}}}{{{\, \rm d}}t} \int_{z>0} n(t,z) \log n(t,z) {{\, \rm d}}z \leq \left\{\begin{array}{ll} 0 & \mbox{if} \quad n(t,0)\leq A \, ,\\ - \frac{\Lambda(n(t,0))^2}{ C_0} + n(t,0)^2 \leq -n(t,0)^2 & \mbox{if} \quad n(t,0)\geq A \, .\end{array}\right.$$ We introduce the set $E = \{t: n(t,0) \geq A\}$. We have obtained the estimate $$\int_E n(t,0)^2{{\, \rm d}}t \leq \int_{z>0} n_0(z) \log n_0(z) {{\, \rm d}}z\,, \label{eq:L2 n(t,0):ajout:nico}$$ thus $n(t,0)$ cannot be too large (in $L^2$ sense).
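The superlinearity of $\Lambda$ used above can be illustrated numerically; the sketch below (the quadrature resolution and sample evaluation points are illustrative choices) checks that $\Lambda(u)/u$ grows without bound, so that $\Lambda(u)^2 \geq 2C_0 u^2$ indeed holds for $u$ large enough, whatever the constant $C_0$.

```python
import numpy as np

# Lambda(0) = 0 and Lambda'(u) = (log u)_+^{1/2}, hence
# Lambda(u) = int_1^u sqrt(log v) dv for u >= 1 (no contribution below 1).
def Lam(u, pts=100001):
    v = np.linspace(1.0, u, pts)
    y = np.sqrt(np.log(v))
    dv = v[1] - v[0]
    return dv * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoid rule

# Lambda(u)/u ~ sqrt(log u) is increasing: superlinearity
ratios = [Lam(u) / u for u in (10.0, 100.0, 1000.0)]
assert ratios[0] < ratios[1] < ratios[2]
```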
We deduce from (\[eq:dissip ent M=1\]) and (\[eq:L2 n(t,0):ajout:nico\]) that $\int _0 ^t\int_{z>0} n(s,z) \left(\partial _z\log n(s,z)\right)^2 {{\, \rm d}}z {{\, \rm d}}s$ is bounded for all $t \in (0,T)$. The previous statements prove that Proposition \[apriori\] remains valid in the case $M=1$. The existence proof in the case $M=1$ is then similar to the case $M<1$ and we do not repeat it here.
#### Passing to the limit
Let $N$ be any integer. We translate the solution in time: we define $u_N(s,x) = n(N + s,x)$. The function ${\mathbf{H}}(t)$ is non-increasing and bounded below by zero. Therefore the entropy dissipation (\[eq:dissip ent M=1\]) converges to zero in an averaged sense. The estimate $$\int_N^{N+1} \left( \int_{z>0} n(t,z)\left( \partial_z \log n(t,z) \right)^2 {{\, \rm d}}z - n(t,0) ^2 \right){{\, \rm d}}t = {\mathbf{H}}(N)- {\mathbf{H}}(N+1) \xrightarrow[N\to \infty]{} 0\, ,$$ reads $$\int_0^{1} \left( \int_{z>0} u_N(s,z)\left( \partial_z \log u_N(s,z) \right)^2 {{\, \rm d}}z - u_N(s,0) ^2 \right){{\, \rm d}}s \xrightarrow[N\to \infty]{} 0 \, . \label{eq:dissipation u0}$$ We deduce from (\[eq:L2 n(t,0):ajout:nico\]) that $u_N(s,0)$ is bounded in $L^2(0,1)$ uniformly w.r.t. $N$. Hence both terms in (\[eq:dissipation u0\]) are bounded. This enables us to pass to the limit as in Section \[sec:aubin\]. Up to extracting a subsequence (labelled with $N'$) there exists $u_\infty$ such that $u_{N'}\to u_\infty$ strongly in $L^2(0,1;B)$: $$\int_0^1 \|u_{N'}(s) - u_\infty(s)\|_{B}^2{{\, \rm d}}s \to 0\, . \label{eq:limit}$$ We can pass to the limit in each term of the averaged dissipation: $$\begin{aligned}
0 = \liminf_{N'\to \infty}\int_0^{1} \left( \int_{z>0} u_{N'}(s,z)\left( \partial_z \log u_{N'}(s,z) \right)^2 {{\, \rm d}}z - u_{N'}(s,0) ^2 \right) {{\, \rm d}}s \\ \geq \int_0^{1} \left( \int_{z>0} u_{\infty}(s,z)\left( \partial_z \log u_{\infty}(s,z) \right)^2 {{\, \rm d}}z - u_{\infty}(s,0) ^2 \right) {{\, \rm d}}s \geq 0 \, .
\end{aligned}$$ We have used the $L^2(0,T;L^\infty({\mathbb{R}}_+))$ strong convergence (\[eq:limit\]) to pass to the limit in the nonlinear term $u_{N'}(s,0)^2$, and also the convexity of the functional $\int_{z>0} f(z) \left( \partial _z \log f(z) \right)^2 {{\, \rm d}}z$ (see Section \[sec:aubin\]).
#### Identification of the limit
We deduce that $u_\infty$ satisfies almost everywhere $$u_\infty(s,z) = \beta(s) \exp( - \alpha(s) z)\,,\quad \alpha(s),\beta(s) >0\, .$$ To determine $\alpha(s)$ and $\beta(s)$ we shall use the conservation of mass and of the first moment. Since the first moment is uniformly bounded, we have that $M = \lim \int_{z>0} u_{N'}(s,z){{\, \rm d}}z = \int_{z>0} u_{\infty}(s,z) {{\, \rm d}}z$. This yields $\alpha(s) = \beta(s)$.
We have proved so far that we can always extract a subsequence such that $u_{N'}(s,z)$ approaches $u_\infty(s,z)$ in $L^2(0,T;B)$. We explain below why it is delicate to derive $\alpha(s) = \alpha = {\mathbf{J}}(0)^{-1}$ without any better control of the density $n(t,z)$ as $z\to +\infty$. Suppose we have $\alpha(s)\equiv \overline \alpha$ and the convergence $u_N(s,z)\to u_\infty(z)$ is uniform. We would have on the one hand, $$\alpha^{-1} = \liminf \int_{z>0} z u_{N'}(t,z){{\, \rm d}}z \geq \int_{z>0} z u_{\infty}(z) {{\, \rm d}}z = (\overline\alpha)^{-1} \, ,
\label{eq:weak convergence moment}$$ and on the other hand, $$0\leq \lim {\mathbf{H}}(t) = \int_{z>0} u_\infty(z) \log u_\infty(z){{\, \rm d}}z + 1 - \log \alpha = \log\overline\alpha - \log\alpha \, .$$ We would deduce $\overline \alpha\geq\alpha$ which is the same as (\[eq:weak convergence moment\]).
In the case $\int_{z>0} z^2 n_0(z){{\, \rm d}}z < +\infty$, let us examine the evolution of the second moment. We simply have $$\frac{1}{2}\frac {{{\, \rm d}}}{{{\, \rm d}}t} \int_{z>0} z^2 n(t,z){{\, \rm d}}z = M - n(t,0) {\mathbf{J}}(t) = 1 - n(t,0) \alpha^{-1}\, . \label{eq:evol I}$$ The idea is to pass to the pointwise limit $n(t,0)\to \overline \alpha$. If $\overline \alpha> \alpha$, the right-hand side of (\[eq:evol I\]) becomes asymptotically $1 - \overline \alpha\alpha^{-1} < 0$, which leads to a contradiction.
Let us introduce the notation ${\mathbf{I}}(t) = \int_{z>0} (z^2/2) n(t,z){{\, \rm d}}z$. We have $${\mathbf{I}}(N+1) - {\mathbf{I}}(N) = \int_N^{N+1} \left(1 - n(t,0)\alpha^{-1}\right) {{\, \rm d}}t = \int_0^{1} \left(1 - u_N(s,0)\alpha^{-1}\right) {{\, \rm d}}s\, .$$ Since ${\mathbf{I}}$ is a non-negative quantity, we clearly have $ \limsup {\mathbf{I}}(N+1) - {\mathbf{I}}(N) \geq 0$. Furthermore we have $ \limsup {\mathbf{I}}(N+1) - {\mathbf{I}}(N) \leq 0$. To see this, assume on the contrary that $ \limsup {\mathbf{I}}(N+1) - {\mathbf{I}}(N) = \delta >0$. We can extract a converging subsequence. Keeping the same notation as above, we have $\lim {\mathbf{I}}(N'+1) - {\mathbf{I}}(N') = \delta$. We can pass to the limit similarly (up to further extraction) in the following averaged quantities: $$\begin{aligned}
\alpha^{-1} &=& \liminf \int_0^1 \int_{z>0} z u_{N'}(t,z){{\, \rm d}}z {{\, \rm d}}s \nonumber \\
&\geq & \int_0^1 \int_{z>0} z u_{\infty}(s,z) {{\, \rm d}}z {{\, \rm d}}s= \int _0^1(\alpha(s))^{-1}{{\, \rm d}}s\, , \label{eq:lim:crit:1} \\
\delta &=& \lim \int_0^{1} \left(1 - u_{N'}(s,0)\alpha^{-1}\right) {{\, \rm d}}s = 1 - \alpha^{-1} \int _0^1 \alpha(s){{\, \rm d}}s\, . \label{eq:lim:crit:2}\end{aligned}$$ Inequality (\[eq:lim:crit:1\]) yields $\int_0^1 \alpha(s){{\, \rm d}}s \geq \alpha $ by Jensen’s inequality. This is in contradiction with (\[eq:lim:crit:2\]).
We conclude that $\limsup {\mathbf{I}}(N+1) - {\mathbf{I}}(N) = 0$. We extract a converging subsequence, such that $\lim {\mathbf{I}}(N'+1) - {\mathbf{I}}(N') = 0$. Hence we obtain (\[eq:lim:crit:1\]) and (\[eq:lim:crit:2\]) with $\delta = 0$. The equality case in Jensen’s inequality yields $\alpha(s)\equiv \alpha$.
#### Asymptotic convergence (without any extraction)
We have proved that there exists a subsequence such that $u_{N'}$ converges towards $u_\infty = h_\alpha $ in $L^2(0,T;B)$. We cannot pass to the limit pointwise in time from $L^2$ convergence. However there exists a sequence of times $s_{N'}\in (0,1)$ such that $\|u_{N'}(s_{N'}) - u_\infty\|_{B}\to 0$. This includes uniform convergence and uniform decay at infinity. We can pass to the limit in the entropy term and we obtain $H(u_{N'}(s_{N'})) \to H(u_\infty) = 0$. It means $H[n(N' + s_{N'})] \to 0$. Using the non-increasing property of the entropy we have $H[n(t)] \to 0$ as $t\to \infty$ (without extracting any subsequence).
Finally we recall the Csiszár-Kullback inequality [@Csiszar; @Kullback]. For any non-negative functions $f,g \in L^1({\mathbb{R}}_+)$ such that $\int_{x>0} f(x) {{\, \rm d}}x=\int_{x>0} g(x) {{\, \rm d}}x=1$, the following inequality holds: $$\|f-g \|^2_{L^1({\mathbb{R}}_+)}\le 4 \int_{x>0} f(x)\log \left(\frac{f(x)}{g(x)}\right) {{\, \rm d}}x \, .$$ This yields $\|n(t) - h_{\alpha}\|_{L^1}\to 0$.
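As a quick numerical sanity check of the Csiszár-Kullback inequality, one can take two exponential densities on $(0,+\infty)$ (the parameter values below are illustrative; the relative entropy then has the closed form $\log(a/b) + b/a - 1$):

```python
import numpy as np

# f = a*exp(-a*x), g = b*exp(-b*x): two unit-mass densities on (0, inf).
a, b = 2.0, 1.0
x = np.linspace(0.0, 40.0, 400001)
dx = x[1] - x[0]
f = a * np.exp(-a * x)
g = b * np.exp(-b * x)
l1 = np.sum(np.abs(f - g)) * dx          # ||f - g||_{L^1}, here = 1/2
kl = np.log(a / b) + b / a - 1.0         # int f log(f/g) = log 2 - 1/2
assert l1**2 <= 4.0 * kl                 # the stated inequality
assert abs(l1 - 0.5) < 1e-3
```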
### Self-similar decay in the sub-critical case {#sec:self-similar}
In the sub-critical case $M<1$ the density $n(t,z)$ is expected to decay with a self-similar diffusion scaling [@BDP]. To capture this asymptotic behaviour we rescale the density accordingly: $$n(t,z) = \frac{1}{\sqrt{1+2t}} u\left( \log\sqrt{1+2t} , \frac{z}{\sqrt{1+2t}}\right)\, .$$ The new density $u(\tau,y)$ satisfies: $$\label{eq:rescaled}
\partial_\tau u(\tau,y) = \partial_{yy} u(\tau,y) + \partial_y \left(y u(\tau,y) \right) + u(\tau,0)\partial_y u(\tau,y) \, ,$$ together with a no-flux boundary condition: $\partial _y u (\tau,0)+u(\tau,0)^2 =0$. The additional leftward drift helps to confine the mass in the new frame $(\tau,y)$. The unique stationary equilibrium in this new setting can be computed explicitly: $$\label{eq:stat state rescaled}
G_\alpha(y) = \alpha\exp\left(-\alpha y - {y^2}/2\right)\, ,$$ where $\alpha$ is uniquely determined by the condition $\int_{y>0} G_\alpha(y){{\, \rm d}}y = M$. This can be rewritten as $P(\alpha) = M$, where $P$ is the increasing function defined by: $$P(\alpha) = \int_{y>0} \exp\left(- y - \frac{y^2}{2\alpha^2}\right){{\, \rm d}}y \, , \quad \left\{\begin{array}{r} \lim_{\alpha \to 0} P(\alpha) = 0 \\ \lim_{\alpha \to +\infty} P(\alpha) = 1 \end{array}\right. \, .$$ We re-define the relative entropy and the first moment in the rescaled frame: $$\begin{aligned}
{\mathbf{H}}(\tau) &=& \int_{y>0}\frac{u(\tau,y)}{G_\alpha (y)} \log \left(\frac{u(\tau,y)}{G_\alpha (y)}\right)G_\alpha (y){{\, \rm d}}y\, ,\\
{\mathbf{J}}(\tau) &=& \int_{y>0} y u(\tau,y){{\, \rm d}}y\, . \end{aligned}$$ We also introduce a Lyapunov functional for equation (\[eq:rescaled\]): $${\mathbf{L}}(\tau)={\mathbf{H}}(\tau) + \frac{1}{2(1-M)} \left( {\mathbf{J}}(\tau) - \alpha(1-M) \right)^2 \, .$$ Note that it is a non-negative quantity by Jensen’s inequality.
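Although $P$ has no closed form, the condition $P(\alpha) = M$ above is straightforward to solve numerically since $P$ is increasing; a sketch by bisection (the Simpson quadrature, the truncation at $y = 40$ and the sample mass $M = 0.5$ are illustrative choices):

```python
import math

# P(alpha) = int_0^inf exp(-y - y^2/(2 alpha^2)) dy, approximated by
# composite Simpson quadrature on the (illustrative) truncated range [0, 40].
def P(alpha, ymax=40.0, n=4000):
    h = ymax / n
    s = 0.0
    for i in range(n + 1):
        y = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * math.exp(-y - y * y / (2.0 * alpha * alpha))
    return s * h / 3.0

# P is increasing with P(0+) = 0 and P(+inf) = 1, so bisection applies
def solve_alpha(M, lo=1e-6, hi=1e6):
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if P(mid) < M:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = solve_alpha(0.5)                 # sample sub-critical mass M = 0.5
assert abs(P(alpha) - 0.5) < 1e-6
```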
\[lem:entropy u\] The Lyapunov functional ${\mathbf{L}}$ is non-increasing: $$\frac{{{\, \rm d}}}{{{\, \rm d}}t} {\mathbf{L}}(\tau) = - {\mathbf{D}}(\tau) \leq 0\, .$$ The dissipation reads as follows $$\begin{aligned}
{\mathbf{D}}(\tau) &=& \int_{y>0} u(\tau,y) \left( \partial_y \log u(\tau,y) + y + u(\tau,0) \right)^2{{\, \rm d}}y \nonumber \\
& & + \frac{1}{(1-M)}\left(\frac{{{\, \rm d}}}{{{\, \rm d}}\tau} {\mathbf{J}}(\tau)\right)^2\, .
\label{eq:dissipation u}
\end{aligned}$$
We compute the evolution of the entropy as previously: $$\begin{aligned}
\frac{{{\, \rm d}}}{{{\, \rm d}}\tau} {\mathbf{H}}(\tau)&=& \int_{y>0} \partial _\tau u(\tau,y)\left( \log (u(\tau,y)) +\alpha y + \frac{y^2}{2} \right) {{\, \rm d}}y \nonumber \\
&=& - \int_{y>0} \left( \partial_y u(\tau,y) + u(\tau,0) u(\tau,y) + y u(\tau,y) \right) \left( \frac{ \partial_y u(\tau,y)}{u(\tau,y)} +\alpha + y \right) {{\, \rm d}}y \nonumber \\
& =& - \int_{y>0} u(\tau,y) \left( \partial_y \log u(\tau,y) + y \right)^2{{\, \rm d}}y +
u(\tau,0)^2 - u(\tau,0) {\mathbf{J}}(\tau) \nonumber \\
& & \qquad +\alpha u(\tau,0) - \alpha {\mathbf{J}}(\tau) - \alpha u(\tau,0) M \nonumber\\
& = & - \int_{y>0} u(\tau,y) \left( \partial_y \log u(\tau,y) + y +u(\tau,0) \right)^2{{\, \rm d}}y \nonumber\\
& &\qquad +
(M-1)u(\tau,0)^2 + u(\tau,0) {\mathbf{J}}(\tau) +\alpha (1-M) u(\tau,0) - \alpha {\mathbf{J}}(\tau) \, . \label{eq:u0}\end{aligned}$$ Moreover, the time evolution of the first moment in the rescaled frame reads: $$\frac{{{\, \rm d}}}{{{\, \rm d}}\tau}{\mathbf{J}}(\tau) = (1-M)u(\tau,0) - {\mathbf{J}}(\tau)\, .$$ As compared to (\[eq:moment0\]), the additional contribution is due to the rescaling drift. We can eliminate $u(\tau,0)$ from (\[eq:u0\]) in the following two steps: $$\begin{aligned}
u(\tau,0) {\mathbf{J}}(\tau) +\alpha (1-M) u(\tau,0) - \alpha {\mathbf{J}}(\tau)
& = &
\frac{{\mathbf{J}}(\tau)}{(1-M)}\frac{{{\, \rm d}}}{{{\, \rm d}}\tau} {\mathbf{J}}(\tau)+ \frac{{\mathbf{J}}(\tau)^2 }{(1-M)}+ \alpha \frac{{{\, \rm d}}}{{{\, \rm d}}\tau} {\mathbf{J}}(\tau) \nonumber \\
& =& -\frac{{{\, \rm d}}}{{{\, \rm d}}\tau} \frac{\left( {\mathbf{J}}(\tau) - \alpha(1-M) \right)^2}{2(1-M)}
\nonumber\\
& &
+ \frac{2{\mathbf{J}}(\tau)}{(1-M)}\frac{{{\, \rm d}}}{{{\, \rm d}}\tau} {\mathbf{J}}(\tau)+ \frac{{\mathbf{J}}(\tau)^2}{(1-M)} \, ,
\label{eq:entropy J}\end{aligned}$$ and $$- \frac{1}{(1-M)}\left(\frac{{{\, \rm d}}}{{{\, \rm d}}\tau} {\mathbf{J}}(\tau)\right)^2
=(M-1)u(\tau,0)^2 +\frac{2{\mathbf{J}}(\tau)}{(1-M)}\frac{{{\, \rm d}}}{{{\, \rm d}}\tau} {\mathbf{J}}(\tau)+ \frac{{\mathbf{J}}(\tau)^2 }{(1-M)}
\, . \label{eq:dissipation J}$$ Combining (\[eq:u0\]) – (\[eq:entropy J\]) – (\[eq:dissipation J\]) the proof of Lemma \[lem:entropy u\] is complete.
To prove convergence of $u(\tau,\cdot)$ towards $G_\alpha$ we develop the same strategy as in Section \[sec:M=1\] for the critical case $M = 1$. The main argument (apart from passing to the limit) consists in identifying the possible configurations $u_\infty$ for which the dissipation ${\mathbf{D}}$ vanishes. In fact this occurs if and only if both terms in (\[eq:dissipation u\]) are zero. This means that ${\mathbf{J}}_\infty(\tau) = (1 - M) u_\infty(\tau,0)$ on the one hand, and on the other hand, $$\partial_y \log u_\infty(\tau,y) + y + u_\infty(\tau,0) = 0 \, .$$ We obtain that $u_\infty \equiv G_\alpha$, where $G_\alpha$ is given by (\[eq:stat state rescaled\]). To pass to the limit as in Section \[sec:M=1\] we need to gain some good control of $\int_{y>0} u(\tau,y) \left( \partial_y \log u(\tau,y) \right)^2{{\, \rm d}}y$ from the dissipation term ${\mathbf{D}}$. The situation here is simpler than in Section \[sec:M=1\] since the mass is sub-critical. The argument goes as follows $$\begin{aligned}
& &\int_{y>0} u(\tau,y) \left( \partial_y \log u(\tau,y) + y + u(\tau,0) \right)^2{{\, \rm d}}y
\\ &= &
\int_{y>0} u(\tau,y) \left( \partial_y \log u(\tau,y) \right)^2{{\, \rm d}}y + (M-2) u(\tau,0)^2 + 2 u(\tau,0) {\mathbf{J}}(\tau )\\
& & + \int_{y>0} y^2 u(\tau,y){{\, \rm d}}y - 2M \\
& \geq & \left(M + \frac{1}{M}-2\right) u(\tau,0)^2 - 2M \, ,\end{aligned}$$ where we have used inequality (\[ineq:trace:0\]). The quantity $M + M^{-1} - 2 = (1-M)^2/M$ is positive since $M<1$. Hence, recalling Proposition \[apriori\], we can prove directly that $u(\cdot,0)$ belongs to $L^2$ locally in time (this was the purpose of (\[eq:trace Lambda:ajout:nico\]) – (\[eq:L2 n(t,0):ajout:nico\])).
Finally, we obtain that ${\mathbf{L}}$ converges to zero as $\tau\to+\infty$. So $u(\tau,\cdot)$ converges towards $G_\alpha$ in entropy sense.
Blow-up of solutions for super-critical mass {#sec:BU}
--------------------------------------------
To prove that solutions blow up in finite time when the mass is super-critical ($M>1$) and $n_0$ is non-increasing, we show that the first moment of $n(t,z)$ cannot remain positive for all time. This technique was first used by Nagai [@Nagai], then by many authors in various contexts (see [@Biler95; @BilerWoyczynski; @Corrias.Perthame.Zaag; @DP; @CieslakLaurencot] for instance).
The assumption that $n_0$ is a non-increasing function guarantees that $n(t,\cdot)$ is also non-increasing for any time $t>0$, by the maximum principle. Indeed the derivative $v(t,z) = \partial_z n(t,z)$ satisfies a parabolic-type equation without any source term; it is initially non-positive, and it is non-positive on the boundary due to (\[cl1D\]).
Therefore $-\partial_z n(t,z)/n(t,0)$ is a probability density at any time $t>0$. We deduce from Jensen’s inequality the following interpolation estimate: $$\left(\int_{z>0} z \frac{-\partial_z n(t,z)}{n(t,0)}{{\, \rm d}}z\right)^2 \leq
\int_{z>0} z^2 \frac{-\partial_z n(t,z)}{n(t,0)}{{\, \rm d}}z\, .$$ This can be rewritten in a more convenient way as follows, $$\label{interpol}
M^2 \leq 2 n(t,0) \int_{z>0} z n(t,z) {{\, \rm d}}z \, .$$
We denote the first moment by ${\mathbf{J}}(t) = \int_{z>0} z n(t,z){{\, \rm d}}z$. We plug (\[interpol\]) into the evolution of the moment (\[faible11\]): $$\begin{aligned}
{\mathbf{J}}(t)& =& {\mathbf{J}}(0) + (1 - M) \int_0^t n(s,0){{\, \rm d}}s \nonumber \\
& \leq &{\mathbf{J}}(0) + \frac{(1 - M)M^2}{2} \int_0^t \frac{1}{{\mathbf{J}}(s)}{{\, \rm d}}s\label{eq:moment0}\, .\end{aligned}$$ We introduce the auxiliary function ${\mathbf{K}}(t) = {\mathbf{J}}(0) + \frac{(1 - M)M^2}{2} \int_0^t {\mathbf{J}}(s)^{-1}{{\, \rm d}}s$. It is positive as long as the solution exists, since ${\mathbf{J}}(t)\leq {\mathbf{K}}(t)$, and it satisfies the following differential inequality: $$\frac{{{\, \rm d}}}{{{\, \rm d}}t}{\mathbf{K}}(t) = \frac{(1 - M)M^2}{2} \frac{1}{{\mathbf{J}}(t)} \leq \frac{(1 - M)M^2}{2} \frac{1}{{\mathbf{K}}(t)} \, ,$$ hence, $$\frac{{{\, \rm d}}}{{{\, \rm d}}t}{\mathbf{K}}(t)^2 \leq (1 - M)M^2\, .$$ We obtain a contradiction: the maximal time of existence $T^*$ is necessarily finite when $M>1$. On the other hand, following [@JL], it can be proved that the modulus of integrability has to become singular at $T^*$: $$\lim_{K\to +\infty} \left( \sup_{t\in(0,T^*)} \int_{z>0} (n(t,z)-K)_+{{\, \rm d}}z\right) >0\, .$$ Otherwise a truncation method enables one to prove local existence by replacing $n$ with $(n - K)_+$ for $K$ sufficiently large.
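Integrating the differential inequality for ${\mathbf{K}}(t)^2$ above, and using ${\mathbf{K}}(0) = {\mathbf{J}}(0)$ together with the positivity of ${\mathbf{K}}$, we obtain in passing an explicit upper bound on the blow-up time: $${\mathbf{K}}(t)^2 \leq {\mathbf{J}}(0)^2 - (M-1)M^2\, t \, , \quad \mbox{hence} \quad T^* \leq \frac{{\mathbf{J}}(0)^2}{(M-1)M^2}\, .$$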
It is natural to apply the Laplace transform to equation (\[eq:1D\]): $\mathcal L_z(n(t,z)) = \hat n(t,\zeta)=\int_{z>0} n(t,z) \exp(-\zeta z) {{\, \rm d}}z$. The occurrence of blow-up is then clear after transformation. We refer the reader to [@Calvez.Carrillo2010], where the Fourier transform has been applied successfully to the analysis of a one-dimensional caricature of the two-dimensional Keller-Segel equation.
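For the reader's convenience, a formal computation (integrating by parts twice and using the zero-flux condition $\partial_z n(t,0) + n(t,0)^2 = 0$) yields the transformed equation: $$\partial_t \hat n(t,\zeta) = \zeta\left(\zeta + n(t,0)\right) \hat n(t,\zeta) - \zeta\, n(t,0)\, ,$$ in which the quadratic growth rate in $\zeta$, reinforced by the boundary value $n(t,0)$, makes the obstruction to global existence apparent.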
Variants of blow-up criteria {#secvariants}
============================
In this section we determine necessary conditions for blow-up to occur for a fast decaying interaction potential (Section \[finite\_range\]) and for a finite interval (Section \[bounded\]).
Finite range of action {#finite_range}
----------------------
In this part we consider the following system: $$\label{eqfini:fr}
\partial _t n(t,z) = \partial _{zz} n(t,z) - \partial_z\left(n(t,z) \partial_z \phi(t,z)\right)\, , \quad t>0\, , \, z\in (0,+\infty)\, ,$$ with zero-flux at $z = 0$ and the attractive potential is given by $$\label{eq:phi alpha}
-\partial_{zz} \phi (t,z) + \alpha^2 \phi (t,z) = 0\, , \quad - \partial_z \phi (t,0) = n(t,0)\, .$$ We introduce the exponential moment of the solution: $${\mathbf{J}}_\alpha(t) = \int_{z>0} \exp(\alpha z) n(t,z) {{\, \rm d}}z\, .$$
\[prop:1D:FR\] Assume $M>1$ and that the exponential moment is small in the sense of criterion (\[eq:BU alpha\]) below. Assume in addition that $\exp(-\alpha z) n_0(z)$ is a non-increasing function. Then the solution to (\[eqfini:fr\]) – (\[eq:phi alpha\]) with initial data $n(0,z) = n_0(z)$ blows up in finite time.
The attractive field is given by $\partial_z \phi(t,z) = -\exp(-\alpha z) n(t,0)$. Similarly to the proof of Theorem \[th:1D BU\], we compute the time derivative of $ {\mathbf{J}}_\alpha(t)$: $$\frac{{{\, \rm d}}}{{{\, \rm d}}t} {\mathbf{J}}_\alpha (t) = \alpha ^2 {\mathbf{J}}_\alpha(t) + \alpha n(t,0)(1-M)\, .$$ We check that the function $u(t,z) = \exp(-\alpha z )n(t,z)$ is non-increasing w.r.t. $z$ for all time $t>0$. For this purpose we write the equation for $v(t,z) = \partial_z u(t,z)$. This reads as follows $$\begin{aligned}
\partial_t u(t,z) &=& \partial_{zz} u(t,z) + 2 \alpha \partial_{z} u(t,z) + \alpha^2 u(t,z) + \exp(-\alpha z) n(t,0) \partial_z u(t,z)\, , \\
\partial_t v(t,z) &=& \partial_{zz} v(t,z) + 2 \alpha \partial_{z} v(t,z) + \alpha^2 v(t,z) + \exp(-\alpha z) n(t,0) \partial_z v(t,z) \\
& &- \alpha \exp(-\alpha z) n(t,0) v(t,z) \, .\end{aligned}$$ Since the boundary condition reads $v(t,0) = -\alpha n(t,0)- n(t,0)^2\leq 0$ and the above parabolic equation preserves non-positivity we deduce that $v(t,z)\leq 0$ if $v(0,z)\leq 0$.
We can adapt the inequality (\[interpol\]) to the function $u(t,z)$ and we obtain $$\begin{aligned}
M^4 &\leq &\left(\int_{z>0} \exp(\alpha z) n(t,z) {{\, \rm d}}z\right)^2 \left(\int_{z>0} u(t,z) {{\, \rm d}}z\right)^2 \\
&\leq & {\mathbf{J}}_\alpha(t)^2 n(t,0)^2 \left(\int_{z>0} z \frac{-\partial_z u(t,z)}{u(t,0)} {{\, \rm d}}z\right)^2\\
& \leq & {\mathbf{J}}_\alpha(t)^2 n(t,0)^2 \int_{z>0} \left(\frac{\exp(2\alpha z) - 1 - 2\alpha z}{2\alpha^2}\right) \frac{-\partial_z u(t,z)}{u(t,0)} {{\, \rm d}}z \\
& \leq &\frac{1}{\alpha} {\mathbf{J}}_\alpha(t)^2 n(t,0) \int_{z>0} \left( \exp (\alpha z)- \exp (-\alpha z) \right) n(t,z){{\, \rm d}}z \\
& \leq & \frac{1}{\alpha} {\mathbf{J}}_\alpha(t)^2 n(t,0) \left( {\mathbf{J}}_\alpha(t) - \frac {M^2}{ {\mathbf{J}}_\alpha(t)}\right) \, . \end{aligned}$$ Finally, when $M>1$ we obtain that: $$\begin{aligned}
\frac{{{\, \rm d}}}{{{\, \rm d}}t}{\mathbf{J}}_\alpha(t) & \le &\alpha^2 {\mathbf{J}}_\alpha(t) + \frac{\alpha^2(1 - M)M^4}{ {\mathbf{J}}_\alpha(t)^3 \left( 1 - \frac{M^2}{{\mathbf{J}}_\alpha(t)^2}\right) } \, . \end{aligned}$$ Notice that $ {\mathbf{J}}_\alpha(0) >M$ by definition. We get an obstruction to global existence if the following condition holds true, $$\label{eq:BU alpha}
\frac{ {\mathbf{J}}_\alpha(0)^4}{M^4} \left( 1 - \frac {M^2}{ {\mathbf{J}}_\alpha(0)^2} \right) < (M-1) \, .$$
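In terms of the ratio $x = {\mathbf{J}}_\alpha(0)/M > 1$, criterion (\[eq:BU alpha\]) reads $x^4 - x^2 < M - 1$, so the admissible ratios form an interval $(1, x_*)$ with $x_*^2 = (1 + \sqrt{4M-3})/2$. A small numerical check (the sample mass $M = 2$ is an illustrative choice):

```python
import math

# Criterion (eq:BU alpha) in terms of x = J_alpha(0)/M > 1: x^4 - x^2 < M - 1.
# The threshold x_* solves t^2 - t = M - 1 with t = x_*^2.
M = 2.0
x_star = math.sqrt((1.0 + math.sqrt(4.0 * M - 3.0)) / 2.0)
assert x_star > 1.0
assert abs(x_star**4 - x_star**2 - (M - 1.0)) < 1e-12
# the criterion holds for x = 1.1 < x_star and fails for x = 1.5 > x_star
assert (1.1**4 - 1.1**2) < M - 1.0 < (1.5**4 - 1.5**2)
```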
Finite interval {#bounded}
---------------
In this part we consider the equation (\[eq1D\]) on a finite interval $(0,L)$ for some $L>0$, namely, $$\label{eqfini}
\partial _t n(t,z) = \partial _{zz} n(t,z) + (n(t,0)-n(t,L)) \partial _z n(t,z)\, , \quad t >0\, , \, z\in (0,L)\, ,
Equilibrium configurations are given by the family of functions: $$\label{eq:h (0,L)}
h(z) = \alpha \exp(- (\alpha - \beta)z )\, , \quad \beta = \alpha \exp(- (\alpha - \beta)L)\, .$$ There are two possibilities: either $\alpha = \beta$ and $h$ is constant, or $\alpha \neq \beta$ and $M = \int_0^L h(z) {{\, \rm d}}z = 1$. Observe that, as soon as $\alpha L \neq 1$, given $\alpha >0$ there exists a unique $\beta \neq \alpha$ satisfying (\[eq:h (0,L)\]). If $\alpha L < 1$ then $\beta >\alpha$ ($h$ is increasing), whereas if $\alpha L >1$ then $\beta < \alpha$ ($h$ is decreasing).\
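For a given $\alpha$, the nontrivial root $\beta \neq \alpha$ of (\[eq:h (0,L)\]) can be computed by bisection, excluding the trivial root $\beta = \alpha$; a sketch (the sample values $\alpha = 2$, $L = 1$ are illustrative), which also checks that the resulting profile has unit mass:

```python
import math

# Nontrivial root beta != alpha of beta = alpha * exp(-(alpha - beta) * L).
def solve_beta(alpha, L):
    F = lambda b: alpha * math.exp(-(alpha - b) * L) - b
    if alpha * L > 1.0:                  # nontrivial root lies in (0, alpha)
        lo, hi = 1e-12, alpha * (1.0 - 1e-9)
    else:                                # nontrivial root lies in (alpha, +inf)
        lo, hi = alpha * (1.0 + 1e-9), 2.0 * alpha
        while F(hi) <= 0.0:
            hi *= 2.0
    for _ in range(200):                 # plain bisection
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

alpha, L = 2.0, 1.0                      # alpha * L > 1: expect beta < alpha
beta = solve_beta(alpha, L)
assert abs(beta - alpha * math.exp(-(alpha - beta) * L)) < 1e-9
assert beta < alpha
mass = alpha * (1.0 - math.exp(-(alpha - beta) * L)) / (alpha - beta)
assert abs(mass - 1.0) < 1e-9            # M = int_0^L h = 1 for this branch
```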
\[faibleBDP1:finite\] Assume $M>1$ and that the first moment is small: $4{\mathbf{J}}(0) < L M$. Assume in addition that $ n_0(z)$ is a non-increasing function. Then the solution to (\[eqfini\]) with initial data $n(0,z) = n_0(z)$ blows up in finite time.\
We proceed again as in the proof of Theorem \[th:1D BU\]. From Jensen’s inequality, it follows that: $$\left(\int_{0}^L z \frac{-\partial_z n(t,z)}{n(t,0)-n(t,L)}{{\, \rm d}}z\right)^2 \leq
\int_{0}^L z^2 \frac{-\partial_z n(t,z)}{n(t,0)-n(t,L)}{{\, \rm d}}z\, ,$$ hence, using that $n(t,0)>n(t,L)$ for any time $t>0$, we deduce that $$\label{interpolfini}
(M-Ln(t,L))^2 \leq (n(t,0)-n(t,L)) \Big( 2\int_{0}^L z n(t,z) {{\, \rm d}}z-L^2 n(t,L)\Big) \, ,$$ and the inequality remains true when $n(t,0)=n(t,L)$ and $n(t,\cdot)$ is constant. Therefore, the first moment ${\mathbf{J}}(t) = \int_{0}^L z n(t,z){{\, \rm d}}z$ satisfies: $$\begin{aligned}
\frac{{{\, \rm d}}}{{{\, \rm d}}t} {\mathbf{J}}(t) & =& (1-M)(n(t,0) - n(t,L)) \\
& \leq &(1-M) \frac{(M - Ln(t,L))^2}{2{\mathbf{J}}(t) - L^2 n(t,L)} \\
& \leq &(1-M) \frac{M^2 - 2M L n(t,L)}{2{\mathbf{J}}(t)}\, .\end{aligned}$$ On the other hand, from (\[interpolfini\]) again, it follows that $2{\mathbf{J}}(t) \ge L^2 n(t,L)$ and we deduce that $$\frac{{{\, \rm d}}}{{{\, \rm d}}t} {\mathbf{J}}(t) \leq \frac{M(1-M)}{2{\mathbf{J}}(t)} \left(M - \frac{4{\mathbf{J}}(t)}{L} \right)\, ,$$ and the result follows by contradiction as in Section \[sec:BU\].
The model with dynamical exchange of markers at the boundary: prevention of blow-up and asymptotic behaviour {#sec:ODE/PDE}
=================================================================================================================
In Section \[sec:BU\], we proved that finite-time blow-up occurs in the basic model (\[eq:1D\]) when the mass is super-critical ($M>1$). On the other hand, the model originally proposed in [@HBPV] is the following: $$\left\{\begin{array}{l}
\partial _t n (t,z)= \partial _{zz} n (t,z) + \mu (t) \partial _z n (t,z) \, , \quad t >0\, , \, z\in (0,+\infty) \medskip \\
\frac{{{\, \rm d}}}{{{\, \rm d}}t}\mu(t)= n(t,0)- \mu(t)\, ,
\end{array}\right.$$ together with the flux condition at the boundary: $$\partial _z n (t,0)+ \mu(t) n (t,0)=
\frac{{{\, \rm d}}}{{{\, \rm d}}t}\mu(t)\, . \label{eq:BC2dim}$$ The quantity $\mu$ represents the concentration of markers which are stuck to the boundary and thus create the attracting drift. The dynamics of $\mu$ is driven by simple attachment/detachment kinetics. The mass of molecular markers is shared between the free particles $n(t,z)$ and the particles on the boundary $\mu(t)$. The boundary condition (\[eq:BC2dim\]) guarantees conservation of the total mass: $$\int_{z>0} n(t,z){{\, \rm d}}z + \mu(t) = M\, . \label{eq:mass conservation dim}$$ From (\[eq:mass conservation dim\]), we easily deduce that finite-time blow-up cannot occur since the drift $\mu(t)$ is bounded by $M$. We denote by $m(t)$ the mass of free particles: $$m(t) = \int_{z>0} n(t,z){{\, \rm d}}z\, .$$ The conservation of mass reads $$\frac{{{\, \rm d}}}{{{\, \rm d}}t}m(t) + \frac{{{\, \rm d}}}{{{\, \rm d}}t}\mu(t) = 0\, .$$ We re-define the relative entropy as follows: $${\mathbf{H}}(t)= \int_{z>0}\frac{n(t,z)}{m(t)h (z)} \log \left(\frac{n(t,z)}{m(t)h(z)}\right)h (z){{\, \rm d}}z\, ,$$ where the asymptotic profile $h$ is given by: $$h(z) = \nu \exp\left(-\nu z\right)\, , \quad \nu = M-1\, .$$ When the mass is super-critical ($M>1$), we shall prove that the density of free markers $n(t,z)$ converges in relative entropy towards $h$, whereas the concentration of markers stuck at the boundary $\mu(t)$ converges to $\nu$. This is achieved using a suitable Lyapunov functional as in Sections \[sec:M=1\] and \[sec:self-similar\]. We introduce accordingly $${\mathbf{L}}(t) = m(t){\mathbf{H}}(t) + \frac{1}{2}\left(\mu(t) - {\nu}\right)^2 + \mu(t)\log\left(\frac{ \mu(t)}{\nu}\right) + m(t)\log m(t) \, .$$ The rest of this section is devoted to the proof of the following Lemma.\
\[eq:dissipation mu\] The Lyapunov functional ${\mathbf{L}}$ is non-increasing: $$\frac{{{\, \rm d}}}{{{\, \rm d}}t} {\mathbf{L}}(t ) = - {\mathbf{D}}(t ) \leq 0\, .$$ The dissipation reads as follows $$\begin{aligned}
{\mathbf{D}}(t) &=& \int_{z>0} n(t,z) \left(\partial _z\log n(t,z) + \frac{n(t,0)}{m(t)}\right)^2{{\, \rm d}}z + m(t)\left(\frac{n(t,0)}{m(t)} - \mu(t) \right)^2\\
& &+ \left( n(t,0) - \mu(t) \right) \log \left(\frac{n(t,0)}{\mu(t)}\right) + \mu(t) \left( \mu(t)- {\nu}\right)^2\, .\end{aligned}$$
We compute below the time evolution of the relative entropy. The computation is strongly inspired by the previous one, but takes into account the non-conservation of mass for the free-marker density and the additional dynamics of $\mu$. $$\begin{aligned}
\frac{{{\, \rm d}}}{{{\, \rm d}}t} \left( m(t){\mathbf{H}}(t)\right)
& =& \frac{{{\, \rm d}}}{{{\, \rm d}}t}\int_{z>0} n(t,z) \left( \log\left(\frac{n(t,z)}{m(t)}\right) -\log {\nu}+{\nu}z\right){{\, \rm d}}z\\
&= &\int_{z>0} \partial _t\left( n(t,z) \right)\left( \log\left(\frac{n(t,z)}{m(t)} \right) -\log {\nu}+{\nu}z\right){{\, \rm d}}z \\
& &\qquad+\int_{z>0} n(t,z)\, \partial _t \, \log \left(\frac{n(t,z)}{m(t)}\right){{\, \rm d}}z\\
&= & \int_{z>0}\partial _z\left( \partial _z n(t,z) +\mu(t) n(t,z) \right) \left( \log\left(\frac{n(t,z)}{m(t)} \right) +{\nu}z\right){{\, \rm d}}z\\
& &\qquad- \frac{{{\, \rm d}}}{{{\, \rm d}}t} m(t) \log {\nu}\, ,\end{aligned}$$ where we have used the identity $$\int_{z>0} n(t,z)\, \partial _t \, \log \left(\frac{n(t,z)}{m(t)}\right){{\, \rm d}}z= m(t) \int_{z>0} \partial_t\left(\frac{n(t,z)}{m(t)}\right){{\, \rm d}}z = 0\, .$$ We integrate by parts to get $$\begin{aligned}
\frac{{{\, \rm d}}}{{{\, \rm d}}t} \left( m(t){\mathbf{H}}(t)+m(t)\log {\nu}\right)
&=& - \int_{z>0}(\partial _z n(t,z) +\mu(t) n(t,z)) \left( \frac{\partial _z n(t,z)}{n(t,z)} +{\nu}\right){{\, \rm d}}z\\
& &\qquad - \left(\partial _z n (t,0)+\mu(t) n(t,0)\right) \log\left(\frac{n(t,0)}{m(t)} \right)\\
&=& - \int_{z>0} n(t,z) \left(\partial _z\log n(t,z)\right)^2{{\, \rm d}}z + ({\nu}+ \mu(t) ) n(t,0)\\
& &\qquad - m(t)\mu(t) {\nu}- \log\left(\frac{n(t,0)}{m(t)} \right)\frac{{{\, \rm d}}}{{{\, \rm d}}t} \mu(t)\, .\end{aligned}$$ We use again the following key identity $$\begin{aligned}
\int_{z>0} n(t,z) \left(\partial _z\log n(t,z)\right)^2{{\, \rm d}}z =
\int_{z>0} n(t,z) \left(\partial _z\log n(t,z) + \frac{n(t,0)}{m(t)}\right)^2{{\, \rm d}}z + \frac{n(t,0)^2}{m(t)}\, . \end{aligned}$$ We end up with the following expression for the dissipation of the corrected entropy, $$\begin{aligned}
\frac{{{\, \rm d}}}{{{\, \rm d}}t} \left( m(t){\mathbf{H}}(t) +m(t)\log {\nu}\right)
& = &- \int_{z>0} n(t,z) \left(\partial _z\log n(t,z) + \frac{n(t,0)}{m(t)}\right)^2{{\, \rm d}}z \\
& &\qquad - \frac{ n(t,0)^2}{m(t)} + ({\nu}+ \mu(t)) n(t,0) - m(t) \mu(t) {\nu}\\
& &\qquad - \log\left(\frac{n(t,0)}{m(t)}\right)\frac{{{\, \rm d}}}{{{\, \rm d}}t}\mu(t)\, .\end{aligned}$$ On the one hand, we have that $$\begin{aligned}
- \frac{ n(t,0)^2}{m(t)} + ({\nu}+ \mu(t)) n(t,0) - m(t) \mu(t) {\nu}& = &\left( - \frac{ n(t,0)}{m(t)} + {\nu}\right)\left( n(t,0) - m(t)\mu(t) \right) \\
& = & - m(t)\left(\frac{n(t,0)}{m(t)} - \mu(t) \right)^2 \\
& & - ( \mu(t)- {\nu})\left( n(t,0) - m(t)\mu(t) \right) \, ,\end{aligned}$$ and on the other hand, we see that $$\begin{aligned}
& &- \log\left(\frac{n(t,0)}{m(t)}\right)\frac{{{\, \rm d}}}{{{\, \rm d}}t}\mu(t)\\
& & \quad =- \log\left(\frac{n(t,0)}{\mu(t)}\right)\frac{{{\, \rm d}}}{{{\, \rm d}}t}\mu(t)
- \log\left(\frac{\mu(t)}{m(t)}\right)\frac{{{\, \rm d}}}{{{\, \rm d}}t}\mu(t) \\
& & \quad =
- \left(n(t,0) - \mu(t)\right) \log\left(\frac{n(t,0)}{\mu(t)}\right)
- \log\left( \mu(t) \right)\frac{{{\, \rm d}}}{{{\, \rm d}}t}\mu(t) - \log\left( m(t) \right)\frac{{{\, \rm d}}}{{{\, \rm d}}t}m(t)
\\
& & \quad = - \left(n(t,0) - \mu(t)\right) \log\left(\frac{n(t,0)}{\mu(t)}\right)
\\
& &\qquad
- \frac{{{\, \rm d}}}{{{\, \rm d}}t}\left( \mu(t) \log \mu(t) - \mu(t) + m(t) \log m(t) - m(t) - \nu \log \nu + M \right) \, .\end{aligned}$$ The last contribution to be reformulated is $$\begin{aligned}
- ( \mu(t)- {\nu})\left( n(t,0) - m(t)\mu(t) \right)
& = &- ( \mu(t)- {\nu})\left( \frac{{{\, \rm d}}}{{{\, \rm d}}t}\mu(t) + (1 - m(t)) \mu(t) \right)\\
& =& - ( \mu(t)- {\nu})\left( \frac{{{\, \rm d}}}{{{\, \rm d}}t}\mu(t) + (\mu(t) - {\nu})\mu(t) \right) \\
& = &- \frac{1}{2}\frac{{{\, \rm d}}}{{{\, \rm d}}t} ( \mu(t)- {\nu})^2 - \mu(t) (\mu(t) - {\nu}) ^2\, .\end{aligned}$$ Combining all these calculations, we conclude the proof of Lemma \[eq:dissipation mu\].
Following the lines of Section \[sec:M=1\] we can prove that $\mu(t)$ converges to ${\nu}$, the partial mass $m(t)$ converges to 1, and the density $n(t,\cdot)$ converges to the stationary state $h$ as $t\to \infty$. We omit the details.
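The claimed convergence can also be observed numerically; below is a minimal explicit finite-volume sketch of the marker-exchange system (the truncated domain, grid, time horizon and initial data are illustrative choices, not from the paper) in which $\mu(t)$ relaxes towards $\nu = M-1$ while the total mass (\[eq:mass conservation dim\]) is conserved:

```python
import numpy as np

# Explicit finite-volume scheme for the marker-exchange model:
#   d_t n = d_zz n + mu(t) d_z n on (0, Z),   d/dt mu = n(t,0) - mu(t),
# with exchange flux d_z n(t,0) + mu(t) n(t,0) = d/dt mu at z = 0
# and zero flux at the truncated right end.
M = 2.0                                   # super-critical total mass
nu = M - 1.0                              # expected limit of mu(t)
Z, dz = 12.0, 0.05
z = (np.arange(int(Z / dz)) + 0.5) * dz   # cell centres
mu = 0.5                                  # illustrative initial boundary mass
n = np.exp(-z)
n *= (M - mu) / (n.sum() * dz)            # free mass = M - mu initially
dt = 0.2 * dz**2                          # stable explicit time step
F = np.empty(z.size + 1)                  # fluxes at cell interfaces
for _ in range(int(60.0 / dt)):
    dmu = n[0] - mu
    F[1:-1] = np.diff(n) / dz + mu * 0.5 * (n[:-1] + n[1:])
    F[0] = dmu                            # mass exchanged with the boundary
    F[-1] = 0.0                           # zero flux at the truncated end
    n += dt * np.diff(F) / dz
    mu += dt * dmu

assert abs(mu - nu) < 0.05                # mu(t) -> nu = M - 1
assert abs(n.sum() * dz + mu - M) < 1e-6  # total mass is conserved
```

The scheme is conservative by construction: the flux `F[0]` removed from the first cell is exactly the mass gained by the boundary reservoir $\mu$ at each step.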
The higher dimensional case $N\geq 2$ {#secdimsup}
=====================================
In this section we investigate the possible behaviours of the equation (\[eq:2D model\]) in dimension $N\geq 2$ with the two possible choices (\[eq:u1\]) and (\[eq:u2\]) for the advection field.
Global existence
----------------
We give the proof of Theorem \[thdim2\]. Since many of the arguments are similar to the one-dimensional case, we only sketch the proof and focus on the propagation of $L^p$ bounds, which is the crucial [*a priori*]{} estimate when entropy methods are lacking [@JL].
Let $n$ be a solution of (\[eq:2D model\]) with $\nabla\cdot {\mathbf{u}}\geq 0$ and ${\mathbf{u}}(t,y,0)\cdot {\mathbf{e}}_z = n(t,y,0)$. We see that $$\begin{aligned}
\frac{{{\, \rm d}}}{{{\, \rm d}}t} \int_{{\mathcal{H}}} n(t,x)^p {{\, \rm d}}x
&=& -p \int_{{\mathcal{H}}} \nabla n(t,x)^{p-1} \cdot \nabla n(t,x) {{\, \rm d}}x \nonumber
\\& & +p \int_{{\mathcal{H}}} \nabla n(t,x)^{p-1} \cdot \mathbf{u}(t,x)\, n(t,x) {{\, \rm d}}x\, .
\label{relation2}\end{aligned}$$ On the one hand, we have that $$-p \int_{{\mathcal{H}}} \nabla n(t,x)^{p-1} \cdot \nabla n(t,x) {{\, \rm d}}x = -\frac{4(p-1)}{p} \int_{{\mathcal{H}}}\left| \nabla n(t,x)^{p/2}\right|^2 {{\, \rm d}}x\, ,$$ and on the other hand, $$\frac{p}{p-1} \int_{{\mathcal{H}}} \nabla n(t,x)^{p-1} \cdot \mathbf{u}(t,x)\, n(t,x) {{\, \rm d}}x
= - \int_{{\mathcal{H}}} n(t,x)^{p} \left(\nabla\cdot {\mathbf{u}}\right){{\, \rm d}}x + \int _x n(t,y,0)^{p+1} {{\, \rm d}}y \, .$$ To estimate the two opposite trends in (\[relation2\]) we use the following Sobolev trace inequality [@Biezuner] and [@Nazaret]: there exists a constant $C_r$ such that for any non-negative $f\in W^{1,r}$ we have, $$\label{GS0}
\left( \int_{y\in {\mathbb{R}}^{N-1} } f (y,0)^{r^*} {{\, \rm d}}y \right)^{1/r^*} \le C_r \left(\int_{{\mathcal{H}}}\left| \nabla f(x)\right|^r {{\, \rm d}}x \right)^{1/r}\, ,$$ where $ r^*=\frac{(N-1)r}{N-r}$. Applying the previous inequality (\[GS0\]) with $f=n^s$, we obtain the estimates: $$\begin{aligned}
& &\int_{y\in {\mathbb{R}}^{N-1} } n(t,y,0)^{r^* s} {{\, \rm d}}y
\le C_r \left(\frac{2s}{p}\right)^{r^*} \left(\int_{{\mathcal{H}}}\left| \nabla n(t,x)^{p/2} \, n(t,x)^{s-\frac{p}{2}}\right|^r {{\, \rm d}}x \right)^{r^*/r}
\\
& & \qquad \leq C_r \left(\frac{2s}{p}\right)^{r^*} \left(\int_{{\mathcal{H}}}\left| \nabla n(t,x)^{p/2} \right| ^2 {{\, \rm d}}x \right)^{r^*/2} \left(\int_{{\mathcal{H}}} \left( n(t,x)^{s-\frac{p}{2}}\right) ^{\frac{2r}{2-r}} {{\, \rm d}}x \right)^{\frac{(2-r)r^*}{2r}}\, .\end{aligned}$$ We infer that $L^N$ is the critical space for global existence. Hence we choose $$\left(s-\frac{p}{2}\right)\frac{2r}{2-r}=N\, .$$ On the other hand, we also require that $$1 = \frac{r^*}{2}= \frac{1}{2} \frac{(N-1)r}{N-r}\, .$$ A straightforward computation leads to $$r=\frac{2N}{N+1}\, , \quad s=\frac{p+1}{2}\,,\quad r^*s = p+1\, ,\quad \frac{(2-r)r^*}{2r} = \frac{1}{N}\, .$$ Therefore we deduce that $$\frac{{{\, \rm d}}}{{{\, \rm d}}t} \int_{{\mathcal{H}}} n(t,x) ^p {{\, \rm d}}x
\le -\frac{4(p-1)}{p} \left( 1- C \|n(t)\|_{L^N}\right) \int_{{\mathcal{H}}}\left| \nabla n(t,x)^{p/2} \right|^2 {{\, \rm d}}x
\, .$$ The peculiar choice $p = N$ yields global existence if $\|n(0)\|_{L^N}$ is smaller than some explicit threshold as in [@Corrias.Perthame.Zaag].
Blow-up of solutions in the first case (\[eq:u1\]) {#eq:sec:BU u1}
--------------------------------------------------
We compute the evolution of the second moment ${\mathbf{I}}(t)= \frac12 \int_{{\mathcal{H}}} |x|^2 n(t,x) {{\, \rm d}}x$ as for the classical Keller-Segel system (see [@P] and references therein): $$\frac{{{\, \rm d}}{\mathbf{I}}(t) }{{{\, \rm d}}t}
= N M
-\int_{{\mathcal{H}}} z n(t,y,0) n(t,y,z) {{\, \rm d}}x\, .
$$ Next, define $M(t,y) =\int_{z>0} n(t,y,z){{\, \rm d}}z$. Under the assumption $\partial_z n(t,x) \leq 0$ for all $x\in {\mathcal{H}}$ and $t>0$, inequality (\[interpol\]) rewrites $$M(t,y)^2 \leq 2 n(t,y,0) \int_{z>0} z n(t,y,z) {{\, \rm d}}z \, .$$ We deduce that: $$\begin{aligned}
\label{relationJ1}
\frac{{{\, \rm d}}{\mathbf{I}}(t) }{{{\, \rm d}}t} &\le& N M -\frac{1}{2}\|M(t,y)\|^2_{L^2}.\end{aligned}$$ By interpolation there exists a constant $C$ such that $$\label{eq:interp2D}
M^{\frac{N+3}{2}} \leq C {\mathbf{I}}(t)^{\frac{N-1}2}\|M(t,y)\|_{L^2}^2\, .$$ Indeed we have $$\begin{aligned}
M
&=& \int _{|y|< R} M(t,y) {{\, \rm d}}y + \int _{|y|>R} M(t,y) {{\, \rm d}}y \\
& \le & C R^{(N-1)/2} \left( \int _{R^{N-1} } M(t,y)^2 {{\, \rm d}}y\right)^{1/2} +R^{-2} \int _{R^{N-1} } |y|^2M(t,y) {{\, \rm d}}y \\
&\le & C R^{(N-1)/2} \|M(t,y)\|_{L^2} + R^{-2} {\mathbf{I}}(t)\, .\end{aligned}$$ Optimizing with respect to $R$ we get (\[eq:interp2D\]). Combining (\[eq:interp2D\]) and (\[relationJ1\]) we conclude that the solution blows up in finite time if ${\mathbf{I}}(0)\leq C M^{\frac{N+1}{N-1}}$.
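The optimization over $R$ leading to (\[eq:interp2D\]) amounts to minimizing $R \mapsto C a R^{(N-1)/2} + b R^{-2}$ with $a = \|M(t,\cdot)\|_{L^2}$ and $b = {\mathbf{I}}(t)$. A minimal numerical check (Python; the constant $C = 1$ and the sample values are illustrative, not from the paper):

```python
def phi(R, a, b, N, C=1.0):
    """Right-hand side C a R^{(N-1)/2} + b R^{-2} of the interpolation bound."""
    return C * a * R ** ((N - 1) / 2) + b * R ** (-2.0)

def R_opt(a, b, N, C=1.0):
    # stationary point: C a (N-1)/2 R^{(N-3)/2} = 2 b R^{-3},
    # i.e. R^{(N+3)/2} = 4 b / (C a (N-1))
    return (4 * b / (C * a * (N - 1))) ** (2.0 / (N + 3))

N, a, b = 3, 0.7, 2.5
Rs = R_opt(a, b, N)
grid_min = min(phi(0.01 * k, a, b, N) for k in range(1, 100000))
assert grid_min >= phi(Rs, a, b, N) - 1e-12            # Rs is the true minimizer
assert grid_min - phi(Rs, a, b, N) < 1e-4 * grid_min   # grid search agrees
# homogeneity: min_R phi scales as b^{(N-1)/(N+3)}, which after substitution
# gives M^{(N+3)/2} <= C' I(t)^{(N-1)/2} ||M||_{L^2}^2, i.e. (eq:interp2D)
ratio = phi(R_opt(a, 2 * b, N), a, 2 * b, N) / phi(Rs, a, b, N)
assert abs(ratio - 2 ** ((N - 1) / (N + 3))) < 1e-9
```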
Blow-up in the second case (\[eq:u2\])
--------------------------------------
We recall the expression of the advection field in the potential case (\[eq:u2\]): $${\mathbf{u}}(t,x) = - \int_{y'\in {\mathbb{R}}^{N-1}} \frac{(y-y',z)}{\left(|y-y'|^2 + z^2\right)^{N/2}} n(t,y',0){{\, \rm d}}y'\, .$$ Therefore we have $$\begin{aligned}
\frac{{{\, \rm d}}{\mathbf{I}}(t) }{{{\, \rm d}}t} & =& N M + \int_{\mathcal{H}}x\cdot\left( n(t,x) {\mathbf{u}}(t,x) \right){{\, \rm d}}x \\
& = &N M -\iint _{y,y'}\int _{z>0} \frac{y\cdot (y - y') + z^2}{\left(|y-y'|^2 + z^2\right)^{N/2}} n(t,y',0) n(t,y,z) {{\, \rm d}}y {{\, \rm d}}y' {{\, \rm d}}z \, .\end{aligned}$$ We use a symmetrization trick to evaluate the contribution of interaction: $$\begin{aligned}
& &\iint _{y,y'}\int _{z>0} \frac{y\cdot (y - y') }{\left(|y-y'|^2 + z^2\right)^{N/2}} n(t,y',0) n(t,y,z) {{\, \rm d}}y {{\, \rm d}}y' {{\, \rm d}}z =\\
& &
\frac{1}{2}\iint _{y,y'}\int _{z>0} \frac{y - y' }{\left(|y-y'|^2 + z^2\right)^{N/2}}\cdot \left(n(t,y',0) n(t,y,z) y - n(t,y,0) n(t,y',z) y'\right) {{\, \rm d}}y {{\, \rm d}}y' {{\, \rm d}}z \, .\end{aligned}$$
Let $f$ be a smooth positive function. Assume that we have both $\partial_z f(x) \leq 0$ and $$\forall z>0\, , \, \forall y \in {\mathbb{R}}^{N-1}\, , \, \forall h\in {\mathbb{R}}^{N-1} \quad (h\cdot y)\left( h\cdot \partial_{z}\nabla_y\log f(x) \right)\geq 0 \, . \label{eq:cond2}$$ Then for all $y,y' \in {\mathbb{R}}^{N-1}$ and for all $z>0$, the following inequality holds true: $$(y-y')\cdot\left( f( y',0) f(y,z) y - f( y,0) f( y',z)y'\right) \geq |y-y'|^2 f(y,z)f(y',z)\, . \label{eq:cond1}$$
Inequality (\[eq:cond1\]) rewrites as follows: $$(y - y')\cdot \left( \frac{f(y,z)}{f(y ,0)}\left(1 - \frac{f(y',z)}{f(y',0)}\right) y - \frac{f(y',z)}{f(y',0)}\left(1 - \frac{f(y,z)}{f(y,0)}\right) y' \right) \geq 0\, .$$ Since $\partial_z f(x) \leq 0$ we have both $f(y ,z)\leq f(y ,0)$ and $f(y' ,z)\leq f(y' ,0)$ for all $y,y',z$. Hence we are reduced to prove that the vector field $$\frac{ \frac{f(y ,z)}{f(y ,0)}}{1 - \frac{f(y ,z)}{f(y ,0)}} y \,,$$ is monotonic with respect to the $y$ variable. Computing the derivative with respect to $y$, it is straightforward to check that it is monotonic if (\[eq:cond2\]) is satisfied: $$\begin{aligned}
\nabla_y \left(\frac{ \frac{f(y ,z)}{f(y ,0)}}{1 - \frac{f(y ,z)}{f(y ,0)}} y\right)& = \left( \frac{ \frac{f(y ,z)}{f(y ,0)}}{\left(1 - \frac{f(y ,z)}{f(y ,0)}\right)^2}\right)\left( \frac{\nabla_y f(y,z)}{f(y,z)} - \frac{\nabla_y f(y,0)}{f(y,0)} \right) \otimes y \\
&\quad \quad + \left(\frac{f(y,z)}{f(y,0) - f(y,z)}\right) \mathrm{Id} \\
& \geq \left( \frac{ \frac{f(y ,z)}{f(y ,0)}}{\left(1 - \frac{f(y ,z)}{f(y ,0)}\right)^2}\right)\left( \int_{z' = 0}^z \partial_z \nabla _y\log f(y,z'){{\, \rm d}}z' \right) \otimes y \geq 0\, ,\end{aligned}$$ in the following matrix sense: $A^T + A \geq 0$.
Under the hypotheses of Theorem \[th2dim2\] we assume that conditions (\[eq:cond1\]) – (\[eq:cond2\]) are fulfilled for every time of existence. We deduce that $$\begin{aligned}
\frac{{{\, \rm d}}{\mathbf{I}}(t) }{{{\, \rm d}}t}
&\leq & N M - \frac{1}{2} \iint _{y,y'}\int _{z>0} \frac{|y - y'|^2 + 2 z^2}{\left(|y-y'|^2 + z^2\right)^{N/2}} n(t,y',z) n(t,y,z) {{\, \rm d}}y {{\, \rm d}}y' {{\, \rm d}}z \\
& \leq & N M - \frac{1}{2} \iint _{y,y'}\int _{z>0} \frac{1}{\left(|y-y'|^2 + z^2\right)^{N/2-1}} n(t,y',z) n(t,y,z) {{\, \rm d}}y {{\, \rm d}}y' {{\, \rm d}}z \\
\end{aligned}$$ Since $|y - y'|^2 + z^2 \leq 2|y|^2 + 2|y'|^2 + z^2$, and $n$ is non-negative, we have $$\begin{aligned}
\frac{{{\, \rm d}}{\mathbf{I}}(t) }{{{\, \rm d}}t} & \leq & N M - \frac{1}{2} \iiint _{\left\{|y|<\frac{ R}{3}, |y'|<\frac{ R}{3}, z < \frac{2R}{3}\right\}} R^{2-N} n(t,y',z) n(t,y,z) {{\, \rm d}}y {{\, \rm d}}y' {{\, \rm d}}z \\
& \leq & N M - \frac{R^{2-N}}{2} \int_{0<z<\frac{2R}{3}} \left( \int_{|y|<\frac{ R}{3}} n(t,y,z){{\, \rm d}}y \right)^2{{\, \rm d}}z \\
& \leq & N M - \frac{3 R^{1-N}}{4} \left( \int_{0<z<\frac{2R}{3}} \int_{|y|<\frac{ R}{3}} n(t,y,z){{\, \rm d}}y{{\, \rm d}}z \right)^2\, , \end{aligned}$$ where we have used the Cauchy-Schwarz inequality. We have therefore $$\begin{aligned}
\frac{{{\, \rm d}}{\mathbf{I}}(t) }{{{\, \rm d}}t} & \leq & N M - \frac{3 R^{1-N}}{4} \left( M - \iint _{\left\{z>\frac{2R}{3}\;\mbox {\footnotesize or}\;|y|>\frac{ R}{3}\right\}} n(t,y,z){{\, \rm d}}y{{\, \rm d}}z \right)^2\\
& \leq & N M - \frac{R^{1-N}}{2} M^2 + C R^{-N-3} {\mathbf{I}}(t)^{2}\, ,\end{aligned}$$ because $R^2 < 9|x|^2$ on $\left\{z>\frac{2R}{3}\;\mbox {\footnotesize or}\;|y|>\frac R3\right\}$. Optimizing with respect to $R$, we conclude that the solution blows up in finite time if ${\mathbf{I}}(0)\leq C M^{\frac{N+1}{N-1}}$, as in Section \[eq:sec:BU u1\].
Conclusion
==========
Here we have demonstrated that a class of models following [@HBPV] exhibits pattern formation (either blow-up or convergence towards a non-homogeneous steady state) under suitable conditions. However, we have not answered the main question: do these models describe cell polarisation? Although the one-dimensional case is clear (spontaneous polarisation occurs if the total concentration of markers is large enough), the higher-dimensional situation is less so. The first model (\[eq:u1\]) clearly does not exhibit cell polarisation, since integrating equation (\[eq:2D model\]) with respect to $z$ yields, for $\nu(t,y) = \int_{z>0} n(t,y,z) {{\, \rm d}}z$: $$\partial_t \nu(t,y) = \partial_{yy} \nu(t,y)\, .$$ Thus there is no transversal instability, which is the main feature of spontaneous cell polarisation leading to symmetry breaking. On the other hand, the second model (\[eq:u2\]) is expected to develop symmetry breaking, as the tangential component of the advection field on the boundary is given by the Hilbert transform of the trace $n(t,y,0)$, which is known to enhance finite-time aggregation, at least in one space dimension [@CPS]. However, there is no clear mathematical distinction between the two models, as continuation after the blow-up time appears to be very delicate in a similar context [@V1; @V2; @DS]. It would be very interesting to make such a distinction beyond the linear analysis performed in [@HBPV]. We leave this as an open question.
[*Acknowledgement: The authors are very grateful to M. Piel, J. Van Schaftingen and J.J.L. Velázquez for stimulating discussion. VC and NM warmly thank the Centre de Recerca Matemàtica (Barcelona) for the invitation during the special semester “Mathematical Biology: Modelling and Differential Equations” (2009).*]{}
[10]{}
, [*Sobolev Spaces*]{}, Academic Press, New York, 1975.
, [*Molecular Biology of the cell*]{}, Garland Science, New York, 2002, 4th ed.
, [*Un théorème de compacité*]{}, C. R. Acad. Sci. Paris 256 (1963), pp. 5042–5044.
, [*Best constants in Sobolev trace inequalities*]{}, Nonlinear Analysis **54**, pp 575-589, 2003.
, [*Existence and nonexistence of solutions for a model of gravitational interaction of particle [III]{}*]{}, Colloq. Math. 68 (1995), pp. 229–239.
, [*Global and exploding solutions for nonlocal quadratic evolution problems*]{}, SIAM J. Appl. Math. 59 (1998), pp. 845–869.
, [*Two-dimensional [K]{}eller-[S]{}egel model: optimal critical mass and qualitative properties of the solutions*]{}, Electron. J. Differential Equations, (2006), pp. No. 44, 32 pp. (electronic).
, [*Asymmetric redistribution of GABA receptors during GABA gradient sensing by nerve growth cones analyzed by single quantum dot imaging*]{}, Proc Natl Acad Sci U S A, 104, pp. 11251 (2007).
, [*Refined asymptotics for the subcritical Keller-Segel system and related functional inequalities*]{}, preprint (2010).
, [*Blow-up, concentration phenomenon and global existence for the Keller-Segel model in high dimension*]{}, preprint (2010).
, [*A one-dimensional [K]{}eller-[S]{}egel equation with a drift issued from the boundary*]{}, C. R. Math. Acad. Sci. Paris, 348 (2010), pp. 629–634.
, [*Modified [K]{}eller-[S]{}egel system and critical mass for the log interaction kernel*]{}, in Nonlinear partial differential equations and related analysis, vol. 429 of Contemp. Math., Amer. Math. Soc., Providence, RI, 2007.
, [*Finite time blow-up for a one-dimensional quasilinear parabolic-parabolic chemotaxis system*]{}, Ann. Inst. H. Poincaré Anal. Non Linéaire 27, no. 1, 437–446, 2010.
, [*Global solutions of some chemotaxis and angiogenesis systems in high space dimensions*]{}, Milano J. of Math. 72 (2004), pp. 1–29.
, [*Information-type measures of difference of probability distributions and indirect observations*]{}, Studia Sci. Math. Hungar., 2 (1967), pp. 299–318.
, [*Optimal critical mass in the two-dimensional [K]{}eller-[S]{}egel model in [$ R\sp 2$]{}*]{}, C. R. Math. Acad. Sci. Paris, 339 (2004), pp. 611–616.
, [*The two-dimensional Keller-Segel model after blow-up*]{}, Discrete and Continuous Dynamical Systems, 25 (2009), pp. 109–121.
, [*Rebuilding cytoskeleton roads: active transport induced polarisation of cells*]{}, Phys. Rev. E 80, 040903 (2009).
, [*Singularity formation in the one-dimensional supercooled Stefan problem*]{}. European J. Appl. Math. 7 (1996), no. 2, pp. 119–150.
, [*Chemotactic collapse for the Keller-Segel model*]{}. J. Math. Biol. 35 (1996), no. 2, pp. 177–194.
, [*Navigating through models of chemotaxis*]{}, Curr. Opin. Cell Biol. 20, 35 (2008).
, [*Opposing roles for actin in Cdc42p polarisation*]{}, Mol. Biol. Cell, 16, pp. 1296 (2005).
, [*On explosions of solutions to a system of partial differential equations modelling chemotaxis*]{}, Trans. Amer. Math. Soc., 329 (1992), pp. 819–824.
, [*On the convergence of discrimination information*]{}. IEEE Trans. Information Theory, IT-14, (1968), pp. 765–766.
, [*Directional sensing in eukaryotic chemotaxis: A balanced inactivation model*]{}, Proc Natl Acad Sci U S A 103, 9761 (2006).
, [*Quelques méthodes de résolution des problèmes aux limites non linéaires.*]{} Dunod; Gauthier-Villars, Paris 1969
, [*Blow-up of radially symmetric solutions to a chemotaxis system*]{}, Adv. Math. Sci., 5 (1995), pp. 581–601.
, [*Best constant in Sobolev trace inequalities on the half-space*]{}, Nonlinear Anal. 65 (2006), pp. 1977–1985.
, PLoS Comput Biol, 3, pp. e36 (2007).
, [*Compact sets in the space $L^p(0, T\, ;\, B)$*]{}, Ann. Mat. Pura Appl. (4), 146 (1987), pp. 65–96.
, [*Point dynamics in a singular limit of the [K]{}eller-[S]{}egel model. [I]{}. [M]{}otion of the concentration regions*]{}, SIAM J. Appl. Math., 64 (2004), pp. 1198–1223 (electronic).
, [*Point dynamics in a singular limit of the [K]{}eller-[S]{}egel model. [II]{}. [F]{}ormation of the concentration regions*]{}, SIAM J. Appl. Math., 64 (2004), pp. 1224–1248 (electronic).
, [*Spontaneous Cell Polarization Through Actomyosin-Based Delivery of the Cdc42 GTPase*]{}, Science 299, pp. 1231 (2003).
, [*Robust cell polarity is a dynamic state established by coupling transport and GTPase signaling*]{}, J. Cell Biol., 166, pp. 889 (2004).
[^1]: Unité de Mathématiques Pures et Appliquées, CNRS UMR 5669 & équipe-projet INRIA NUMED, École Normale Supérieure de Lyon, 46 allée d’Italie, F-69364 Lyon, France. ([vincent.calvez@umpa.ens-lyon.fr]{})
[^2]: Laboratoire de la matière condensée, CNRS UMR 7600, Université Pierre et Marie Curie, 4 Place Jussieu, 75255 Paris Cedex 05 France ([rhoda@lptmc.jussieu.fr]{})
[^3]: MAP5, CNRS UMR 8145, Université Paris Descartes, 45 rue des Saints Pères 75006 Paris, France. ([nicolas.meunier@parisdescartes.fr]{})
[^4]: Laboratoire de la matière condensée, CNRS UMR 7600, Université Pierre et Marie Curie, 4 Place Jussieu, 75255 Paris Cedex 05 France ([voiturie@lptmc.jussieu.fr]{})
---
abstract: 'The widely used density matrix renormalization group (DMRG) method often fails to converge in systems with multiple length scales, such as lattice discretizations of continuum models and dilute or weakly doped lattice models. The local optimization employed by DMRG to optimize the wave function is ineffective in updating large-scale features. Here we present a multigrid algorithm that solves these convergence problems by optimizing the wave function at different spatial resolutions. We demonstrate its effectiveness by simulating bosons in continuous space, and study non-adiabaticity when ramping up the amplitude of an optical lattice. The algorithm can be generalized to tensor network methods, and be combined with the contractor renormalization group (CORE) method to study dilute and weakly doped lattice models.'
author:
- 'M. Dolfi'
- 'B. Bauer'
- 'M. Troyer'
- 'Z. Ristivojevic'
bibliography:
- 'bibliography.bib'
title: Multigrid Algorithms for Tensor Network States
---
The optimization of variational wave functions is generally a very difficult problem. In the specific case of matrix-product states (MPS) [@ostlund1995] the density-matrix renormalization group algorithm [@white1992; @white1992-1; @schollwoeck2005; @schollwock2011] often reliably and efficiently optimizes these wave functions to find a good approximation of the ground state. While most efficient in one dimension, it can be applied to medium-sized two-dimensional systems [@stoudenmire2012], and has been generalized to calculate time-dependent [@daley2004; @vidal2003; @white2004] and finite temperature properties [@verstraete2004-mpdo; @feiguin2005; @white2009].
In systems with multiple length scales, however, the DMRG algorithm often fails to converge, as the local optimizations that are at the core of DMRG are ineffective in optimizing large-scale features of the wave function. Especially in dilute systems where the inter-particle distance is large compared to the lattice spacing the convergence of the density profile can be very slow. Systems with multiple length scales suffering from this problem arise from lattice discretizations of continuum models [^1], or in weakly doped lattice models where the hole density exhibits the same convergence problems. The first situation was recently discussed in Ref. [@stoudenmire2011].
Similar convergence problems are also known in other fields, [*e.g.*]{} when solving partial differential equations [@stuben2001], lattice field theories [@goodman1986] or electronic structure problems [@heiskanen2001], and have been overcome there by [*multigrid*]{} approaches. Multigrid methods use a hierarchy of discretizations, as sketched in Fig. \[fig:lattice-fine-graining\]. Starting from the target problem on the finest grid (or a lattice model), the system is mapped to a hierarchy of coarser grids. An approximate solution of the smallest problem on the coarsest grid is then used to initialize the optimization of the problem on the next finer grid, and this process is iterated down to the finest grid. This method can substantially speed up a calculation, since the large-scale features converge quickly on the coarsest grid, and the subsequent calculations on finer grids only need to optimize local features at the scale of the respective grid spacing.
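The grid hierarchy itself can be illustrated independently of the MPS machinery. A minimal sketch (Python) of restriction and prolongation for a one-dimensional cell profile, merging $n=2$ cells by averaging; the function names are ours, not from any DMRG code:

```python
def restrict(f):
    """Map a fine-grid cell profile to a coarser grid by merging n=2 cells (averages)."""
    return [0.5 * (f[2 * i] + f[2 * i + 1]) for i in range(len(f) // 2)]

def prolong(f):
    """Refine each coarse cell into two fine cells (piecewise-constant injection);
    a real multigrid solver would then re-optimize the short-scale features."""
    out = []
    for v in f:
        out.extend([v, v])
    return out

fine = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
coarse = restrict(fine)
assert coarse == [0.5, 2.5, 4.5, 6.5]
assert sum(prolong(coarse)) == sum(fine)   # total "mass" is preserved
```

The large-scale shape of the profile is already captured on the coarse grid; only the fine-scale structure remains to be optimized after prolongation, which is the mechanism the Letter exploits.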
In this Letter we develop a multigrid DMRG (MG-DMRG) algorithm to solve the above-mentioned convergence problems in DMRG calculations. As a first application and demonstration of the effectiveness of the algorithm we present results for bosons in continuous space where MG-DMRG enables the study of non-adiabaticities when slowly ramping up the amplitude of an optical lattice.
We start the description of the MG-DMRG algorithm by reviewing MPS wave functions on a chain of $L$ sites: $$\label{eq:mps}
{\left| \psi \right>} = \sum_{{\ensuremath{\mbox{\boldmath$ \sigma $}}}} A_1^{\sigma_1} A_2^{\sigma_2} \cdots A_L^{\sigma_L} {\left| {\ensuremath{\mbox{\boldmath$ \sigma $}}} \right>}$$ used in DMRG. They are characterized by a polynomial number $\propto LM^2$ of variational parameters, the $M\times M$ matrices $A_i^{\sigma_i}$. In one dimension a good approximation for low-energy states can be obtained by MPS wave functions with a fixed or at most polynomially growing $M$ [@verstraete2006-2; @schuch2008-1].
![Density profiles of a continuum system of bosons in an optical lattice consisting of $L=32$ unit cells. The top and middle panels show the non-converged results obtained for $N=32$ grid points per unit cell after $12$ sweeps of the DMRG algorithm with two different initial states: initial state 1 is a random state, initial state 2 is obtained from an infinite-size growing procedure as implemented in the ALPS DMRG code [@bauer2011-alps]. The bottom panel shows the multigrid result. []{data-label="fig:density"}](dens_sweeps3){width="\linewidth"}
While optimizing MPS wave functions to obtain a variational estimate for the ground state is a hard non-linear problem, the DMRG algorithm is very effective in many cases. It iteratively optimizes one or two of the matrices $A_i^{\sigma_i}$ while keeping all other matrices fixed, and sweeps back and forth along a (quasi) one-dimensional system until convergence is achieved. For a recent review and implementation see Ref. [@schollwock2011]. It can, however, get trapped in local minima of this non-linear optimization problem, or become very slow especially for the dilute systems considered here. As an example see the badly converged density profiles obtained by standard DMRG approaches in Fig. \[fig:density\].
In our implementation of the MG-DMRG algorithm, we start by constructing the target lattice model and a hierarchy of models on coarser grids. Starting from the coarsest level we optimize the wave function and interpolate it to the next finer level, repeating this procedure until we reach the target system. Many generalizations are possible, for example iterating the procedure by going back to coarser levels, or starting from the finest instead of coarsest level.
The [*restriction*]{} operation maps a system to a coarser grid, merging $n$ (typically $n=2$) sites into one. The model, given by the Hamiltonian $H$ and defined in the local basis $\lbrace \sigma \rbrace$, is mapped to a restricted model $\widetilde H$ in a truncated local basis $\lbrace \widetilde \sigma \rbrace$ for the $n$ merged sites. The truncation, denoted by an isometry $T^{\widetilde \sigma}_{\sigma_1 \dots \sigma_n}$, is straightforward for continuum models and an approach for lattice models will be discussed below. Any error due to the truncation will be corrected when returning to finer scales, as long as we stay in the same phase. As illustrated in Fig. \[fig:coarse-graining\] the restriction transforms a matrix product state $A_1$, …, $A_n$ into $$\label{eq:coarse-graining}
\widetilde A^{\widetilde \sigma}_{\alpha_1\, \alpha_2} = A^{\sigma_1}_{\alpha_1\, \beta_1}\; A^{\sigma_2}_{\beta_1\, \beta_2} \cdots \; A^{\sigma_n}_{\beta_{n-1}\, \alpha_2} \; T^{\widetilde \sigma}_{\sigma_1 \dots \sigma_n}.$$
[*Prolongation*]{} is the inverse of restriction and maps a solution from a coarse grid to a finer one. The isometry is inverted and $T^{-1}$ replaces one index by $n$ new indices. From this tensor we can recover a (non-unique) MPS representation by repeatedly applying singular value decompositions, splitting it into matrices $A_1$, …, $A_n$ (see Fig. \[fig:fine-graining\]). It has turned out to be useful to perform a standard DMRG update on the newly obtained matrices immediately after prolongation, while keeping the rest of the system on the coarse-grained lattice [^2].
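The restriction (\[eq:coarse-graining\]) and its SVD-based inversion can be sketched with dense tensors. A minimal NumPy illustration for $n=2$ with a trivial isometry $T=\mathrm{Id}$ (no truncation of the merged local basis), so that prolongation exactly recovers the merged two-site block; the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, M = 2, 5                                # local and bond dimensions (illustrative)
A1 = rng.standard_normal((d, M, M))        # A1[s1, a, b]
A2 = rng.standard_normal((d, M, M))        # A2[s2, b, c]

# Restriction: contract the shared bond and fuse (s1, s2) into one index of size d^2.
theta = np.einsum('iab,jbc->ijac', A1, A2).reshape(d * d, M, M)
A_coarse = theta                           # T = Id here; the text truncates via T

# Prolongation: split the coarse tensor back into two site tensors via SVD.
mat = A_coarse.reshape(d, d, M, M).transpose(0, 2, 1, 3).reshape(d * M, d * M)
U, S, Vh = np.linalg.svd(mat, full_matrices=False)
B1 = U.reshape(d, M, d * M)                            # B1[s1, a, k]
B2 = (S[:, None] * Vh).reshape(d * M, d, M).transpose(1, 0, 2)   # B2[s2, k, c]

# The contraction of the split tensors reproduces the original two-site block.
theta2 = np.einsum('iab,jbc->ijac', B1, B2).reshape(d * d, M, M)
assert np.allclose(theta2, theta)
```

With a nontrivial truncation $T$ the recovery is only approximate, and the subsequent DMRG update mentioned above corrects the fine-scale features.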
As a first application we apply MG-DMRG to bosons in a one-dimensional continuum system with an external optical lattice potential, $V(x) = V_0 \cos^2(k x)$, with $k=\pi/a$ and $a$ the size of a unit cell. The continuous-space Hamiltonian describing spinless bosons interacting through a $\delta$-potential in a system of $L$ unit cells and length $L a$ is $$\begin{aligned}
\label{eq:cont-hamil}
\hat H =& \int_0^{La}{\mathrm{d}x\,} \hat\psi^\dagger(x) \left[ -\frac{\hbar^2}{2m}{\frac{d^2 }{d x^2}} + V(x) \right] \hat\psi(x) \notag\\
&+ \frac{g}{2}\int_0^{La}{\mathrm{d}x\,} \hat\psi^\dagger(x) \hat\psi^\dagger(x) \hat\psi(x) \hat\psi(x),\end{aligned}$$ where a boson is created at position $x$ with the field operator $\hat\psi^\dagger(x)$, satisfying the usual commutation relations. We express energies in units of the recoil energy $E_r = \frac{\hbar^2 k^2}{2m}$. The interaction $g$ is conveniently parametrized by the dimensionless coupling $\gamma = mg / \hbar^2n$, where $n$ is the density.
In deep optical lattices the low-energy sector of the model can be mapped to an effective single band Hubbard model with one site per unit cell. We are, however, interested also in weak optical lattices and thus discretize the continuum model on a grid with $N$ points per unit cell and spacing $\Delta x = a / N$. To discretize the Hamiltonian on this lattice we replace the Laplacian by a second order finite difference approximation and replace field operators by lattice annihilation and creation operators $\hat\psi^\dagger(x=(i+1/2)\Delta x) = \frac{1}{\sqrt{\Delta x}}\hat c^\dagger_{i}$. We end up with a Hubbard-like model in a spatially varying potential: $$\begin{aligned}
\label{eq:hubbard-hamil}
\hat H(\Delta x) =& -t(\Delta x) \sum_{i} \left[\hat c^\dagger_{i} \hat c_{i+1} + \text{h.c.} \right] + \sum_{i} \mu_{i}(\Delta x) \hat n_{i} \notag\\
&+ \frac{U(\Delta x)}{2} \sum_i \hat n_{i} (\hat n_{i} - 1)\end{aligned}$$ with $t(\Delta x) = (\hbar^2 / 2m) / \Delta x^2$, $U(\Delta x) = g / \Delta x$, and $\mu_i(\Delta x) = V((i+1/2)\Delta x) + 2 (\hbar^2 / 2m) / \Delta x^2$. A similar lattice model can be formulated for fermions.
With this definition of the Hamiltonian for arbitrary $\Delta x$ the implementation of MG-DMRG is straightforward. The matrix elements of the isometry $T$ are $$T^{\widetilde \sigma}_{\sigma_1 \dots \sigma_n} = \frac{ \delta(\widetilde{\sigma}, \sigma_1+\dots+\sigma_n) }{\sqrt{ \sum_{\sigma_1', \dots} \delta(\widetilde{\sigma},\sigma_1' + \dots + \sigma_n') }},$$ where $\sigma_i$, $\sigma_i'$ and $\widetilde \sigma$ are particle-number eigenstates and we truncate the maximum occupation of a site at $N_\text{max}$, i.e. $\sigma_i,\widetilde{\sigma} \in \lbrace 0, \dots, N_\text{max} \rbrace$. Due to particle-number conservation, $T$ is a $N_\text{max} \times N_\text{max}^n$ block-diagonal matrix. Note that we start from a coarse-grained lattice and perform only prolongations.
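The isometry $T$ above can be constructed directly. A minimal sketch (Python; `isometry` is our own helper name) checking that its rows are orthonormal, i.e. $T T^\dagger = \mathrm{Id}$ on the truncated basis:

```python
from itertools import product
from math import sqrt

def isometry(n=2, n_max=2):
    """Matrix elements T[(sigma~, (sigma_1,...,sigma_n))]: a normalized delta on
    sigma~ = sigma_1 + ... + sigma_n, with all occupations truncated at n_max."""
    T = {}
    for st in range(n_max + 1):
        cols = [s for s in product(range(n_max + 1), repeat=n) if sum(s) == st]
        for s in cols:
            T[(st, s)] = 1.0 / sqrt(len(cols))
    return T

T = isometry(n=2, n_max=2)
# Rows have unit norm (and are orthogonal, since their supports are disjoint):
for st in range(3):
    row = [v for (r, s), v in T.items() if r == st]
    assert abs(sum(v * v for v in row) - 1.0) < 1e-12
# e.g. sigma~ = 1 comes from (0,1) and (1,0) with equal weight 1/sqrt(2)
assert abs(T[(1, (0, 1))] - 1 / sqrt(2)) < 1e-12
```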
As a benchmark for the multigrid algorithm we consider an optical lattice with $V_0/E_r = 6$ and $1/\gamma=0.1$ corresponding to the insulating phase [@buchler2003] at unit filling. Our MG-DMRG simulations were performed with up to $N = 128$ lattice sites per unit cell ($\Delta x = 0.0078125$) with $N_\text{max}=2$ keeping $M=200$ states and using $12$ sweeps of single-site updates at each level. We also perform standard DMRG simulations starting from either a random initial state, a state obtained from an infinite-size growing procedure, or a few steps of imaginary time evolution (not shown). The infinite-size growing procedure is commonly used to obtain good initial states for one-dimensional systems and has been proven to be very efficient in most cases. We use the implementation of the ALPS DMRG code [@bauer2011-alps], which performs the growing procedure on a state with very small bond dimension and increases the bond dimension linearly with the number of sweeps thereafter.
Fig. \[fig:density\] shows the density profile obtained with the three approaches. Clearly, the standard DMRG approaches are trapped in configurations with a globally non-homogeneous density distribution and further sweeps are not effective in redistributing the particles. The multigrid method, on the other hand, achieves a symmetric distribution, since it performs the optimization of the large-scale features on the initial coarse mesh with just $N=2$ sites per unit cell, where convergence is very fast. Subsequent calculations on finer lattices are initialized with the prolongated solution of the coarser lattice. This is already close to the ground state, and only the local fine-scale features of the wave function need to be optimized.
![Comparison of the energies in a bosonic optical lattice ($V_0/E_r = 6$, $1/\gamma = 0.1$) with $L=32$ unit cells discretized with increasing discretization $N=16,32,64,128$ obtained with different strategies: DMRG with initial state 1 optimizing an initial random state, DMRG with initial state 2 initializing the system with an infinite size procedure and linearly increasing the number of states (results obtained with the ALPS DMRG code [@bauer2011-alps]), MG-DMRG, and MG-DMRG combined with local optimization in the prolongations.[]{data-label="fig:energy"}](en_N_final){width="\linewidth"}
The better convergence of the MG-DMRG is also reflected in the energies shown in Fig. \[fig:energy\]. While for modest discretizations standard approaches yield energies comparable to MG-DMRG, they encounter severe convergence problems for smaller values of $\Delta x$, where the separation between the multiple scales of the dilute system becomes more and more important. The most reliable method is MG-DMRG combined with optimization in the prolongation.
![Heating due to finite ramping speeds in a system of length $L=16$ for $1/\gamma = 0.08$ with $M=400$. Different lines show the energy difference to the ground state while ramping up the lattice from $V_0/E_r=0$ to $4$. (inset) Ramping functions used in the time evolution: linear, $V_0(t)/E_r = 4 t / t_V$ (dashed); exponential, $V_0(t)/E_r = 4 [\exp(4t/t_V)-1] / [\exp(4)-1]$ (dotted); and $s$-like, $V_0(t)/E_r =4 \left[ 3(t / t_V)^2 - 2(t/t_V)^3 \right]$ (solid).[]{data-label="fig:optical_ramp"}](te_EV0_2){width="\linewidth"}
MG-DMRG opens up interesting new applications for DMRG that have not been accessible before. As an example we combine MG-DMRG with time evolution [@daley2004; @vidal2003; @white2004] to study heating caused by non-adiabaticity when ramping up the amplitude of an optical lattice. We start from the ground state of a homogeneous system of length $L=16$ and $N=16$ grid points per unit cell calculated by MG-DMRG. We evolve it in time using a fourth-order Trotter decomposition with $\Delta t = 0.01 \hbar/E_r$. Non-adiabaticities due to ramping at a finite speed cause heating, and we plot the energy difference to the ground state in Fig. \[fig:optical\_ramp\] for three different ramp profiles and several total ramping times. For the calculation of the ground state energies in weak optical lattices MG-DMRG was used. We observe that as the ramp speed decreases, differences in ramp shape are less important than the total ramping time, indicating that the exact shape of the ramp profile plays a minor role in experiments and that experimentalists should focus on determining optimal ramping times.
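The three ramp profiles defined in the caption of Fig. \[fig:optical\_ramp\] can be written down and sanity-checked; a small sketch (Python; the function names are ours, and all ramps interpolate between $V_0/E_r = 0$ and $4$):

```python
import math

def ramp_linear(t, tV, V=4.0):
    return V * t / tV

def ramp_exponential(t, tV, V=4.0):
    return V * (math.exp(4 * t / tV) - 1) / (math.exp(4) - 1)

def ramp_s_like(t, tV, V=4.0):
    x = t / tV
    return V * (3 * x**2 - 2 * x**3)

tV = 10.0
for ramp in (ramp_linear, ramp_exponential, ramp_s_like):
    assert abs(ramp(0.0, tV)) < 1e-12        # all ramps start at V0 = 0
    assert abs(ramp(tV, tV) - 4.0) < 1e-12   # and end at V0/Er = 4
# the s-like ramp switches on and off smoothly (near-zero slope at both ends)
eps = 1e-6
assert abs(ramp_s_like(eps, tV) / eps) < 1e-4
assert abs((ramp_s_like(tV, tV) - ramp_s_like(tV - eps, tV)) / eps) < 1e-4
```

The shared endpoints make the comparison in the figure meaningful: the ramps differ only in how the amplitude is distributed over the total ramping time $t_V$.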
In DMRG simulations of weakly doped $t$-$J$ or Hubbard ladder models [@Dagotto92; @Troyer96; @Noack97] the hole density shows similar convergence problems as seen above for dilute particle systems. In particular, it has been observed that for six holes in more than $2\times 64$ sites the standard DMRG algorithm fails to distribute the three bound hole pairs evenly over the ladder [@Siller01], and MG-DMRG can be of use here. The restriction step of mapping the model to a coarser lattice is, however, not as straightforward as in continuum models. We propose to use the contractor renormalization (CORE) method [@core94] to find a good approximation of the model in the reduced Hilbert space of the coarser models, and to iterate this procedure in further restriction steps. For the specific case of doped ladder models, the first step maps 2-site rungs or 4-site plaquettes to a hardcore boson model for the hole pairs, or an extended plaquette model containing hole pairs, magnons, and holes [@Altman02]. Further restriction steps map to simpler bosonic models for the hole pairs, as illustrated in Fig. \[fig:ladder-coarse-graining\]. After prolongation back to the full lattice model, the ground state wave function can be further improved by repeating the multigrid scheme. Now one can use knowledge of the approximate ground state to perform the restrictions of the basis, instead of using CORE. Details of this method and results of this approach will be published elsewhere.
We point out that MG-DMRG is a fundamentally different approach from the one taken by tree-tensor networks [@shi2006] or the multi-scale entanglement renormalization ansatz (MERA) [@vidal2007-1; @vidal2008]. Those approaches propose a new class of wave functions that describes the system at several levels of coarse-graining and optimize all levels simultaneously, which can still suffer from convergence problems at fine scales. Our approach instead relies on standard matrix-product states, which can be optimized and evaluated more easily and much faster, but uses a hierarchical coarse graining to achieve a faster and more reliable optimization than standard DMRG.
Our algorithm can be easily combined with other optimization schemes for DMRG, such as using iTEBD [@vidal2003; @vidal2004] to directly simulate the thermodynamic limit. One can also easily generalize the restriction and prolongation to tensors of higher rank, in order to apply the multigrid scheme to other tensor network states, e.g. MERA [@vidal2007-1; @vidal2008], projected entangled pair states (PEPS) [@verstraete2004] and infinite PEPS (iPEPS) [@jordan2008].
This project was supported by a grant of ETH Zurich. The calculations were performed with the MAQUIS-DMRG code, developed with support of the Swiss High Performance and High Productivity Computing (HP2C) initiative, and based on the ALPS libraries [@bauer2011-alps]. ZR acknowledges the ANR Grant No. 09-BLAN-0097-01/2.
[^1]: While a recently developed approach allows the simulation of continuum systems without introducing a lattice [@verstraete2010], the standard DMRG method on a fine mesh is currently more robust and accurate [@dolfi2011].
[^2]: This requires a representation of the Hamiltonian on a non-uniform grid.
|
---
abstract: 'X-ray observations of the neutron star in the Cas A supernova remnant over the past decade suggest the star is undergoing a rapid drop in surface temperature of $\approx$ 2-5.5%. One explanation suggests the rapid cooling is triggered by the onset of neutron superfluidity in the core of the star, causing enhanced neutrino emission from neutron Cooper pair breaking and formation (PBF). Using consistent neutron star crust and core equations of state (EOSs) and compositions, we explore the sensitivity of this interpretation to the density dependence of the symmetry energy $L$ of the EOS used, and to the presence of enhanced neutrino cooling in the bubble phases of crustal “nuclear pasta”. Modeling cooling over a conservative range of neutron star masses and envelope compositions, we find $L\lesssim70$ MeV, competitive with terrestrial experimental constraints and other astrophysical observations. For masses near the most likely mass of $M\gtrsim 1.65 M_{\odot}$, the constraint becomes more restrictive: $35\lesssim L\lesssim 55$ MeV. The inclusion of the bubble cooling processes decreases the cooling rate of the star during the PBF phase, matching the observed rate only when $L\lesssim45$ MeV, taking all masses into consideration, corresponding to neutron star radii $\lesssim 11$ km.'
author:
- 'William G. Newton, Kyleah Murphy, Joshua Hooker and Bao-An Li'
title: The cooling of the Cassiopeia A neutron star as a probe of the nuclear symmetry energy and nuclear pasta
---
Introduction
============
In 2009, the thermal emission from the neutron star (NS) in the Cassiopeia A (Cas A) supernova remnant was fit using a carbon atmosphere model [@Ho2009] in order to obtain an emitting area consistent with canonical neutron star radii. The resulting average effective surface temperature was $\langle T_{\rm eff} \rangle \approx 2.1\times10^{6}$ K. Subsequent analysis of *Chandra* data taken over the previous 10 years indicated a rapid decrease in $T_{\rm eff}$ by $\approx$4% [@Heinke2010]. A recent analysis of *Chandra* data from all X-ray detectors and modes concluded a more uncertain range of a 2-5.5% temperature decline, cautioning that a definitive measurement is difficult due to the surrounding bright and variable supernova remnant [@Elshamouty2013]. The most recent results from the ACIS-S detector (which gives the $\approx 4\%$ temperature decline between 2000 and 2009) are shown in Fig. 1 along with the best fit line, and two lines indicating best estimates for the shallowest ($\approx 2\%$) and steepest ($\approx 5.5\%$) declines. We take the age of the Cas A NS (hereafter CANS) in 2005 to be $\tau_{\rm CANS} \approx 335$ yrs, based on the estimated date of the supernova, $\approx 1680 \pm 20$ [@Fesen2006a].
Within the minimal cooling paradigm (MCP), which excludes all fast neutrino ($\nu$)-emission processes such as direct Urca (DU) but includes superfluid effects [@Page2004], the rapid cooling of the CANS is interpreted as the result of enhanced $\nu$-emission from neutron Cooper pair (CP) breaking and formation in the NS core (the “PBF” mechanism), providing evidence for stellar superfluidity [@Shternin2011; @Page2011; @Ho2013]. Other proposed models [@Blaschke2012; @Sedrakian2013] involve medium modification to standard $\nu$-emission processes such as modified Urca (MU) and nucleon Bremsstrahlung, or a phase transition to quark matter.
Neutrons in the NS core are expected to form CPs in the $^3P_2$ channel, while the protons form $^1S_0$ CPs. The pairing gaps and corresponding local critical temperatures $T_{\rm c}$ for the onset of superfluidity are strongly density dependent, and carry significant theoretical uncertainty. The maximum value of the neutron $^3P_2$ critical temperature $T_{\rm cn}^{\rm max}$ determines the age of the NS when the PBF cooling phase is entered, $\tau_{\rm PBF}$, and can be tuned so that the PBF cooling trajectory passes through the observed temperature of the CANS at an age of $\approx 335$ years. The core temperature at the onset of the PBF phase, $T_{\rm PBF}$, controls the subsequent cooling rate; a higher $T_{\rm PBF}$ leads to a steeper cooling trajectory. Proton superconductivity in the core inhibits the MU cooling process, leading to a higher $T_{\rm PBF}$; the width and magnitude of the $^1S_0$ proton pairing gap profile can thus be tuned to alter the slope of the resulting cooling curve in the PBF phase. [@Shternin2011; @Page2011] find that $T_{\rm cn}^{\rm max} \approx 5-9\times10^8$ K, and that proton superconductivity throughout the whole core is required to fit the position and steepness of the observed cooling trajectory.
In the MCP, three other parameters affect the cooling trajectories of NSs [@Page2004]: the mass of light elements in the envelope of the star $\Delta M_{\rm light}$, here parameterized as $\eta = \log (\Delta M_{\rm light}/M_{\odot})$ [@Yakovlev2011], the mass of the star $M$, and the equation of state (EOS) of nuclear matter (NM). The thermal spectrum from the CANS can be fit using light element masses $-13 < \eta < -8$ and a NS mass of $\approx 1.25 - 2 M_{\odot}$, with a most likely value of $\approx 1.65 M_{\odot}$ [@Yakovlev2011]. The presence of more light elements (larger $\eta$) in the envelope increases the thermal conductivity there, increasing the observed surface temperature for a given temperature below the envelope [@Yakovlev2011]. [@Shternin2011; @Page2011] used the APR EOS [@Akmal1998; @Heiselberg1999]; however, the NM EOS is still quite uncertain.
Nuclear matter models are characterized by their behavior around nuclear saturation density $n_0 = 0.16$ baryons fm$^{-3}$, around which much of our nuclear experimental information is extracted. Denote the energy per particle of nuclear matter by $E(n,\delta)$, where $\delta = 1-2x$ is the isospin asymmetry, and $x$ is the proton fraction. $\delta = 0$ corresponds to symmetric nuclear matter (SNM), and $\delta = 1$ to pure neutron matter (PNM). We define the *symmetry energy* $S(n)$ in the expansion about $\delta=0$: $E(n,\delta) = E_{\rm 0}(n) + S(n)\delta^2 + ...$ . $S(n)$ encodes the energy cost of decreasing the proton fraction of matter. Expanding $S(n)$ about $\chi=0$, where $\chi = \frac{n-n_{\rm 0}}{3n_{\rm 0}}$, we obtain $S(n) = J + L \chi + ...$, where $J$ and $L$ are the symmetry energy and its slope at $n_0$. $L$ determines the stiffness of the NS EOS around $n_0$ and correlates with NS radii [@Lattimer2001], crust thickness [@Ducoin2011] and the extent of so-called “nuclear pasta” phases in the inner crust [@Oyamatsu2007]. Terrestrial constraints on $L$ from measurements of nuclear neutron skins, electric dipole polarizability, collective motion and the dynamics of heavy ion collisions [@Li2008; @Tsang2012; @Newton2013a; @Lattimer2013; @Danielewicz2013] suggest $30 \lesssim L\lesssim 80$ MeV, although larger values are not ruled out. *Ab initio* calculations of PNM with well defined theoretical errors offer constraints on $J$ and $L$ (Fig. 2), and constraints on $S(n)$ from neutron star observations result in ranges of $L$ in broad agreement with experiment [@Ozel2010; @Steiner2010; @Steiner2012; @Steiner2013; @Gearheart2011; @Sotani2013]. In this letter we show that we can extract a conservative constraint $L\lesssim 70$ MeV within the MCP using the CANS data, and even more stringent constraints with reasonable assumptions about the mass of the star.
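The definitions above can be checked numerically: with $S(n) = J + L\chi$ and $\chi = (n-n_0)/3n_0$, the slope parameter satisfies $L = 3 n_0\, \mathrm{d}S/\mathrm{d}n|_{n_0}$. The values of $J$ and $L$ below are illustrative picks from the quoted ranges, not fitted parameters:

```python
n0 = 0.16          # saturation density, baryons fm^-3
J, L = 31.0, 50.0  # illustrative values (MeV) within the quoted ranges

def S(n):
    """Symmetry energy to linear order in chi = (n - n0)/(3 n0), in MeV."""
    chi = (n - n0) / (3.0 * n0)
    return J + L * chi

# S(n0) recovers J, and the density slope at n0 recovers L = 3 n0 dS/dn:
h = 1e-6
slope = (S(n0 + h) - S(n0 - h)) / (2.0 * h)
L_recovered = 3.0 * n0 * slope
```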
At the base of the neutron star crust, matter is frustrated and it becomes energetically favorable for the nuclei there to form cylindrical, slab or cylindrical/spherical bubble shapes - “nuclear pasta” [@Ravenhall1983; @Hashimoto1984]. Searching for observational signatures of the nuclear pasta phases is one quest of neutron star astrophysics [@Pons2013]. Two rapid $\nu$-emission processes have been postulated to operate in the bubble phases of nuclear pasta: neutrino-antineutrino pair emission [@Leinson1993] and DU [@Gusakov2004]. We refer to these two mechanisms collectively as bubble cooling processes (BCPs). The neutrino luminosities from the two BCPs are comparable: $L_{\rm \nu}^{BCP} \sim 10^{40} T_9^6$, where $T_9 = T_{\rm core}/10^9$K. Compared with the MU neutrino luminosity $L_{\rm \nu}^{MU} \sim 10^{40} T_9^8$, the BCPs become competitive with MU cooling at temperatures below $10^9$K - i.e. at ages of order that of the CANS. We thus expect the temperature to be lower at ages $\gtrsim 300$ yrs with BCPs active, and the PBF cooling trajectory to be correspondingly shallower. In this letter we show that with BCPs active, calculated cooling trajectories are only marginally consistent with observations, and only if the EOS is particularly soft: $L\lesssim 45$ MeV.
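Taking the order-of-magnitude luminosities above at face value (the common prefactor $10^{40}$ is only indicative), the crossover is immediate: $L_\nu^{BCP}/L_\nu^{MU} \sim T_9^{-2}$, so BCP emission dominates below $T_9 = 1$. A quick check of that scaling:

```python
# Order-of-magnitude luminosities from the text (erg/s); prefactors indicative.
L_bcp = lambda T9: 1e40 * T9 ** 6   # bubble cooling processes
L_mu  = lambda T9: 1e40 * T9 ** 8   # modified Urca

# BCP emission dominates for core temperatures below 10^9 K (T9 < 1).
hot, crossover, cool = 2.0, 1.0, 0.5
```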
Two caveats must be stated. The carbon atmosphere model is preferred *solely* on the grounds that the resultant emitting area is consistent with neutron star radii. Other atmosphere compositions are not ruled out, and would result in changes to the inferred $T_{\rm eff}$ by up to a factor of 2, changing the inferred ranges of $L$. Secondly, the $^1S_0$ neutron and proton pairing gaps are quite model-dependent and might be significantly enhanced in the bottom layers of the crust compared to the model we use here. This would significantly suppress the BCPs and weaken the latter constraints on $L$.
Model
=====
We calculate crust and core EOSs consistently using the Skyrme nuclear matter (NM) model. We choose the baseline Skyrme parameterization to be the SkIUFSU model [@Fattoyev2012a; @Fattoyev2012b], which shares the same saturation density nuclear matter properties as the relativistic mean field (RMF) IUFSU model [@Fattoyev2010], has isovector NM parameters obtained by fitting to *ab initio* PNM calculations, and describes well the binding energies and charge radii of doubly magic nuclei [@Fattoyev2012a]. Two parameters in the Skyrme model can be adjusted to systematically vary the symmetry energy $J$ and its density slope $L$ at $n_0$ while leaving SNM properties unchanged [@Chen2009]. The constraints from PNM at low densities induce a correlation $J = 0.167 L + 23.33$ MeV. In this work we create EOSs characterized by $L=30-80$ MeV; the resulting PNM EOSs are shown for $L=30, 50, 70$ MeV in Fig. 2. These Skyrme NM models are then used to construct NS core EOSs (including compositions and nucleon effective masses) with the additional constraint that $M_{\rm max} > 2.0M_{\odot}$ [@Demorest2010; @Antoniadis2013], and consistent crust EOSs, compositions, and density ranges for the bubble phases of nuclear pasta using a liquid drop model [@Newton2013]. The resulting transition densities are very close to the ‘PNM’ sequence in Figs 6 and 15 of [@Newton2013]. For a star of fixed mass, as $L$ increases, the stellar radius and crust thickness increase (see, e.g., Fig. 2 of [@Hooker2013]) and the fraction of the crust by mass composed of the bubble phases decreases from $\sim 1/6$ at $L=30$ MeV to zero at $L\approx70$ MeV [@Newton2013].
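The PNM-induced correlation pins $J$ once $L$ is chosen. A sketch of the EOS family using the fitted coefficients quoted above (the rounding and grid of $L$ values are merely illustrative):

```python
# J-L correlation induced by fitting low-density pure neutron matter (MeV).
J_of_L = lambda L: 0.167 * L + 23.33

# The EOS family used in the paper spans L = 30-80 MeV; grid step is illustrative.
family = {L: round(J_of_L(L), 2) for L in range(30, 81, 10)}
# e.g. L = 30 -> J ≈ 28.34 MeV, L = 80 -> J ≈ 36.69 MeV
```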
We use the thermal envelope model [@Potekhin1997], neutron and proton $^{1}S_{0}$ gaps [@Chen1993] (model CCDK in [@Page2004]), neutron $^{3}P_{2}$ gap, and PBF model [@Yakovlev1999; @Kaminker1999] used in [@Page2011]. We use the publicly available code NSCool (<http://www.astroscu.unam.mx/neutrones/NSCool/>) to perform the thermal evolution. The neutrino emissivity for the BCPs is taken from @Leinson1993. We perform calculations at the limiting values of $\eta = -8$ and $\eta = -13$, masses of $M=1.25M_{\odot}, 1.4M_{\odot}$, $1.6M_{\odot}$ and $1.8M_{\odot}$, and for EOSs in the range $L=30 - 80$ MeV.
Results
=======
Fig. 3 illustrates the impact of $L$, $M$, $\eta$ and the inclusion of BCPs on fitting the *position* of the CANS data. Each plot shows cooling trajectories without and with the BCPs (solid and dashed lines respectively) and for the limiting $T_{\rm cn}^{\rm max}$ values of 0K (no $^{3}P_{2}$ neutron pairing) (upper trajectories) and $10^9$K (lower trajectories). We plot the inferred CANS effective surface temperature as seen by the observer $T_{\rm eff}^{\infty}$ - i.e. gravitationally redshifted from the surface temperature at the star $T_{\rm eff}^{\infty} = (1+z)^{-1} T_{\rm eff}$, where $z = (1 - 2GM/Rc^2)^{-1/2} - 1$, a factor which depends on $M$ and $L$ (the latter determining the radius $R$ for fixed $M$). Each pair of trajectories, $T_{\rm cn}^{\rm max}$ = 0K and $10^9$K, forms a cooling window inside which the observed temperature must fall. BCPs narrow the cooling window from the higher temperature limit at ages $\sim \tau_{\rm CANS}$: at $T_{\rm cn}^{\rm max}$ = 0K, the BCPs have a noticeable cooling effect which lowers $T_{\rm eff}^{\infty}$, while at $T_{\rm cn}^{\rm max} = 10^9$K, free neutrons in the bubble phases have already undergone the superfluid transition and thus the BCPs are suppressed; we thus see little effect for those trajectories. It is important to note that enhancement of the $^1S_0$ neutron and proton gaps will also suppress BCPs, giving results closer to the “no BCP” cases presented here.
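The redshift factor mapping surface to observed temperature depends on $M$ and $R$ (hence on $L$). A sketch with illustrative numbers; the radius $R = 11$ km and surface temperature $2\times10^6$ K are assumed representative values, not outputs of the EOS family:

```python
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def redshift(M, R):
    """z from 1 + z = (1 - 2GM/Rc^2)^(-1/2); M in kg, R in m."""
    return (1.0 - 2.0 * G * M / (R * c * c)) ** -0.5 - 1.0

M, R = 1.4 * Msun, 11.0e3      # illustrative: 1.4 Msun star of radius 11 km
z = redshift(M, R)

T_eff = 2.0e6                  # K, representative surface temperature
T_eff_inf = T_eff / (1.0 + z)  # redshifted temperature seen by a distant observer
```

For these numbers $z \approx 0.27$, so the observed temperature is reduced by roughly 20% relative to the surface value.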
A higher $\Delta M_{\rm light}$ leads to higher $T_{\rm eff}^{\infty}$ for a given core temperature, as illustrated comparing $\eta =$ -8 and -13 in Figs 3a,b for $L=50$MeV, $M=1.25 M_{\odot}$; the cooling window is thus elevated relative to the observed $T_{\rm eff}^{\infty}$. As $M$ increases, the central stellar density increases and the fraction of the core in which the protons are superconducting decreases, making the MU process more efficient and the star cooler at $\tau_{\rm CANS}$. Decreasing $L$ decreases the radius, thus requiring a higher surface temperature to produce the same stellar luminosity. These trends are illustrated in Figs 3c-h.
If the measured CANS temperature falls within the theoretical cooling window for a given set of parameters, then one can find a value of $T_{\rm cn}^{\rm max}$ for which the cooling trajectory passes through the average measured temperature $\langle T_{\rm eff}^{\infty} \rangle$. Table I summarizes the ranges of $L$ for selected masses, $\eta$=-8 and -13, and with and without BCPs, for which the CANS data falls within the cooling window. Considering the full ranges of parameters, a constraint of $L\lesssim70$ MeV is extracted. Fitting of the thermal emission suggests that the mass is likely above $1.4M_{\odot}$, which gives a more restrictive constraint of $L\lesssim 60$ MeV. The ranges for $T_{\rm cn}^{\rm max}$ obtained with and without BCPs when all other parameters are varied are $5.1-5.7\times10^8$ K and $5.6-9\times10^8$ K respectively; the inclusion of BCPs leads to a more restrictive range.
Fig. 4 shows cooling windows for four sets of parameters ($L$(MeV), $M/M_{\odot}$, $\eta$) = (30, 1.4, -13) (Fig. 4a), (40, 1.6, -13) (Fig. 4b), (60, 1.4, -8) (Fig. 4c), (50,1.8,-13) (Fig. 4d), as well as the curves corresponding to the value of $T_{\rm cn}^{\rm max}$ that best fits $\langle T_{\rm eff}^{\infty} \rangle$. The limiting cooling rates given in Fig. 1 are indicated by the two straight lines intersecting at $\langle T_{\rm eff}^{\infty} \rangle$. Calculated trajectories should have slopes between these two lines as they pass through the average temperature. Even the 2% temperature decline is relatively rapid, favoring a relatively high core temperature at an age $\tau_{\rm PBF}$ and thus favoring smaller stars (smaller $L$), smaller masses $M$, a larger $\Delta M_{\rm light}$ (larger $\eta$), and disfavoring BCPs. Note in particular that, with active BCPs, the best fit cooling curve in the PBF phase is significantly less steep than without BCPs, and matches only the shallowest inferred cooling rate, and then only for the lowest values of $L$. As $L$ increases beyond $50-60$ MeV, depending on mass, the curves become too shallow to match the data even with no BCPs operating. The ranges of $L$ satisfying the *slope* range of the cooling curve inferred from observation *as well as* the average temperature are given in the second part of Table I.
Discussion and conclusions
==========================
Being agnostic about the mass of the neutron star in Cas A and the mass of the light element blanket within the ranges inferred from fitting the thermal spectrum under the assumption of a carbon atmosphere ($1.25 M_{\odot} < M < 1.8M_{\odot}$, $10^{-13} < \Delta M_{\rm light} < 10^{-8}$), theoretical cooling curves pass through the average inferred surface temperature if $L\lesssim 70$ MeV. For a mass $M=1.6 M_{\odot}$ (close to the most likely inferred mass of $1.65 M_{\odot}$), the range becomes slightly more restrictive $L\lesssim 65$ MeV.
Requiring the inferred cooling *rate* to be matched within its range of uncertainty, the constraint on $L$ tends to become more restrictive still. With BCPs inactive, $L\lesssim 70$ MeV ($35\lesssim L\lesssim 55$ MeV) for $1.25 M_{\odot} < M < 1.8M_{\odot}$ ($M=1.6 M_{\odot}$). With BCPs active, cooling curves become shallower and we obtain our most restrictive constraints $L\lesssim 45$ MeV ($35\lesssim L\lesssim 45$ MeV) for $1.25 M_{\odot} < M < 1.8M_{\odot}$ ($M=1.6M_{\odot}$). The latter constraints correspond to neutron star radii $\lesssim11$km for the EOSs used here.
Accepting the MCP cooling model and the accuracy of X-ray measurements and interpretation, we can conclude either: (i) efficient cooling mechanisms are active in the bubble phases of nuclear pasta and $L\lesssim45$ MeV, or (ii) efficient cooling in nuclear pasta is suppressed, and $L\lesssim 70$ MeV. Such suppression could occur if the high density tail of the neutron $^1S_0$ pairing gap profile or the low density tails of the neutron $^3P_2$ or proton $^1S_0$ pairing gap profiles enhanced superfluidity in the bubble phases. Additionally, there might be other unexplored medium effects that inhibit the BCPs such as entrainment of crustal neutrons [@Chamel2012]. However, even at their most conservative, these constraints are competitive with experimental constraints $L\approx$ 30-80 MeV.
Many aspects of the physics of a neutron star's surface and interior together affect its temperature evolution; in this letter we have systematically examined the effect of two such aspects, namely the slope of the symmetry energy $L$ and the presence of enhanced cooling in the bubble phases, while controlling for the behavior of other physical aspects. We must caution that we have not accounted for every possible parameter and variation thereof. We cannot rule out atmosphere models other than the carbon composition model upon which the current $\langle T_{\rm eff}^{\infty} \rangle$ is based; use of other models would shift the inferred range of $L$. Broadening the range of the $^1S_0$ proton pairing gap would inhibit MU cooling even more: this would raise the temperature at the onset of the PBF phase, steepening the cooling curve. Additional variations in the high density EOS could also shift the inferred range of $L$. As an example, [@Shternin2011] find 1.8$M_{\odot}$ stellar models that match the CANS cooling rate, whereas we do not. The APR EOS used there gives a maximum mass $M_{\rm max} \approx 1.9M_{\odot}$, below the observed lower limit, so their high mass models will tend to be more compact and allow steeper cooling trajectories. Stiffening the high density EOS to increase $M_{\rm max}$ above $\approx 2M_{\odot}$ will tend to decrease the cooling rate. Additionally, their crust model is not consistent with their core EOS, and although their gap profiles reach magnitudes similar to those used here, the profiles themselves differ.
Despite these limitations, we have demonstrated that current cooling observations of the Cas A NS have the potential to impose strong constraints on the slope of the symmetry energy $L$ at saturation density and demonstrated for the first time that enhanced cooling in the bubble phases of nuclear pasta can have an observable effect. Continued monitoring of the Cas A NS temperature over the upcoming decade could place some stringent constraints on that physics.
In the preparation of this manuscript the authors became aware of the preliminary results of a similar study (<http://www.nucl.phys.tohoku.ac.jp/nusym13/proc/nusym13_Yeunhwan_Lim.pdf>) constraining the symmetry energy using Cas A temperature measurements, which are in broad agreement with our own (without the use of cooling mechanisms in the bubble phases of pasta).
Acknowledgments
===============
We thank Dany Page for help running NSCool, and Farrukh Fattoyev for helpful discussions. This work is supported in part by the National Aeronautics and Space Administration under grant NNX11AC41G issued through the Science Mission Directorate, the National Science Foundation under Grants No. PHY-0757839, No. PHY-1068022, US Department of Energy Grants DE-FG02-08ER41533, desc0004971 and the REU program under grant no. PHY-1062613.
[Akmal]{}, A., [Pandharipande]{}, V. R., & [Ravenhall]{}, D. G. 1998, , 58, 1804
[Antoniadis]{}, J., [Freire]{}, P. C. C., [Wex]{}, N., et al. 2013, Science, 340, 6131, 448
[Blaschke]{}, D., [Grigorian]{}, H., [Voskresensky]{}, D. N., & [Weber]{}, F. 2012, , 85, 022802
[Chamel]{}, N. 2012, , 85, 035801
[Chen]{}, J. M. C., [Clark]{}, J. W., [Davé]{}, R. D., & [Khodel]{}, V. V. 1993, Nuclear Physics A, 555, 59
[Chen]{}, L.-W., [Cai]{}, B.-J., [Ko]{}, C. M., [et al.]{} 2009, , 80, 014322
[Danielewicz]{}, P., & [Lee]{}, J. 2013, ArXiv e-prints, arXiv:1307.4130
[Demorest]{}, P. B., [Pennucci]{}, T., [Ransom]{}, S. M., [Roberts]{}, M. S. E., & [Hessels]{}, J. W. T. 2010, Nature, 467, 1081
[Ducoin]{}, C., [Margueron]{}, J., [Providência]{}, C., & [Vidaña]{}, I. 2011, , 83, 045810
[Elshamouty]{}, K. G., [Heinke]{}, C. O., [Sivakoff]{}, G. R., [et al.]{} 2013, , 777, 22
[Fattoyev]{}, F. J., [Carvajal]{}, J., [Newton]{}, W. G., & [Li]{}, B.-A. 2013, , 87, 015806
[Fattoyev]{}, F. J., [Horowitz]{}, C. J., [Piekarewicz]{}, J., & [Shen]{}, G. 2010, , 82, 055803
[Fattoyev]{}, F. J., [Newton]{}, W. G., [Xu]{}, J., & [Li]{}, B.-A. 2012, , 86, 025804
[Fesen]{}, R. A., [et al.]{} 2006, , 645, 283
[Gandolfi]{}, S., [Illarionov]{}, A. Y., [Fantoni]{}, S., [et al.]{} 2010, , 404, L35
[Gandolfi]{}, S., [Illarionov]{}, A. Y., [Schmidt]{}, K. E., [Pederiva]{}, F., & [Fantoni]{}, S. 2009, , 79, 054005
[Gearheart]{}, M., [Newton]{}, W. G., [Hooker]{}, J., & [Li]{}, B.-A. 2011, , 418, 2343
[Gezerlis]{}, A., [Tews]{}, I., [Epelbaum]{}, E., [et al.]{} 2013, Physical Review Letters, 111, 032501
[Gusakov]{}, M. E., [Yakovlev]{}, D. G., [Haensel]{}, P., & [Gnedin]{}, O. Y. 2004, , 421, 1143
[Hashimoto]{}, M., [Seki]{}, H., & [Yamada]{}, M. 1984, Progress of Theoretical Physics, 71, 320
[Hebeler]{}, K., & [Schwenk]{}, A. 2010, , 82, 014314
[Heinke]{}, C. O., & [Ho]{}, W. C. G. 2010, , 719, L167
[Heiselberg]{}, H., & [Hjorth-Jensen]{}, M. 1999, , 525, L45
[Ho]{}, W. C. G., [Andersson]{}, N., [Espinoza]{}, C. M., [et al.]{} 2013, Proceedings of Xth Quark Confinement and the Hadron Spectrum, M. Berwein, N. Brambilla, S. Paul (eds.); PoS (Confinement X) 260; arXiv:1303.3282
[Ho]{}, W. C. G., & [Heinke]{}, C. O. 2009, , 462, 71
[Hooker]{}, J., [Newton]{}, W. G., & [Li]{}, B.-A. 2013, ArXiv e-prints, arXiv:1308.0031
[Kaminker]{}, A. D., [Haensel]{}, P., & [Yakovlev]{}, D. G. 1999, , 345, L14
[Lattimer]{}, J. M., & [Lim]{}, Y. 2013, , 771, 51
[Lattimer]{}, J. M., & [Prakash]{}, M. 2001, , 550, 426
[Leinson]{}, L. B. 1993, , 415, 759
[Li]{}, B.-A., [Chen]{}, L.-W., & [Ko]{}, C. M. 2008, , 464, 113
[Newton]{}, W. G., [Gearheart]{}, M., & [Li]{}, B.-A. 2013, , 204, 9
[Newton]{}, W. G., [Gearheart]{}, M., [Wen]{}, D.-H., & [Li]{}, B.-A. 2013, Journal of Physics Conference Series, 420, 012145
[Oyamatsu]{}, K., & [Iida]{}, K. 2007, , 75, 015801
[Özel]{}, F., [Baym]{}, G., & [Güver]{}, T. 2010, , 82, 101301
[Page]{}, D., [Lattimer]{}, J. M., [Prakash]{}, M., & [Steiner]{}, A. W. 2004, , 155, 623
[Page]{}, D., [Prakash]{}, M., [Lattimer]{}, J. M., & [Steiner]{}, A. W. 2011, Physical Review Letters, 106, 081101
[Pons]{}, J. A., [Viganò]{}, D., & [Rea]{}, N. 2013, Nature Physics, 9, 431-434, arXiv:1304.6546
[Potekhin]{}, A. Y., [Chabrier]{}, G., & [Yakovlev]{}, D. G. 1997, A&A, 323, 415
[Ravenhall]{}, D. G., [Pethick]{}, C. J., & [Wilson]{}, J. R. 1983, Physical Review Letters, 50, 2066
[Schwenk]{}, A., & [Pethick]{}, C. J. 2005, Physical Review Letters, 95, 160401
[Sedrakian]{}, A. 2013, , 555, L10
[Shternin]{}, P. S., [Yakovlev]{}, D. G., [Heinke]{}, C. O., [Ho]{}, W. C. G., & [Patnaude]{}, D. J. 2011, , 412, L108
[Sotani]{}, H., [et al.]{} 2013, , 428, L21
[Steiner]{}, A. W., & [Gandolfi]{}, S. 2012, Physical Review Letters, 108, 081102
[Steiner]{}, A. W., [Lattimer]{}, J. M., & [Brown]{}, E. F. 2010, , 722, 33
—. 2013, , 765, L5
[Tsang]{}, M. B., [Stone]{}, J. R., [Camera]{}, F., [et al.]{} 2012, , 86, 015803
[Yakovlev]{}, D. G., [Ho]{}, W. C. G., [Shternin]{}, P. S., [Heinke]{}, C. O., & [Potekhin]{}, A. Y. 2011, , 411, 1977
[Yakovlev]{}, D. G., [Kaminker]{}, A. D., & [Levenfish]{}, K. P. 1999, , 343, 650
![(Color online). Energy per neutron versus neutron baryon density for pure neutron matter obtained from calculations of Fermi gases in the unitary limit [@Schwenk2005] (SP), chiral effective field theory [@Hebeler2010] (HS), quantum Monte Carlo calculations using chiral forces at leading order [@Gezerlis2013] (LO), Auxiliary Field Diffusion Monte Carlo using realistic two-nucleon interactions plus phenomenological three-nucleon interactions AV8+UIX [@Gandolfi2009; @Gandolfi2010], and the APR EOS [@Akmal1998]. Results using the Skyrme model SkIUFSU used in this paper are shown for $L$=30, 50 and 70 MeV[]{data-label="Fig2"}](f2.eps)
[ccccc]{} $M(M_{\odot}$) & $\eta$=-8; BCP & $\eta$=-13; BCP & $\eta$=-8; no BCP & $\eta$=-13; no BCP\
1.25 & $\lesssim$ 70 & - & $\lesssim$ 70 & $\lesssim$ 55\
1.40 & $\approx$ 35-65 & $\lesssim$ 45 & $\lesssim$ 65 & $\lesssim$ 55\
1.60 & $\approx$ 55-65 & $\lesssim$ 55 & $\lesssim$ 55-65 & $\lesssim$ 65\
1.80 & - & $\approx$ 45-65 & - & $\approx$ 45-65\
1.25 & $\lesssim$ 45 & - & $\lesssim$ 70 & $\lesssim$ 55\
1.40 & - & $\lesssim$ 35 & $\lesssim$ 55 & $\lesssim$ 55\
1.60 & - & $\approx$ 35-45 & - & $\approx$ 35-55\
1.80 & - & - & - & -\
|
---
abstract: 'We give a path integral formulation of the time evolution of qudits of odd dimension. This allows us to consider semiclassical evolution of discrete systems in terms of an expansion of the propagator in powers of $\hbar$. The largest power of $\hbar$ required to describe the evolution is a traditional measure of classicality. We show that the action of the Clifford operators on stabilizer states can be fully described by a single contribution of a path integral truncated at order $\hbar^0$ and so are “classical,” just like propagation of Gaussians under harmonic Hamiltonians in the continuous case. Such operations have no dependence on phase or quantum interference. Conversely, we show that supplementing the Clifford group with gates necessary for universal quantum computation results in a propagator consisting of a finite number of semiclassical path integral contributions truncated at order $\hbar^1$, a number that nevertheless scales exponentially with the number of qudits. The same sum in continuous systems has an infinite number of terms at order $\hbar^1$.'
author:
- Lucas Kocia
- Yifei Huang
- Peter Love
bibliography:
- 'biblio.bib'
title: 'Semiclassical Formulation of Gottesman-Knill and Universal Quantum Computation'
---
Introduction {#sec:intro}
============
The study of contextuality in quantum information has led to progress in our understanding of the Wigner function for discrete systems. Using Wootters’ original derivation of discrete Wigner functions [@Wootters87], Eisert [@Mari12], Gross [@Gross06], and Emerson [@Howard14] have pushed forward a new perspective on the quantum analysis of states and operators in finite Hilbert spaces by considering their quasiprobability representation on discrete phase space. Most notably, the positivity of such representations has been shown to be equivalent to non-contextuality, a notion of classicality [@Howard14; @Spekkens08]. Quantum gates and states that exhibit these features are the stabilizer states and Clifford operations used in quantum error correction and stabilizer codes. The non-contextuality of stabilizer states and Clifford operations explains why they are amenable to efficient classical simulation [@Gottesman98; @Aaronson04].
This progress raises the question of how these discrete techniques are connected to prior established methods for simulating quantum mechanics in phase space. A particularly relevant method is trajectory-based semiclassical propagation, which has been widely used in the continuous context. Perhaps, when applied to the discrete case, semiclassical propagators can lend their physical intuition to outstanding problems in quantum information. Conversely, concepts from quantum information may serve to illuminate the comparatively older field of continuous semiclassics.
Quantum information attempts to classify the “quantumness” of a system by the presence or absence of various quantum resources. Semiclassical analysis proceeds by successive approximation using $\hbar$ as a small parameter, where the power of $\hbar$ required is a measure of “quantumness”. Can these two views of quantum vs. classical be related? In the current paper, we build a bridge from the continuous semiclassical world to the discrete world and examine the classical-quantum characteristics of discrete quantum gates found in circuit models and their stabilizer formalism.
Stabilizer states are the simultaneous eigenvalue-one eigenvectors of a commuting set of Pauli operators generating a group which does not contain $-\mathbb I$. The set of stabilizer states is preserved by elements of the Clifford group, which is the normalizer of the Pauli group, and their evolution can be simulated very efficiently. More precisely, by the Gottesman-Knill theorem, for $n$ qubits, each Clifford gate in a quantum circuit can be simulated using $\mathcal{O}(n)$ operations on a classical computer. Measurements require $\mathcal{O}(n^2)$ operations with $\mathcal{O}(n^2)$ bits of storage [@Gottesman98; @Aaronson04].
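The $\mathcal{O}(n)$ cost per gate comes from tracking how each Clifford gate conjugates the $n$ stabilizer generators, stored as binary symplectic vectors plus a sign. A minimal sketch for qubits, with phase-update rules following the Aaronson-Gottesman CHP tableau; only $H$ and CNOT are shown, and the dict-based representation is illustrative rather than the packed-bit layout a real simulator would use:

```python
def pauli(xs, zs, s=1):
    """A Pauli generator: x/z bit-vectors plus an overall sign (+1 or -1)."""
    return {'x': list(xs), 'z': list(zs), 's': s}

def apply_H(g, q):
    # Conjugation by H swaps X and Z on qubit q; a Y (x = z = 1) flips sign.
    if g['x'][q] and g['z'][q]:
        g['s'] *= -1
    g['x'][q], g['z'][q] = g['z'][q], g['x'][q]

def apply_CNOT(g, c, t):
    # Conjugation by CNOT: X_c -> X_c X_t and Z_t -> Z_c Z_t.
    # Sign rule as in the Aaronson-Gottesman tableau update.
    if g['x'][c] & g['z'][t] & (g['x'][t] ^ g['z'][c] ^ 1):
        g['s'] *= -1
    g['x'][t] ^= g['x'][c]
    g['z'][c] ^= g['z'][t]

# |00> is stabilized by {Z1, Z2}; H on qubit 0 then CNOT(0,1) prepares a Bell
# state, whose stabilizers should come out as XX and ZZ.
gens = [pauli([0, 0], [1, 0]), pauli([0, 0], [0, 1])]
for g in gens:
    apply_H(g, 0)
    apply_CNOT(g, 0, 1)
```

Each gate touches a constant number of bits in each of the $n$ generators, giving the $\mathcal{O}(n)$ per-gate cost quoted above.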
The reason that stabilizer evolution by Clifford gates can be efficiently simulated classically has been explained in various ways. For instance, as already mentioned, stabilizer states have been shown to be non-contextual in qudit [@Howard14] and rebit [@Delfosse15] systems. The obstacle to proving this for qubits is that qubit systems possess state-independent contextuality [@Mermin93]. Of course, we know how to simulate qubit stabilizer states and Clifford operations efficiently by the Gottesman-Knill theorem [@Aaronson04]. For recent progress relating non-contextuality to classical simulatability for qubits we refer the reader to [@Raussendorf15]. It has also been shown for dimensions greater than two that a state of a discrete system is a stabilizer state if and only if its appropriately defined discrete Wigner function is non-negative [@Gross06]. Therefore, when acted on by positivity-preserving operators, such a state can be treated as a proper non-negative (classical) distribution.
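Gross's criterion is easy to check numerically for a single qutrit. The sketch below uses one common convention for the discrete Wigner function in odd dimension, with $2^{-1} = (d+1)/2 \bmod d$; sign and phase conventions vary between papers, and the test state $(|0\rangle + |1\rangle - |2\rangle)/\sqrt{3}$ is simply an illustrative non-stabilizer state:

```python
import numpy as np

d = 3                   # qudit dimension (odd)
omega = np.exp(2j * np.pi / d)
inv2 = (d + 1) // 2     # multiplicative inverse of 2 mod d, valid for odd d

def wigner(psi):
    """Discrete Wigner function W(q, p) of a pure state, for odd d."""
    rho = np.outer(psi, psi.conj())
    W = np.empty((d, d))
    for q in range(d):
        for p in range(d):
            s = sum(omega ** (p * y) * rho[(q - y * inv2) % d, (q + y * inv2) % d]
                    for y in range(d))
            W[q, p] = (s / d).real
    return W

W_stab = wigner(np.array([1, 0, 0], complex))                  # |0>, a stabilizer state
W_nonstab = wigner(np.array([1, 1, -1], complex) / np.sqrt(3))  # not a stabilizer state
# W_stab is non-negative everywhere; W_nonstab has negative entries.
```

Both functions sum to one over phase space, as a quasiprobability distribution must; only the non-stabilizer state exhibits negativity.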
Here, we instead relate the concept of efficient classical simulation to the power of $\hbar$ that a path integral treatment must be expanded to in order to describe the quantum evolution of interest. It is well known that Gaussian propagation in continuous systems under harmonic Hamiltonians can be described with a single contribution from the path integral truncated at order $\hbar^0$ [@Heller75]. We show that a corresponding result holds in discrete systems. In the discrete case, stabilizer states take the place of Gaussians, and harmonic Hamiltonians that additionally preserve the discrete phase space take the place of the general continuous harmonic Hamiltonians. In the discrete case we will only consider $d$-dimensional systems for odd $d$ since their center representation (or Weyl formalism) is far simpler.
As a consequence, we will show that operations with Clifford gates on stabilizer states can be treated by a path integral independent of the magnitude of $\hbar$ and are thus fundamentally classical. Such operations have no dependence on phase or quantum interference. This can be viewed as a restatement of the Gottesman-Knill theorem in terms of powers of $\hbar$.
We also consider more general propagation for discrete quantum systems. Quantum propagation in continuous systems can be treated by a sum consisting of an infinite number of contributions from the path integral truncated at order $\hbar^1$. In discrete systems, we show that the corresponding sum consists of a finite number of terms, albeit one that scales exponentially with the number of qudits.
This work also answers a question posed by the recent work of Penney *et al*. that explored a “sum-over-paths” expression for Clifford circuits in terms of symplectomorphisms on phase-space and associated generating actions. Penney *et al*. raised the question of how to relate the dynamics of the Wigner representation of (stabilizer) states to the dynamics which are the solutions of the discrete Euler-Lagrange equations for an associated functional [@Penney16]. By relying on the well-established center-chord (or Wigner-Weyl-Moyal) formalism in continuous [@Almeida98] and discrete systems [@Rivas99], we show how the dynamics of Wigner representations are governed by such solutions related to a “center generating” function and that these solutions are harmonic and classical in nature.
We begin by giving an overview of the center-chord representation in continuous systems in Section \[sec:contcenterchord\]. Then, Section \[sec:contpathintegral\] introduces the expansion of the path integral in powers of $\hbar$. This leads us to show what “classical” simulability of states in the continuous case corresponds to and to what higher order of $\hbar$ an expansion is necessary to treat any quantum operator. Section \[sec:discrete\] then introduces the discrete variable case and defines its corresponding conjugate position and momentum operators. The path integral in discrete systems is then introduced in Section \[sec:discretesemi\] and, in Section \[sec:stabilizergroup\], we define the Clifford group and stabilizer states. We prove that stabilizer state propagation within the Clifford group is captured fully up to order $\hbar^0$ and so is efficiently simulable classically. Section \[sec:discreteuniversalcomp\] shows that extending the Clifford group to a universal gate set necessitates an expansion of the semiclassical propagator to a finite sum at order $\hbar^1$. Finally, we close the paper with some discussion and directions for future work in Section \[sec:conc\].
Center-Chord Representation in Continuous Systems {#sec:contcenterchord}
=================================================
We define position operators $\hat q$, $\hat q {\left|q'\right\rangle} = q' {\left|q'\right\rangle}$, and momentum operators $\hat p$ as their Fourier transform, $\hat p = \hat{\mathcal F}^\dagger \hat q \hat{\mathcal F}$, where $$\hat{\mathcal F} = h^{-\frac{n}{2}} \int^\infty_{-\infty} \mbox d {\boldsymbol}p \int^\infty_{-\infty} \mbox d {\boldsymbol}q \exp \left(\frac{i}{\hbar} {\boldsymbol}p \cdot {\boldsymbol}q \right) {{\left|{\boldsymbol}p\right\rangle}{\left\langle {\boldsymbol}q\right|}}.
\label{eq:fourier}$$
Since $[\hat q, \hat p] = i \hbar$, these operators produce a particularly simple Lie algebra and are the generators of a Lie group. In this Lie group we can define the “boost” operator: $$\hat Z^{\delta p} {\left|q'\right\rangle} = e^{\frac{i}{\hbar} \hat q \delta p} {\left|q'\right\rangle} = e^{\frac{i}{\hbar} q' \delta p} {\left|q'\right\rangle},$$ and the “shift” operator: $$\hat X^{\delta q} {\left|q'\right\rangle} = e^{-\frac{i}{\hbar} \hat p \delta q} {\left|q'\right\rangle} = {\left|q' + \delta q\right\rangle}.$$
Using the canonical commutation relation and $e^{\hat A + \hat B} = e^{\hat A} e^{\hat B} e^{-\frac{1}{2}[\hat A, \hat B]}$ if $[\hat A, \hat B]$ is a constant, it follows that $$\label{eq:contWeylrelation}
\hat Z \hat X = e^{\frac{i}{\hbar}} \hat X \hat Z.$$ This is known as the Weyl relation and shows that the product of a shift and a boost (a generalized translation) in phase space is only unique up to a phase governed by $\hbar$.
We proceed to introduce the chord representation of operators and states [@Almeida98]. The generalized phase space translation operator (often called the Weyl operator) is defined as a product of the shift and boost: $$\hat T({\boldsymbol}\xi_p, {\boldsymbol}\xi_q) = e^{-\frac{i}{2 \hbar} {\boldsymbol}\xi_p \cdot {\boldsymbol}\xi_q} \hat Z^{ {\boldsymbol}\xi_p} \hat X^{ {\boldsymbol}\xi_q},$$ where ${\boldsymbol}\xi \equiv ({\boldsymbol}\xi_p, {\boldsymbol}\xi_q) \in \mathbb{R}^{2n}$ define the chord phase space. $\hat T({\boldsymbol}\xi_p, {\boldsymbol}\xi_q)$ is a translation by the chord ${\boldsymbol}\xi$ in phase space. This can be seen by examining its effect on position and momentum states: $$\hat T({\boldsymbol}\xi_p, {\boldsymbol}\xi_q) {\left|{\boldsymbol}q\right\rangle} = e^{\frac{i}{\hbar} \left({\boldsymbol}q + \frac{{\boldsymbol}\xi_q}{2}\right) \cdot {\boldsymbol}\xi_p} {\left|{\boldsymbol}q+{\boldsymbol}\xi_q\right\rangle},$$ and $$\hat T({\boldsymbol}\xi_p, {\boldsymbol}\xi_q) {\left|{\boldsymbol}p\right\rangle} = e^{-\frac{i}{\hbar} \left({\boldsymbol}p + \frac{{\boldsymbol}\xi_p}{2}\right) \cdot {\boldsymbol}\xi_q} {\left|{\boldsymbol}p+{\boldsymbol}\xi_p\right\rangle},$$ which are shown in Fig. \[fig:translations\]. Changing the order of shifts $\hat X$ and boosts $\hat Z$ changes the phase of the translation in phase space by ${\boldsymbol}\xi$, as given by the Weyl relation above (Eq. \[eq:contWeylrelation\]).
An operator $\hat A$ can be expressed as a linear combination of these translations: $$\hat A = \left(2 \pi \hbar\right)^{-n} \int^\infty_{-\infty} \mbox{d} {\boldsymbol}\xi_p \int^\infty_{-\infty} \mbox{d} {\boldsymbol}\xi_q \, A_\xi({\boldsymbol}\xi_p, {\boldsymbol}\xi_q) \hat T({\boldsymbol}\xi_p, {\boldsymbol}\xi_q),$$ where the weights are: $$A_\xi({\boldsymbol}\xi_p, {\boldsymbol}\xi_q) = \operatorname{Tr}\left( \hat T({\boldsymbol}\xi_p, {\boldsymbol}\xi_q)^\dagger \hat A \right).$$ These weights give the chord representation of $\hat A$.
![Translation of a) a position state and b) a momentum state along the chord $(\xi_p, \xi_q)$ in phase space.[]{data-label="fig:translations"}](translations.pdf)
The Weyl function, or center representation, is dual to the chord representation. It is defined in terms of reflections instead of translations. We can define the reflection operator $\hat R$ as the symplectic Fourier transform of the translation operator: $$\begin{aligned}
\label{eq:contreflection}
&&\hat R({\boldsymbol}x_p, {\boldsymbol}x_q) = \left(2 \pi \hbar\right)^{-n} \int^\infty_{-\infty} \mbox{d} {\boldsymbol}\xi e^{\frac{i}{\hbar} {{\boldsymbol}\xi}^T {\boldsymbol}{\mathcal J} {{\boldsymbol}x} } \hat T({\boldsymbol}\xi)\end{aligned}$$ where ${\boldsymbol}x \equiv \left({\boldsymbol}x_p, {\boldsymbol}x_q\right) \in \mathbb{R}^{2n}$ are a continuous set of Weyl phase space points or centers and ${\boldsymbol}{\mathcal J}$ is the symplectic matrix $${\boldsymbol}{\mathcal J} = \left( \begin{array}{cc} 0 & -\mathbb{I}_{n}\\ \mathbb{I}_{n} & 0 \end{array}\right),$$ for $\mathbb{I}_n$ the $n$-dimensional identity. The association of this operator with reflection can be seen by examining its effect on position and momentum states: $$\label{eq:contreflectpos}
\hat R({\boldsymbol}x_p, {\boldsymbol}x_q) {\left|{\boldsymbol}q\right\rangle} = e^{\frac{i}{\hbar} 2 ({\boldsymbol}x_q - {\boldsymbol}q) \cdot {\boldsymbol}x_p} {\left|2{\boldsymbol}x_q-{\boldsymbol}q\right\rangle},$$ and $$\label{eq:contreflectmom}
\hat R({\boldsymbol}x_p, {\boldsymbol}x_q) {\left|{\boldsymbol}p\right\rangle} = e^{-\frac{i}{\hbar} 2 ({\boldsymbol}x_p - {\boldsymbol}p) \cdot {\boldsymbol}x_q} {\left|2{\boldsymbol}x_p-{\boldsymbol}p\right\rangle},$$ which are sketched in Fig. \[fig:reflections\]. It is thus evident that $\hat R({\boldsymbol}x_p, {\boldsymbol}x_q)$ reflects the phase space around ${\boldsymbol}x$. Note that while we refer to these operators as reflections, in the symplectic sense, here and in the rest of the paper, Eqs. \[eq:contreflectpos\] and \[eq:contreflectmom\] show that they are in fact “an inversion around ${\boldsymbol}x$” in every two-plane of conjugate ${x_p}_i$ and ${x_q}_i$. However, we will keep to the established nomenclature [@Almeida98].
![Reflection of a) a position state and b) a momentum state across the center $(x_p, x_q)$ in phase space.[]{data-label="fig:reflections"}](reflections.pdf)
An operator $\hat A$ can now be expressed as a linear combination of reflections: $$\hat A = \left(2 \pi \hbar\right)^{-n} \int^\infty_{-\infty} \mbox{d} {\boldsymbol}x_p \int^\infty_{-\infty} \mbox{d} {\boldsymbol}x_q \, A_x({\boldsymbol}x_p, {\boldsymbol}x_q) \hat R( {\boldsymbol}x_p, {\boldsymbol}x_q ),
\label{eq:contsupofreflections}$$ where $$A_x({\boldsymbol}x_p, {\boldsymbol}x_q) = \operatorname{Tr}\left( \hat R({\boldsymbol}x_p, {\boldsymbol}x_q)^\dagger \hat A \right),
\label{eq:contweylfunction}$$ and is called the center representation of $\hat A$.
This representation is of particular interest to us because we can rewrite the components $A_x$ for unitary transformations $\hat A$ as: $$A_x({\boldsymbol}x_p, {\boldsymbol}x_q) = e^{\frac{i}{\hbar} S({\boldsymbol}x_p, {\boldsymbol}x_q)},$$ where $S({\boldsymbol}x_p, {\boldsymbol}x_q)$ is equivalent to the action the transformation $A_x$ produces in Weyl phase space in terms of reflections around centers ${\boldsymbol}x$ [@Almeida98]. Thus, $S$ is also called the “center generating” function.
For a pure state ${\left|\Psi\right\rangle}$, the Wigner function given by Eq. \[eq:contweylfunction\] simplifies to: $$\begin{aligned}
&& {\Psi}_x({\boldsymbol}x_p, {\boldsymbol}x_q) = \left(2 \pi \hbar\right)^{-n}\\
&& \int^\infty_{-\infty} \mbox{d} {\boldsymbol}\xi_q \, \Psi \left( {\boldsymbol}x_q + \frac{{\boldsymbol}\xi_q }{2} \right) {\Psi^*} \left( {\boldsymbol}x_q - \frac{{\boldsymbol}\xi_q}{2} \right) e^{-\frac{i}{\hbar} {\boldsymbol}\xi_q \cdot {\boldsymbol}x_p}. \nonumber\end{aligned}$$
The center representation for quantum states immediately yields the well-known Wigner function for continuous systems. The chord representation is the symplectic Fourier transform of the Wigner function. The center and chord representations are dual to each other, and are the Wigner and Wigner characteristic functions respectively. Identifying the Wigner function with the center representation, and the center representation as dual to the chord representation, motivates the development of both center (Wigner) and chord representations for discrete systems in Section \[sec:discrete\].
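The pure-state formula above can be checked numerically. The sketch below (illustrative parameters, $\hbar = 1$, one degree of freedom) evaluates the integral on a grid for a Gaussian wavepacket and confirms the properties used later: the resulting Wigner function is real, non-negative, and normalized.

```python
import numpy as np

hbar = 1.0
Sigma, q0, p0 = 1.0, 0.5, -0.3            # illustrative spread and center

def psi(q):
    # Normalized 1D Gaussian wavepacket with real Sigma (no p-q correlation).
    return (Sigma / np.pi) ** 0.25 * np.exp(
        1j * p0 * (q - q0) / hbar - 0.5 * Sigma * (q - q0) ** 2)

xq = np.linspace(-6.0, 6.0, 81)           # center position grid
xp = np.linspace(-6.0, 6.0, 81)           # center momentum grid
xi = np.linspace(-12.0, 12.0, 161)        # chord integration variable
dxq, dxp, dxi = xq[1] - xq[0], xp[1] - xp[0], xi[1] - xi[0]

# W(x_q, x_p) = (2 pi hbar)^-1 * Int dxi psi(x_q + xi/2) psi*(x_q - xi/2)
#               * exp(-i xi x_p / hbar), evaluated as a Riemann sum.
XQ, XP, XI = np.meshgrid(xq, xp, xi, indexing="ij")
integrand = (psi(XQ + XI / 2) * np.conj(psi(XQ - XI / 2))
             * np.exp(-1j * XI * XP / hbar))
W = integrand.sum(axis=2) * dxi / (2 * np.pi * hbar)
```

The non-negativity seen here is special to Gaussians; generic pure states produce Wigner functions with negative regions.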
Path Integral Propagation in Continuous Systems {#sec:contpathintegral}
===============================================
Propagation from one quantum state to another can be expressed in terms of the path integral formalism of the quantum propagator. For one degree of freedom, with an initial position $q$ and final position $q'$, evolving under the Hamiltonian $H$ for time $t$, the propagator is $${\left\langle q'\middle|e^{-i H t/\hbar}\middle|q\right\rangle} = \int \mathcal{D}[q_t] \, \exp \left(\frac{i}{\hbar} G[q_t]\right)
\label{eq:feynmanprop}$$ where $G[q_t]$ is the action of the trajectory $q_t$, which starts at $q$ and ends at $q'$ a time $t$ later [@Feynman12; @Schulman12].
Eq. \[eq:feynmanprop\] can be reexpressed as a variational expansion around the set of classical trajectories (a set of measure zero) that start at $q$ and end at $q'$ a time $t$ later. This is an expansion in powers of $\hbar$: $$\begin{aligned}
\label{eq:semiclassprop}
&& {\left\langle q'\middle|e^{-\frac{i}{\hbar}Ht}\middle|q\right\rangle} =\\
&& \sum_j^\text{cl. paths} \int \mathcal{D}[q_{tj}] \, e^{\frac{i}{\hbar} \left( G[q_{tj}] + \delta G[q_{tj}] + \frac{1}{2} \delta^2 G[q_{tj}] + \ldots \right) }, \nonumber\end{aligned}$$ where $\delta G[q_{tj}]$ denotes a functional variation of the paths $q_{tj}$ and for classical paths $\delta G[q_{tj}] = 0$ (For further details we refer the reader to Section 10.3 of [@Tannor07]).
Truncating Eq. \[eq:semiclassprop\] at first order in $\hbar$ produces the position state representation of the van Vleck-Morette-Gutzwiller propagator [@Van28; @Morette51; @Gutzwiller67]: $$\begin{aligned}
\label{eq:vVMG}
&& {\left\langle q'\middle|e^{-\frac{i}{\hbar}Ht}\middle|q\right\rangle} = \\
&& \sum_j\left( \frac{- \frac{\partial^2 G_{jt}(q,q')}{\partial q \partial q'}}{2 \pi i \hbar} \right)^{1/2} e^{i \frac{G_{jt}(q,q')}{\hbar}} + \mathcal{O}(\hbar^2),\nonumber\end{aligned}$$ where the sum is over all classical paths that satisfy the boundary conditions.
In the center representation, for $n$ degrees of freedom, the semiclassical propagator $U_t({\boldsymbol}x_p, {\boldsymbol}x_q)$ becomes [@Almeida98]: $$\begin{aligned}
&&U_t({\boldsymbol}x_p, {\boldsymbol}x_q) = \\
&& \sum_j \left\{ \det \left[ 1 + \frac{1}{2} {\boldsymbol}{\mathcal J} \frac{\partial^2 S_{tj}}{\partial {\boldsymbol}x^2} \right] \right\}^{\frac{1}{2}} e^{\frac{i}{\hbar} S_{tj}({\boldsymbol}x_p, {\boldsymbol}x_q)} + \mathcal{O}(\hbar^2),\nonumber
\label{eq:contcenterrepvVMG}\end{aligned}$$ where $S_{tj}({\boldsymbol}x_p, {\boldsymbol}x_q)$ is the center generating function (or action) for the center ${\boldsymbol}x = ({\boldsymbol}x_p, {\boldsymbol}x_q)\equiv\frac{1}{2}\left[({\boldsymbol}p, {\boldsymbol}q)+({\boldsymbol}p', {\boldsymbol}q')\right]$.
In general this is an underdetermined system of equations and there are an infinite number of classical trajectories that satisfy these conditions. The accuracy of adding them up as part of this semiclassical approximation is determined by how separated these trajectories are with respect to $\hbar$—the saddle-point condition for convergence of the method of steepest descents. However, some Hamiltonians exhibit a single saddle point contribution and are thus exact at order $\hbar^1$ [^1]:
There is only one classical trajectory $({\boldsymbol}p, {\boldsymbol}q)\underset{t}{\rightarrow} ({\boldsymbol}p', {\boldsymbol}q')$ that satisfies the boundary conditions $({\boldsymbol}x_p, {\boldsymbol}x_q) = \frac{1}{2}\left[({\boldsymbol}p, {\boldsymbol}q)+({\boldsymbol}p', {\boldsymbol}q')\right]$ and $t$ under Hamiltonians that are harmonic in ${\boldsymbol}p$ and ${\boldsymbol}q$.
For a quadratic Hamiltonian, the diagonalized equations of motion for $n$-dimensional $({\boldsymbol}p', {\boldsymbol}q')$ are of the form: $$\begin{aligned}
p'_i &=& \alpha(t)_i p_i + \beta(t)_i q_i + \gamma(t)_i,\\
q'_i &=& \delta(t)_i p_i + \epsilon(t)_i q_i + \eta(t)_i,
\end{aligned}$$ for $i \in \{1, 2, \ldots, n\}$. Since $t$ is known and $({\boldsymbol}p, {\boldsymbol}q)$ can be written in terms of $({\boldsymbol}p', {\boldsymbol}q')$ by using $({\boldsymbol}x_p, {\boldsymbol}x_q)$, this brings the total number of linear equations to $2n$ with $2n$ unknowns and so there exists one unique solution.[$\blacksquare$]{}
Since the equations of motion for a harmonic Hamiltonian are linear, we can write their solutions as: $$\left( \begin{array}{c}{\boldsymbol}p'\\ {\boldsymbol}q'\end{array} \right) = {\boldsymbol}{\mathcal M}_t \left[ \left( \begin{array}{c}{\boldsymbol}p\\ {\boldsymbol}q \end{array}\right) + \frac{1}{2} {\boldsymbol}\alpha_t \right] + \frac{1}{2} {\boldsymbol}\alpha_t,
\label{eq:quadmap}$$ where ${\boldsymbol}\alpha_t$ is a $2n$-vector and ${\boldsymbol}{\mathcal M}_t$ is a $2n\times 2n$ symplectic matrix, both with entries in $\mathbb{R}$. In this case, the center generating function $S_t({\boldsymbol}x_p, {\boldsymbol}x_q)$ is also quadratic, in particular $$\label{eq:quadcentgenfunction}
S_t({\boldsymbol}x_p, {\boldsymbol}x_q) = {\boldsymbol}\alpha_t^T {\boldsymbol}{\mathcal J} \left(\begin{array}{c}{\boldsymbol}x_p\\ {\boldsymbol}x_q\end{array}\right) + ({\boldsymbol}x_p, {\boldsymbol}x_q) {\boldsymbol}{\mathcal B}_t \left(\begin{array}{c}{\boldsymbol}x_p\\ {\boldsymbol}x_q\end{array}\right),$$ where ${\boldsymbol}{\mathcal B}_t$ is a real symmetric $2n\times 2n$ matrix that is related to ${\boldsymbol}{\mathcal M}_t$ by the Cayley parameterization of ${\boldsymbol}{\mathcal M}_t$ [@Golub12]: $$\label{eq:cayleyparam}
{\boldsymbol}{\mathcal J} {\boldsymbol}{\mathcal B}_t = \left( 1 + {\boldsymbol}{\mathcal M}_t \right)^{-1} \left( 1 - {\boldsymbol}{\mathcal M}_t \right) = \left( 1 - {\boldsymbol}{\mathcal M}_t \right) \left( 1 + {\boldsymbol}{\mathcal M}_t \right)^{-1}.$$
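The Cayley parameterization is straightforward to verify numerically for a single-mode harmonic rotation (an illustrative choice of ${\boldsymbol}{\mathcal M}_t$): the two orderings in Eq. \[eq:cayleyparam\] agree, and the recovered ${\boldsymbol}{\mathcal B}_t$ is symmetric.

```python
import numpy as np

# Single degree of freedom (n = 1): centers live in a 2-dimensional phase
# space and the symplectic matrix J is 2x2.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

t = 0.7
c, s = np.cos(t), np.sin(t)
M = np.array([[c, -s],                     # illustrative harmonic flow M_t,
              [s,  c]])                    # which is symplectic

I = np.eye(2)
JB_left = np.linalg.inv(I + M) @ (I - M)   # (1 + M)^-1 (1 - M)
JB_right = (I - M) @ np.linalg.inv(I + M)  # (1 - M)(1 + M)^-1
B = np.linalg.solve(J, JB_left)            # B_t recovered from J B_t
```

The parameterization breaks down when $\det(1 + {\boldsymbol}{\mathcal M}_t) = 0$ (here, at $t = \pi$), where the generating function acquires a caustic.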
Since one classical trajectory contribution is sufficient in this case, if the overall phase of the propagated state is not important, then the expansion w.r.t. $\hbar$ in Eq. \[eq:semiclassprop\] can be truncated at order $\hbar^0$. Dropping terms that are higher order than $\hbar^0$ and ignoring phase is equivalent to propagating the classical density $\rho({\boldsymbol}x)$ corresponding to the $({\boldsymbol}p, {\boldsymbol}q) $-manifold, under the harmonic Hamiltonian and determining its overlap with the $({\boldsymbol}p', {\boldsymbol}q')$-manifold after time $t$. Such a treatment under a harmonic Hamiltonian results in just the absolute value of the prefactor of Eq. \[eq:contcenterrepvVMG\]: $\left| \det \left[ 1 + \frac{1}{2} {\boldsymbol}{\mathcal J} \frac{\partial^2 S_{tj}}{\partial {\boldsymbol}x^2} \right] \right|^{\frac{1}{2}}$. Indeed, this was van Vleck’s discovery before quantum mechanics was formalized [@Van28]. The relative phases of different classical contributions are no longer a concern and the higher order terms only weigh such contributions appropriately.
Here we are interested in propagating between Gaussian states in the center representation. In continuous systems, a Gaussian state in $n$ dimensions can be defined as: $$\label{eq:Psi}
\Psi_\beta({\boldsymbol}q) = \left[\pi^{-n} \det\left( \operatorname{\text{Re}}{{\boldsymbol}\Sigma}_\beta \right) \right]^{\frac{1}{4}} \exp \left( \varphi \right),$$ where $$\varphi = \frac{i}{\hbar} {\boldsymbol}p_\beta \cdot \left( {\boldsymbol}q - {{\boldsymbol}q}_\beta \right) - \frac{1}{2} \left( {\boldsymbol}q - {{\boldsymbol}q}_\beta \right)^T {{\boldsymbol}\Sigma}_\beta \left( {\boldsymbol}q - {\boldsymbol}q_\beta \right).$$ ${\boldsymbol}q_\beta \in \mathbb{R}^n$ is the central position, ${\boldsymbol}p_\beta \in \mathbb{R}^n$ is the central momentum, and ${\boldsymbol}\Sigma_\beta$ is a symmetric $n\times n$ matrix, where $\operatorname{\text{Re}}{\boldsymbol}\Sigma_\beta$ is inversely proportional to the squared spread of the Gaussian and $\operatorname{\text{Im}}{\boldsymbol}\Sigma_\beta$ captures $p$-$q$ correlation.
This state reduces to momentum states ($\delta({\boldsymbol}p - {\boldsymbol}p_\beta)$ in the momentum representation) when ${\boldsymbol}\Sigma_\beta \rightarrow 0$ and to position states $\delta({\boldsymbol}q- {\boldsymbol}q_\beta)$ when ${\boldsymbol}\Sigma_\beta \rightarrow \infty$. Rotations between these two cases correspond to $\operatorname{\text{Re}}{\boldsymbol}\Sigma_\beta = 0$ and $\operatorname{\text{Im}}{\boldsymbol}\Sigma_\beta \ne 0$.
Gaussians remain Gaussians under evolution by a harmonic Hamiltonian, even if it is time-dependent. This can be shown by simply making the ansatz that the state remains a Gaussian and then solving for its time-dependent ${\boldsymbol}\Sigma_\beta$, ${\boldsymbol}p_\beta$, ${\boldsymbol}q_\beta$ and phase from the time-dependent Schrödinger equation [@Tannor07], or just by applying the analytically known Feynman path integral, which is equivalent to the van Vleck path integral, to a Gaussian [@Heller91].
Moreover, with the propagator in the center representation known to have only one saddle-point contribution for a harmonic Hamiltonian, it is fairly straightforward to show that this is also true for its coherent state representation (that is, taking a Gaussian to another Gaussian). Applying the propagator to an initial and final Gaussian in the center representation $$\begin{aligned}
&& \left[ {{\left|\Psi_\beta\right\rangle}{\left\langle \Psi_\beta\right|}} U_t {{\left|\Psi_\alpha\right\rangle}{\left\langle \Psi_\alpha\right|}} \right]_x ({\boldsymbol}x) = \\
&& \left( \pi \hbar \right)^{-3n} \int^\infty_{-\infty} \mbox d {\boldsymbol}x_1 \int^\infty_{-\infty} \mbox d {\boldsymbol}x_2 \, U_t({\boldsymbol}x_1 + {\boldsymbol}x_2- {\boldsymbol}x) \nonumber\\
&& \qquad \qquad \qquad \times {\Psi_\beta}_x({\boldsymbol}x_2) {\Psi_\alpha}_x ({\boldsymbol}x_1) \nonumber\\
&& \qquad \qquad \qquad \times e^{2\frac{i}{\hbar}({\boldsymbol}x_1^T {\boldsymbol}{\mathcal J} {{\boldsymbol}x_2} + {\boldsymbol}x_2^T {\boldsymbol}{\mathcal J} {{\boldsymbol}x} + {\boldsymbol}x^T {\boldsymbol}{\mathcal J} {{\boldsymbol}x_1})}, \nonumber\end{aligned}$$ we see that since $U_t$ is a Gaussian from Eq. \[eq:contcenterrepvVMG\] ($S_{tj}$ is quadratic for harmonic Hamiltonians) and since the Wigner representations of the Gaussians, ${\Psi_\alpha}_x$ and ${\Psi_\beta}_x$, are also known to be Gaussians, the full integral in the above equation is a Gaussian integral and thus evaluates to produce a Gaussian with a prefactor. This is equivalent to evaluating the integral by the method of steepest descents which finds the saddle points to be the points that satisfy ${\frac{\partial \phi}{\partial {\boldsymbol}x}} = 0$ where $\phi$ is the phase of the integrand’s argument. Since this argument is quadratic, its first derivative is linear and so again there is only one unique saddle point.
Indeed, such an evaluation produces the coherent state representation of the vVMG propagator [@Tomsovic15]. Just as we found with the center representation, the absolute value of its prefactor corresponds to the order $\hbar^0$ term.
As a consequence of this single contribution at order $\hbar^0$, it follows that the Wigner function of a state, $\Psi_x({\boldsymbol}x)$, evolves under the operator $\hat V$ with an underlying harmonic Hamiltonian by $\Psi_x({\boldsymbol}{\mathcal M}_{\hat V}({\boldsymbol}x + {\boldsymbol}\alpha_{\hat V}/2) + {\boldsymbol}\alpha_{\hat V}/2)$, where ${\boldsymbol}{\mathcal M}_{\hat V}$ is the symplectic matrix and ${\boldsymbol}\alpha_{\hat V}$ is the translation vector associated with $\hat V$’s action [@Rivas99].
Before proceeding to the discrete case, we note that the center representation that we have defined allows for a particularly simple way to express how far the path integral treatment must be expanded in $\hbar$ in order to describe *any* unitary propagation (not necessarily harmonic) in continuous quantum mechanics.
Reflections and translations can also be described by truncating Eq. \[eq:contcenterrepvVMG\] at order $\hbar^1$ (or $\hbar^0$ if overall phase isn’t important) since they correspond to evolution under a harmonic Hamiltonian. In particular, translations are displacements along a chord ${\boldsymbol}\xi$ and so have Hamiltonians $H \propto {\boldsymbol}\xi_q \cdot {\boldsymbol}p - {\boldsymbol}\xi_p \cdot {\boldsymbol}q$. Reflections are symplectic rotations around a center ${\boldsymbol}x$ and so have Hamiltonians $H \propto \frac{\pi}{4}\left[ ({\boldsymbol}p- {\boldsymbol}x_p)^2 + ({\boldsymbol}q- {\boldsymbol}x_q)^2 \right]$.
From Eq. \[eq:contsupofreflections\] we see that any operator can be expressed as an infinite Riemann sum of reflections. Therefore, since reflections are fully described by a truncation at order $\hbar^1$, it follows that an infinite Riemann sum of path integral solutions truncated at order $\hbar^1$ can describe any unitary evolution. The same statement can be made by considering the chord representation in terms of translations.
Hence, quantum propagation in continuous systems can be fully treated by an infinite sum of contributions from a path integral approach truncated at order $\hbar^1$.
As an aside, in general this infinite sum isn’t convergent and so it is often more useful to consider reformulations that involve a sum with a finite number of contributions. One way to do this is to apply the method of steepest descents *directly* on the operator of interest and use the area between saddle points as the metric to determine the order of $\hbar$ necessary, instead of dealing with an infinity of reflections (or translations). This results in the semiclassical propagator already presented, but associated with the full Hamiltonian instead of a sum of reflection Hamiltonians.
In summary, we have explained why propagation between Gaussian states under Hamiltonians that are harmonic is simulable classically (i.e. up to order $\hbar^0$) in continuous systems. We will see that the same situation holds in discrete systems for stabilizer states, with the additional restriction that the propagation takes the phase space points, which are now discrete, to themselves.
Discrete Center-Chord Representation {#sec:discrete}
====================================
We now proceed to the discrete case and introduce the center-chord formalism for these systems. It will be useful for us to define a pair of conjugate degrees of freedom $p$ and $q$ for discrete systems. Unfortunately, this is not as straightforward as in the continuous case, since the usual canonical commutation relations cannot hold in a finite-dimensional Hilbert space where the operators are bounded (since $\operatorname{Tr}[\hat p, \hat q] = 0$).
We begin in one degree of freedom. We label the computational basis for our system by $n \in \{0, 1, \ldots, d-1\}$; we assume that $d$ is odd for the rest of this paper. We identify the discrete position basis with the computational basis and define the “boost” operator as diagonal in this basis: $$\hat {Z}^{\delta p} {\left|n\right\rangle} \equiv \omega^{n \delta p} {\left|n\right\rangle},$$ where $\omega$ will be defined below.
We define the normalized discrete Fourier transform operator, the $d$-dimensional generalization of the Hadamard gate: $$\hat {F} = \frac{1}{\sqrt{d}}\sum_{m,n \in \mathbb{Z}/d\mathbb{Z}} \omega^{m n} {\left|m\right\rangle}{\left\langle n\right|}.
\label{eq:hadamard}$$ This allows us to define the Fourier transform of $\hat {Z}$: $$\hat {X} \equiv \hat {F}^\dagger \hat {Z} \hat {F}$$ Again, as before, we call $\hat {X}$ the “shift” operator since $$\hat {X}^{\delta q} {\left|n\right\rangle} \equiv {\left|n\oplus\delta q\right\rangle},$$ where $\oplus$ denotes mod-$d$ integer addition. It follows that the Weyl relation holds again: $$\hat {Z} \hat {X} = \omega \hat {X} \hat {Z}.$$
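These operators are easy to construct explicitly. A short numpy sketch for a single qudit (illustrative choice $d = 5$) verifies that $\hat X = \hat F^\dagger \hat Z \hat F$ acts as a cyclic shift and that the discrete Weyl relation holds:

```python
import numpy as np

d = 5                                     # any odd dimension works
omega = np.exp(2j * np.pi / d)
n = np.arange(d)

Z = np.diag(omega ** n)                   # boost: diagonal in position basis
F = omega ** np.outer(n, n) / np.sqrt(d)  # discrete Fourier transform
X = F.conj().T @ Z @ F                    # shift, defined via conjugation

# X should act as the cyclic shift |n> -> |n + 1 mod d>:
shift = np.roll(np.eye(d), 1, axis=0)
```

Running the checks below confirms that conjugating the boost by the discrete Fourier transform produces exactly the mod-$d$ shift, with the Weyl phase $\omega$ appearing when the order of the two is exchanged.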
The group generated by $\hat {Z}$ and $\hat {X}$ has a $d$-dimensional irreducible representation only if $\omega^d=1$ for odd $d$. Equivalently, there are only reflections relating any two phase space points on the Weyl phase space “grid” if $d$ is odd [@Rivas00]. We take $\omega \equiv \omega(d) = e^{2 \pi i/d}$ [@Sun92]. This was introduced by Weyl [@Weyl32].
Note that this means that $\hbar = \frac{d}{2 \pi}$, or $h = d$. The classical regime is therefore most closely reached when the dimensionality of the system is reduced ($d \rightarrow 0$), and thus the most “classical” system we can consider here is a qutrit (since we keep $d$ odd and greater than one). This is the opposite limit to that considered by many other approaches, where the classical regime is reached when $d \rightarrow \infty$.
One way of interpreting the classical limit in this paper is by considering $h$ to be equal to the inverse of the density of states in phase space (i.e. in a Wigner unit cell). As $\hbar \rightarrow 0$, the phase space area decreases as $\hbar^2$ but the number of states only decreases as $\hbar$, leading to an overall density that grows as $\hbar^{-1}$. This agrees with the notion that states should become point particles of fixed mass in the classical limit.
By analogy with continuous finite translation operators, we reexpress the shift $\hat {X}$ and boost $\hat {Z}$ operators in terms of conjugate $\hat { p}$ and $\hat { q}$ operators:\
$$\hat {Z} {\left|n\right\rangle} = e^{\frac{2 \pi i }{d}\hat q} {\left|n\right\rangle} = e^{\frac{2 \pi i}{d} n} {\left|n\right\rangle},$$ and $$\hat {X} {\left|n\right\rangle} = e^{-\frac{2 \pi i }{d} \hat p} {\left|n\right\rangle} = {\left|n\oplus1\right\rangle}.$$ Hence, in the diagonal “position” representation for $\hat Z$: $$\hat {Z} = \left( \begin{array}{cccccc} 1 & 0 & 0 & \cdots & 0\\
0 & e^{\frac{2 \pi i}{d}} & 0 & \cdots & 0\\
0 & 0 & e^{\frac{4 \pi i}{d}} & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & 0 & e^{\frac{2 (d-1) \pi i}{d}} \end{array} \right),$$ and $$\hat {X} = \left( \begin{array}{cccccc} 0 & 0 & \cdots & 0 & 1\\
1 & 0 & \cdots & 0 & 0\\
0 & 1 & 0 & \cdots & 0\\
\vdots & \ddots & \ddots & \ddots & \vdots\\
0 & \cdots & 0 & 1 & 0 \end{array} \right).$$
Thus, $$\hat {q} = \frac{d}{2 \pi i} \log \hat {Z} = \sum_{n \in \mathbb{Z}/d\mathbb{Z}} n {\left|n\right\rangle} {\left\langle n\right|},$$ and $$\hat { p} = \hat {F}^\dagger \hat { q} \hat {F}.$$
Therefore, we can interpret the operators $\hat { p}$ and $\hat { q}$ as a conjugate pair similar to conjugate momenta and position in the continuous case. However, they differ from the latter in that they only obey the weaker *group* commutation relation $$e^{i j \hat { q}/\hbar}e^{i k \hat { p}/\hbar} e^{-i j \hat { q}/\hbar}e^{-i k \hat { p}/\hbar} = e^{-i jk /\hbar} \hat {\mathbb I}.
\label{eq:groupcommrel}$$ This corresponds to the usual canonical commutation relation for $p$ and $q$’s algebra at the origin of the Lie group ($j = k = 0$); expanding both sides of Eq. \[eq:groupcommrel\] to lowest nontrivial order in $j$ and $k$ yields the usual canonical relation $[\hat q, \hat p] = i \hbar$.
We proceed to introduce the Weyl representation of operators and states in discrete Hilbert spaces with odd dimension $d$ and $n$ degrees of freedom [@Wootters87; @Wootters03; @Gibbons04]. The generalized phase space translation operator (the Weyl operator) is defined as a product of the shift and boost with a phase appropriate to the $d$-dimensional space: $$\hat {T}({\boldsymbol}\xi_p, {\boldsymbol}\xi_q) = e^{-i \frac{\pi}{d} {\boldsymbol}\xi_p \cdot {\boldsymbol}\xi_q} \hat {Z}^{ {\boldsymbol}\xi_p} \hat {X}^{ {\boldsymbol}\xi_q},$$ where ${\boldsymbol}\xi \equiv ({\boldsymbol}\xi_p, {\boldsymbol}\xi_q) \in (\mathbb{Z} / d \mathbb{Z})^{2n}$ and form a discrete “web” or “grid” of chords. They are a discrete subset of the continuous chords we considered in the infinite-dimensional context in Section \[sec:contcenterchord\] and their finite number is an important consequence of the discretization of the continuous Weyl formalism.
Again, an operator $\hat {A}$ can be expressed as a linear combination of translations: $$\hat {A} = d^{-n} \sum_{\substack{{\boldsymbol}\xi_p, {\boldsymbol}\xi_q \in \\ (\mathbb{Z} / d \mathbb{Z})^{ n}}} A_\xi({\boldsymbol}\xi_p, {\boldsymbol}\xi_q) \hat {T}({\boldsymbol}\xi_p, {\boldsymbol}\xi_q),$$ where the weights are the chord representation of the operator $\hat {A}$: $${A}_\xi({\boldsymbol}\xi_p, {\boldsymbol}\xi_q) = d^{-n} \operatorname{Tr}\left( \hat {T}({\boldsymbol}\xi_p, {\boldsymbol}\xi_q)^\dagger \hat {A} \right).$$ When applied to a state $\hat \rho$, this is also called the “characteristic function” of $\hat \rho$ [@Ferrie11].
As before, the center representation, based on reflections instead of translations, requires an appropriately defined reflection operator. We can define the discrete reflection operator $\hat {R}$ as the symplectic Fourier transform of the discrete translation operator we just introduced: $$\hat {R}({\boldsymbol}x_p, {\boldsymbol}x_q) = d^{-n} \sum_{\substack{{\boldsymbol}\xi_p, {\boldsymbol}\xi_q \in \\ (\mathbb{Z} / d \mathbb{Z})^{ n}}} e^{\frac{2 \pi i}{d} ({\boldsymbol}\xi_p, {\boldsymbol}\xi_q) {\boldsymbol}{\mathcal J} ({\boldsymbol}x_p, {\boldsymbol}x_q)^T} \hat {T}({\boldsymbol}\xi_p, {\boldsymbol}\xi_q).$$
With this in hand, we can now express a finite-dimensional operator $\hat {A}$ as a superposition of reflections: $$\hat {A} = d^{-n} \sum_{\substack{{\boldsymbol}x_p, {\boldsymbol}x_q \in \\ (\mathbb{Z} / d \mathbb{Z})^{ n}}} {A}_x({\boldsymbol}x_p, {\boldsymbol}x_q) \hat {R}( {\boldsymbol}x_p, {\boldsymbol}x_q ),
\label{eq:discretesupofreflections}$$ where $${A}_x({\boldsymbol}x_p, {\boldsymbol}x_q) = d^{-n} \operatorname{Tr}\left( \hat {R}({\boldsymbol}x_p, {\boldsymbol}x_q)^\dagger \hat {A} \right).
\label{eq:weylfunction}$$ ${\boldsymbol}x \equiv ({\boldsymbol}x_p, {\boldsymbol}x_q) \in (\mathbb{Z} / d \mathbb{Z})^{2n}$ are centers or Weyl phase space points and, like their $({\boldsymbol}\xi_p, {\boldsymbol}\xi_q)$ brethren, form a discrete subgrid of the continuous Weyl phase space points considered in Section \[sec:contcenterchord\].
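As an illustration (our own sketch, not from the text), the translation and reflection operators can be built explicitly for $n = 1$ and small odd $d$. We use the half-integer phase $\omega^{-(d+1)\xi_p\xi_q/2}$, a standard odd-$d$ choice equivalent to $e^{-i\pi\xi_p\xi_q/d}$ up to a sign convention, and a self-consistent normalization for the reconstruction (which may differ from the text's by powers of $d$):

```python
import numpy as np

d = 5                        # odd dimension, n = 1 degree of freedom
w = np.exp(2j * np.pi / d)
half = (d + 1) // 2          # multiplicative inverse of 2 mod d
Z = np.diag(w ** np.arange(d))
X = np.roll(np.eye(d), 1, axis=0)

def T(sp, sq):
    """Weyl translation; half-integer phase w^{-half*sp*sq} (assumed convention)."""
    return w ** (-half * sp * sq) * np.linalg.matrix_power(Z, sp) @ np.linalg.matrix_power(X, sq)

def R(xp, xq):
    """Reflection operator: symplectic Fourier transform of the translations."""
    return sum(w ** (sp * xq - sq * xp) * T(sp, sq)
               for sp in range(d) for sq in range(d)) / d

# R(0,0) acts as parity, |j> -> |-j mod d>, and each R(x) is an involution
P = R(0, 0)
for j in range(d):
    e = np.zeros(d); e[j] = 1
    assert np.allclose(P @ e, np.roll(e[::-1], 1))
assert np.allclose(R(2, 3) @ R(2, 3), np.eye(d))

# Reconstruction: A = (1/d) * sum_x Tr(R(x)^dag A) R(x)  (self-consistent normalization)
rng = np.random.default_rng(1)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A_rec = sum(np.trace(R(xp, xq).conj().T @ A) * R(xp, xq)
            for xp in range(d) for xq in range(d)) / d
assert np.allclose(A, A_rec)
print("reflection operators and center-representation reconstruction verified")
```

The fact that `R(0, 0)` comes out as the parity operator makes the name “reflection” concrete: every $\hat R({\boldsymbol}x)$ is a parity conjugated to the point ${\boldsymbol}x$.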
Again, the center representation is of particular interest to us because for unitary gates $\hat A$ we can rewrite the components ${A}_x({\boldsymbol}x_p, {\boldsymbol}x_q)$ as: $${A}_x({\boldsymbol}x_p, {\boldsymbol}x_q) = \exp \left[\frac{i}{\hbar} S({\boldsymbol}x_p, {\boldsymbol}x_q)\right]$$ where $S({\boldsymbol}x_p, {\boldsymbol}x_q)$ is the action of the operator in the center representation (the center generating function).
Aside from Eq. \[eq:weylfunction\], the center representation of a state $\hat \rho$ can also be directly defined as the symplectic Fourier transform of its chord representation, $\rho_\xi$ [@Gross06]: $$\rho_x({\boldsymbol}x_p, {\boldsymbol}x_q) = d^{-n} \sum_{\substack{{\boldsymbol}\xi_p, {\boldsymbol}\xi_q \in \\ (\mathbb{Z} / d \mathbb{Z})^{ n}}} e^{\frac{2 \pi i}{d} ({\boldsymbol}\xi_p, {\boldsymbol}\xi_q) {\boldsymbol}{\mathcal J} ({\boldsymbol}x_p, {\boldsymbol}x_q)^T} \rho_\xi({\boldsymbol}\xi_p,{\boldsymbol}\xi_q).
\label{eq:weylfunction2}$$
We note again that for a pure state ${\left|\Psi\right\rangle}$, the Wigner function from Eqs. \[eq:weylfunction\] and \[eq:weylfunction2\] simplifies to:
$$\begin{aligned}
\label{eq:weylfunctionpurestatediscrete}
{\Psi}_x({\boldsymbol}x_p, {\boldsymbol}x_q) &=& d^{-n} \sum_{\substack{{\boldsymbol}\xi_q \in \\(\mathbb{Z} / d \mathbb{Z})^{ n}}} e^{-\frac{2 \pi i}{d} {\boldsymbol}\xi_q \cdot {\boldsymbol}x_p} \Psi \left( {\boldsymbol}x_q + \frac{(d+1) {\boldsymbol}\xi_q }{2} \right) {\Psi^*} \left( {\boldsymbol}x_q - \frac{(d+1) {\boldsymbol}\xi_q}{2} \right).\end{aligned}$$
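For concreteness, here is a direct numerical implementation of this pure-state formula for $n = 1$ (a sketch with our own conventions; `half` below is $(d+1)/2$, the inverse of 2 mod $d$). It confirms the standard properties that the resulting Wigner function is real, normalized, and reproduces the position marginal:

```python
import numpy as np

d = 7
half = (d + 1) // 2          # inverse of 2 mod d

def wigner(psi):
    """Discrete Wigner function of a pure state (n = 1)."""
    W = np.zeros((d, d), dtype=complex)
    for xp in range(d):
        for xq in range(d):
            for sq in range(d):
                W[xp, xq] += (np.exp(-2j * np.pi * sq * xp / d)
                              * psi[(xq + half * sq) % d]
                              * np.conj(psi[(xq - half * sq) % d]))
    return W / d

rng = np.random.default_rng(2)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
W = wigner(psi)

assert np.allclose(W.imag, 0)                              # real
assert np.allclose(W.real.sum(axis=0), np.abs(psi) ** 2)   # position marginal
assert np.isclose(W.real.sum(), 1.0)                       # normalization
print("Wigner function is real with correct marginals")
```

Unlike the continuous case, nothing here requires smoothing or limits: the sum over $\boldsymbol\xi_q$ is finite.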
As this short presentation suggests, the chord and center representations are dual to each other. A thorough review of this subject can be found in [@Almeida98].
Path Integral Propagation in Discrete Systems {#sec:discretesemi}
=============================================
Rivas and Almeida [@Rivas99] found that the continuous infinite-dimensional vVMG propagator can be extended to finite Hilbert space by simply projecting it onto its finite phase space tori. This produces: $$\begin{aligned}
\label{eq:discretecenterrepvVMG}
&& {U}_t({\boldsymbol}x) = \\
&& \left< \sum_j \left\{ \det \left[ 1 + \frac{1}{2} {\boldsymbol}{\mathcal J} \frac{\partial^2 S_{tj}}{\partial {\boldsymbol}{x}^2} \right] \right\}^{\frac{1}{2}} e^{\frac{i}{\hbar} S_{tj}({\boldsymbol}x)} e^{i \theta_k} \right>_k + \mathcal{O}(\hbar^2),\nonumber
\end{aligned}$$ where an additional average must be taken over the $k$ center points that are equivalent because of the periodic boundary conditions. Maintaining periodicity requires that they accrue a phase $\theta_k$ [^2]. The derivative $\frac{\partial^2 S_{tj}}{\partial {\boldsymbol}{x}^2}$ is performed over the continuous function $S_{tj}$ defined after Eq. \[eq:contcenterrepvVMG\], but only evaluated at the discrete Weyl phase space points ${\boldsymbol}x \equiv ({\boldsymbol}x_p, {\boldsymbol}x_q)\in \left(\mathbb{Z}/d\mathbb{Z}\right)^{2n}$. The prefactor can also be reexpressed: $$\left\{ \det \left[ 1 + \frac{1}{2} {\boldsymbol}{\mathcal J} \frac{\partial^2 S_{tj}}{\partial {\boldsymbol}{x}^2} \right] \right\}^{\frac{1}{2}} = \left\{ 2^d \det \left[ 1 + {\boldsymbol}{\mathcal M_{tj}} \right] \right\}^{-\frac{1}{2}},$$ which is perhaps more pleasing in the discrete case as it does not involve a continuous derivative.
As in the continuous case, for a harmonic Hamiltonian $H({\boldsymbol}p, {\boldsymbol}q)$, the center generating function $S({\boldsymbol}x_p, {\boldsymbol}x_q)$ is equal to ${\boldsymbol}{\alpha}^T {\boldsymbol}{\mathcal J} {\boldsymbol}x + {\boldsymbol}{x}^T {\boldsymbol}{\mathcal B} {\boldsymbol}x$ where Eq. \[eq:cayleyparam\] and Eq. \[eq:quadmap\] hold. Moreover, if the Hamiltonian takes Weyl phase space points to themselves, then by the same equations it follows that ${\boldsymbol}{\mathcal M}$ and ${\boldsymbol}\alpha$ must have integer entries.
This implies that for ${\boldsymbol}m, {\boldsymbol}n \in \mathbb{Z}^{ n}$, $$\begin{aligned}
&& {\boldsymbol}{\mathcal M} \left(\begin{array}{c}{\boldsymbol}p+ {\boldsymbol}m d + {\boldsymbol}\alpha_p/2\\{\boldsymbol}q + {\boldsymbol}n d + {\boldsymbol}\alpha_q/2\end{array}\right) + \left( \begin{array}{c} {\boldsymbol}\alpha_p/2\\ {\boldsymbol}\alpha_q/2 \end{array} \right) \nonumber\\
\label{eq:harmonicM}
&=& {\boldsymbol}{\mathcal M} \left(\begin{array}{c}{\boldsymbol}p + {\boldsymbol}\alpha_p/2 \\ {\boldsymbol}q + {\boldsymbol}\alpha_q/2 \end{array}\right) + \left( \begin{array}{c} {\boldsymbol}\alpha_p/2\\ {\boldsymbol}\alpha_q/2 \end{array} \right) + d {\boldsymbol}{\mathcal M}\left(\begin{array}{c}{\boldsymbol}m\\ {\boldsymbol}n\end{array}\right) \\
&=& \left(\begin{array}{c}{\boldsymbol}p'\\ {\boldsymbol}q'\end{array}\right) \mod {\boldsymbol}d. \nonumber\end{aligned}$$ Therefore, phase space points $({\boldsymbol}p, {\boldsymbol}q)$ that lie on Weyl phase space points are mapped to equivalent Weyl phase space points $({\boldsymbol}p', {\boldsymbol}q')$.
Moreover, again if ${\boldsymbol}m, {\boldsymbol}n \in \mathbb{Z}^n$, $$\begin{aligned}
&& S({\boldsymbol}x_p + {\boldsymbol}m d, {\boldsymbol}x_q + {\boldsymbol}n d) \nonumber\\
&=& \left( \begin{array}{c}{\boldsymbol}x_p+ {\boldsymbol}m d\\ {\boldsymbol}x_q+ {\boldsymbol}n d\end{array} \right)^T {\boldsymbol}A \left( \begin{array}{c}{\boldsymbol}x_p+ {\boldsymbol}m d\\ {\boldsymbol}x_q+ {\boldsymbol}n d\end{array} \right) \nonumber\\
&& + {\boldsymbol}b \cdot \left( \begin{array}{c}{\boldsymbol}x_p+ {\boldsymbol}m d\\ {\boldsymbol}x_q+ {\boldsymbol}n d\end{array} \right)\\
&=& \left( \begin{array}{c}{\boldsymbol}x_p\\ {\boldsymbol}x_q\end{array} \right)^T {\boldsymbol}A \left( \begin{array}{c}{\boldsymbol}x_p\\ {\boldsymbol}x_q\end{array} \right) + {\boldsymbol}b \cdot \left( \begin{array}{c}{\boldsymbol}x_p\\ {\boldsymbol}x_q\end{array} \right) \nonumber\\
&& + d \left[ 2 \left( \begin{array}{c}{\boldsymbol}x_p\\ {\boldsymbol}x_q\end{array} \right)^T {\boldsymbol}A \left( \begin{array}{c}{\boldsymbol}m\\ {\boldsymbol}n\end{array} \right) \right.\nonumber\\
&& \qquad \left.+ d \left( \begin{array}{c}{\boldsymbol}m\\ {\boldsymbol}n\end{array} \right)^T {\boldsymbol}A \left( \begin{array}{c}{\boldsymbol}m\\ {\boldsymbol}n\end{array} \right) + {\boldsymbol}b \cdot \left( \begin{array}{c}{\boldsymbol}m\\ {\boldsymbol}n\end{array} \right)\right] \nonumber\\
\label{eq:harmonicS}
&=& S({\boldsymbol}x_p, {\boldsymbol}x_q) \mod d, \nonumber\end{aligned}$$ for some symmetric ${\boldsymbol}A \in \mathbb{Z}^{n\times n}$ and ${\boldsymbol}b \in \mathbb{Z}^{n}$. Therefore, these equivalent trajectories also have equivalent actions (since the action is multiplied by $\frac{2 \pi i}{d}$ and exponentiated).
Hence, there is only one term in the sum in Eq. \[eq:discretecenterrepvVMG\]. Moreover, [@Rivas00] showed that the sum over the phases $\theta_k$ produces only a global phase that can be factored out. Therefore, up to this overall phase, $${U}_t({\boldsymbol}x) = \left| 2^d \det \left[ 1 + {\boldsymbol}{\mathcal M} \right] \right|^{-\frac{1}{2}} e^{\frac{i}{\hbar} S_{t}({\boldsymbol}x)},
\label{eq:discretesemiproplowestorder}$$ where the classical trajectories whose centers are $({\boldsymbol}x_p, {\boldsymbol}x_q)$ satisfy the periodic boundary conditions.
As in the continuous case, this means that translations and reflections are fully captured by a path integral treatment truncated at order $\hbar^1$ (or at order $\hbar^0$ if their overall phase is unimportant), because their Hamiltonians are harmonic. In the discrete case, there is the additional requirement that they be evaluated at chords/centers that take Weyl phase space points to themselves.
Just as in the continuous case, the single contribution at order $\hbar^0$ implies that the propagator of the Wigner function of states, ${{\left|\Psi\right\rangle}{\left\langle \Psi\right|}}_x({\boldsymbol}x)$ under gates $\hat V$ with underlying harmonic Hamiltonians is captured by ${{\left|\Psi\right\rangle}{\left\langle \Psi\right|}}_x({\boldsymbol}{\mathcal M}_{\hat V} ({\boldsymbol}x + {\boldsymbol}\alpha_{\hat V}/2) + {\boldsymbol}\alpha_{\hat V}/2)$ for ${\boldsymbol}{\mathcal M}_{\hat V}$ and ${\boldsymbol}\alpha_{\hat V}$ associated with $\hat V$.
Stabilizer Group {#sec:stabilizergroup}
================
Here we will show that the Hamiltonians corresponding to Clifford gates are harmonic and take Weyl phase space points to themselves. Thus they can be captured by only the single contribution of Eq. \[eq:discretesemiproplowestorder\] at lowest order in $\hbar$. This then implies that stabilizer states can also be propagated to each other by Clifford gates with only a single contribution to the sum in Eq. \[eq:discretecenterrepvVMG\].
The Clifford gate set of interest can be defined by three generators: a single qudit Hadamard gate $\hat{F}$ and phase shift gate $\hat{P}$, as well as the two qudit controlled-not gate $\hat{C}$. We examine each of these in turn.
Hadamard Gate
-------------
The Hadamard gate was defined in Eq. \[eq:hadamard\] and is a rotation by $\frac{\pi}{2}$ in phase space counter-clockwise. Hence, for one qudit, it can be written as the map in Eq. \[eq:harmonicM\] where $$\label{eq:stabmathad}
{\boldsymbol}{\mathcal M}_{\hat {F}} = \left( \begin{array}{cc} 0 & 1\\ -1 & 0 \end{array} \right),$$ and ${\boldsymbol}\alpha_{\hat {F}} = (0,0)$. We have set $t=1$ and drop it from the subscripts from now on. Since ${\boldsymbol}\alpha$ is vanishing and ${\boldsymbol}{\mathcal M}$ has integer entries, this is a cat map and such maps have been shown to correspond to Hamiltonians [@Keating91] $$\begin{aligned}
\label{eq:hadhamiltonian}
&&H(p,q) =\\
&&f(\operatorname{Tr}{\boldsymbol}{\mathcal M} ) \left[ \mathcal M_{12} p^2 - \mathcal M_{21} q^2 + \left(\mathcal M_{11} - \mathcal M_{22} \right) pq \right],\nonumber\end{aligned}$$ where $$f(x) = \frac{\sinh^{-1}(\frac{1}{2}\sqrt{x^2-4})}{\sqrt{x^2-4}}.$$ For the Hadamard ${\boldsymbol}{\mathcal M}_{\hat {F}}$ this corresponds to $H_{\hat {F}} = \frac{\pi}{4} (p^2 + q^2)$, a harmonic oscillator. The center generating function $S(x_p, x_q)$ is thus $(x_p, x_q ) {\boldsymbol}{\mathcal B} (x_p, x_q)^T$ and solving Eq. \[eq:cayleyparam\] finds for the one-qudit Hadamard, $${\boldsymbol}{\mathcal{B}}_{\hat {F}} = \left(\begin{array}{cc}1 & 0\\ 0 & 1 \end{array} \right).$$ Thus, $S_{\hat {F}}(x_p, x_q) = x_p^2 + x_q^2$. Indeed, applying Eq. \[eq:weylfunction\] to Eq. \[eq:hadamard\] reveals that the Hadamard’s center function (up to a phase) is: $${F}_x(x_p,x_q) = e^{\frac{2 \pi i}{d}(x_p^2+x_q^2)}.
\label{eq:hadamardcenterfn}$$
Eq. \[eq:stabmathad\] shows how Weyl phase space maps to itself under the Hadamard transformation. Furthermore, this map acts pointwise on phase space, which is reflected in the quadratic form of the center generating function obtained in Eq. \[eq:hadamardcenterfn\].
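The rotation encoded in ${\boldsymbol}{\mathcal M}_{\hat F}$ can also be checked at the operator level. In the sketch below we assume the Fourier convention $\hat F{\left|j\right\rangle} = d^{-1/2}\sum_k \omega^{jk}{\left|k\right\rangle}$ (our assumption; the text's Eq. \[eq:hadamard\] may differ by a sign). Conjugation then maps the translation generators into each other exactly as a quarter turn should: the $q$-shift $(\xi_p,\xi_q)=(0,1)$ goes to $(1,0)$, and $(1,0)$ goes to $(0,-1)$:

```python
import numpy as np

d = 5
w = np.exp(2j * np.pi / d)
Z = np.diag(w ** np.arange(d))
X = np.roll(np.eye(d), 1, axis=0)          # X|j> = |j+1 mod d>
# Discrete Fourier (Hadamard) matrix, convention assumed: F|j> ~ sum_k w^{jk} |k>
F = np.array([[w ** (j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)

assert np.allclose(F @ F.conj().T, np.eye(d))              # F is unitary
assert np.allclose(F @ X @ F.conj().T, Z)                  # q-shift -> boost
assert np.allclose(F @ Z @ F.conj().T, np.linalg.inv(X))   # boost -> inverse q-shift
print("Hadamard conjugation realizes a quarter rotation of phase space")
```

These two conjugation relations are precisely the action of ${\boldsymbol}{\mathcal M}_{\hat F}$ on the chord labels of $\hat X = \hat T(0,1)$ and $\hat Z = \hat T(1,0)$ (up to phases).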
Phase Shift Gate
----------------
The phase shift gate can be generalized to odd $d$-dimensions [@Gottesman99] by setting it to: $$\hat{P} = \sum_{j \in \mathbb{Z}/d \mathbb{Z}} \omega^{\frac{(j-1)j}{2}} {{\left|j\right\rangle}{\left\langle j\right|}}.
\label{eq:phaseshift}$$ Examining its effect on stabilizer states, it is clear that it is a $q$-shear in phase space from an origin displaced by $\frac{d-1}{2}\equiv-\frac{1}{2}$ to the right. This can be expressed as the map in Eq. \[eq:harmonicM\] with $${\boldsymbol}{\mathcal M}_{\hat {P}} = \left( \begin{array}{cc} 1 & 1\\ 0 & 1 \end{array} \right),$$ and ${\boldsymbol}\alpha_{\hat {P}} = \left( -\frac{1}{2}, 0 \right)$.
This corresponds to $${\boldsymbol}{\mathcal B}_{\hat {P}} = \left( \begin{array}{cc} 0 & 0\\ 0 & \frac{1}{2} \end{array} \right).$$
Solving Eq. \[eq:quadcentgenfunction\] with this ${\boldsymbol}{\mathcal B}_{\hat {P}}$ and ${\boldsymbol}{\alpha}_{\hat {P}}$ reveals that $S_{\hat {P}}(x_p, x_q) = -\frac{1}{2} x_q + \frac{1}{2} x_q^2$. Again, this agrees with the argument of the center representation of the phase-shift gate obtained by applying Eq. \[eq:weylfunction\] to Eq. \[eq:phaseshift\]: $${P}_x(x_p,x_q) = e^{\frac{2 \pi i}{d} \frac{1}{2} (-x_q + x_q^2)}.$$ Discretizing the equations of motion for harmonic evolution over unit timesteps leads to: $$\left(\begin{array}{c}{\boldsymbol}p'\\ {\boldsymbol}q'\end{array}\right) = \left(\begin{array}{c}{\boldsymbol}p\\ {\boldsymbol}q\end{array}\right) + {\boldsymbol}{\mathcal J} \left({\frac{\partial H}{\partial {\boldsymbol}p}}, {\frac{\partial H}{\partial {\boldsymbol}q}}\right)^T,
\label{eq:harmonicevol}$$ where the last derivative is on the continuous function $H$, but only evaluated on the discrete Weyl phase space points. It follows that $$\label{eq:phaseshifthamiltonian}
H_{\hat {P}} = -\frac{d+1}{2} q^2 + \frac{d+1}{2} q.$$
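The shear can likewise be verified at the operator level (a numerical sketch; the generalized phase gate is taken from Eq. \[eq:phaseshift\]). Since ${\boldsymbol}{\mathcal M}_{\hat P}$ maps the $q$-shift chord $(\xi_p,\xi_q) = (0,1)$ to $(1,1)$, conjugating $\hat X$ by $\hat P$ should append a boost, while boosts themselves are untouched:

```python
import numpy as np

d = 5
w = np.exp(2j * np.pi / d)
Z = np.diag(w ** np.arange(d))
X = np.roll(np.eye(d), 1, axis=0)
# Generalized phase gate of Eq. (eq:phaseshift): P|j> = w^{j(j-1)/2} |j>
P = np.diag([w ** ((j - 1) * j // 2) for j in range(d)])

assert np.allclose(P @ Z @ P.conj().T, Z)        # boosts commute with P
assert np.allclose(P @ X @ P.conj().T, X @ Z)    # q-shift picks up a boost: the shear
print("phase gate conjugation realizes the q-shear M_P")
```

The exact phase in $\hat P\hat X\hat P^\dagger = \hat X\hat Z$ (rather than $\hat Z\hat X$) reflects the displaced origin ${\boldsymbol}\alpha_{\hat P}$ of the shear.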
We have obtained the Hamiltonian for the phase-shift gate by a different procedure than that used for the Hadamard where we appealed to the result given in Eq. \[eq:hadhamiltonian\] for quantum cat maps. However, as the phase-shift gate is a quantum cat map as well, we could have obtained Eq. \[eq:phaseshifthamiltonian\] in this manner. Similarly, the approach we used to find the phase-shift Hamiltonian by discretizing time in Eq. \[eq:harmonicevol\] would work for the Hadamard gate but it is a bit more involved since the latter contains both $p$- and $q$-evolution. Nevertheless, this produces Eq. \[eq:hadhamiltonian\] as well. We presented both techniques for illustrative purposes.
Controlled-Not Gate
-------------------
Lastly, the controlled-not gate can be generalized to $d$-dimensions [@Gottesman99] by $$\hat{C} = \sum_{j,k \in \mathbb{Z}/d \mathbb{Z}} {{\left|j,k \oplus j\right\rangle}{\left\langle j,k\right|}}.$$ It is clear that this translates the $q$-state of the second qudit by the $q$-state of the first qudit. As a result, as is evident by examining the gate’s action on stabilizer states, the first qudit experiences an “equal and opposite reaction” force that kicks its momentum by the momentum of the second qudit. This is the phase space picture of the well-known fact that a CNOT examined in the $\hat {X}$ basis has the control and target reversed with respect to the $\hat Z$ basis. This can also be seen by looking at its effect in the momentum ($\hat {X}$) basis: $${\hat {F}}^\dagger \hat{C} \hat {F} = \sum_{j,k \in \mathbb{Z}/d \mathbb{Z}} {{\left|j \ominus k,k\right\rangle}{\left\langle j,k\right|}}.$$ As a result, this gate is described by the map: $${\boldsymbol}{\mathcal M}_{\hat {C}} = \left( \begin{array}{cccc} 1 & -1 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 1\end{array}\right),$$ and ${\boldsymbol}\alpha_{\hat {C}} = (0,0,0,0)$. This corresponds to $${\boldsymbol}{\mathcal B}_{\hat {C}} = \left( \begin{array}{cccc} 0 & 0 & 0 & 0\\ 0 & 0 & -\frac{1}{2} & 0\\ 0 & -\frac{1}{2} & 0 & 0\\ 0 & 0 & 0 & 0\end{array}\right).$$ Hence its center generating function is $S_{\hat {C}}({\boldsymbol}x_p, {\boldsymbol}x_q) = -x_{p_2} x_{q_1}$. Again, this corresponds with the argument of the center representation of the controlled-not gate, which can be found to be: $${C}_x({\boldsymbol}x_p, {\boldsymbol}x_q) = e^{-\frac{2 \pi i}{d} x_{q_1} x_{p_2}}.$$ Therefore, this gate can be seen to be a bilinear $p$-$q$ coupling between two qudits and corresponds to the Hamiltonian $$H_{\hat {C}} = p_2 q_1,$$ as can be found from Eq. \[eq:harmonicevol\] again.
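This symplectic map can be confirmed directly (a numerical sketch, with $\hat C{\left|j,k\right\rangle} = {\left|j,k\oplus j\right\rangle}$ as above and our usual clock/shift conventions). Reading the action of ${\boldsymbol}{\mathcal M}_{\hat C}$ off the chord labels gives $\hat X_1 \mapsto \hat X_1\hat X_2$, $\hat X_2 \mapsto \hat X_2$, $\hat Z_1 \mapsto \hat Z_1$, and $\hat Z_2 \mapsto \hat Z_1^{-1}\hat Z_2$:

```python
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)
I = np.eye(d)
Z = np.diag(w ** np.arange(d))
X = np.roll(np.eye(d), 1, axis=0)
kron = np.kron   # first factor acts on the first qudit

# C|j,k> = |j, k+j mod d>
C = np.zeros((d * d, d * d))
for j in range(d):
    for k in range(d):
        C[j * d + (k + j) % d, j * d + k] = 1

def conj(U, A):
    return U @ A @ U.conj().T

assert np.allclose(conj(C, kron(X, I)), kron(X, X))                  # X1 -> X1 X2
assert np.allclose(conj(C, kron(I, X)), kron(I, X))                  # X2 -> X2
assert np.allclose(conj(C, kron(Z, I)), kron(Z, I))                  # Z1 -> Z1
assert np.allclose(conj(C, kron(I, Z)), kron(np.linalg.inv(Z), Z))   # Z2 -> Z1^-1 Z2
print("CNOT conjugation matches the symplectic map M_C")
```

The first relation is the $q$-translation of the target by the control; the last is the “equal and opposite” momentum kick on the control.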
As a result, it is now clear that all the Clifford group gates have Hamiltonians that are harmonic and that take Weyl phase space points to themselves. Therefore, their propagation can be fully described by a truncation of the semiclassical propagator Eq. \[eq:discretecenterrepvVMG\] to order $\hbar^0$ as in Eq. \[eq:discretesemiproplowestorder\] and they are manifestly classical in this sense.
To summarize the results of this section, using Eq. \[eq:discretesupofreflections\], the Hadamard, phase shift, and CNOT gates can be written as: $$\hat{F} = d^{-2} \sum_{\substack{x_p,x_q, \\ \xi_p,\xi_q \in\\ \mathbb{Z}/d \mathbb{Z}}} e^{ -\frac{2 \pi i}{d} \left[ -(x_p^2 + x_q^2) - d \left(x_p \xi_p - x_q \xi_q \right) \right] } \hat {Z}^{\xi_p} \hat {X}^{\xi_q},$$ $$\hat{P} = d^{-2} \sum_{\substack{x_p,x_q, \\ \xi_p,\xi_q \in\\ \mathbb{Z}/d \mathbb{Z}}} e^{ -\frac{2 \pi i}{d} \left[ \frac{1}{2} (x_q - x_q^2) - d \left(x_p \xi_q - x_q \xi_p \right) \right] } \hat {Z}^{\xi_p} \hat {X}^{\xi_q},$$ and $$\begin{aligned}
&&\hat{C} = \\
&& d^{-4} \sum_{\substack{{\boldsymbol}x_p, {\boldsymbol}x_q, \\ {\boldsymbol}\xi_p, {\boldsymbol}\xi_q \in \\ (\mathbb{Z}/d \mathbb{Z})^{ 2}}} e^{ -\frac{2 \pi i}{d} \left[ x_{q_1} x_{p_2} - d \left({\boldsymbol}x_p \cdot {\boldsymbol}\xi_q - {\boldsymbol}x_q \cdot {\boldsymbol}\xi_p \right) \right] } \hat {Z}^{ {\boldsymbol}\xi_p} \hat {X}^{ {\boldsymbol}\xi_q}, \nonumber\end{aligned}$$ (up to a phase). This form emphasizes their quadratic nature.
As in the continuous case, there is a particularly simple way in discrete systems to see to what order in $\hbar$ the path integral must be kept to handle unitary propagation beyond the Clifford group. We describe this in the next section.
We note that the center generating actions $S({\boldsymbol}x_p, {\boldsymbol}x_q)$ found here are related to the $G({\boldsymbol}q', {\boldsymbol}q)$ found by Penney *et al*. [@Penney16], which are in terms of initial and final positions, by a symmetrized Legendre transform [@Almeida98]: $$\label{eq:actionfromsymmetrizedlegendretransform}
G({\boldsymbol}q', {\boldsymbol}q, t) = F\left(\frac{{\boldsymbol}q' + {\boldsymbol}q}{2}, {\boldsymbol}p({\boldsymbol}q' - {\boldsymbol}q)\right),$$ where the canonical generating function $$F\left(\frac{{\boldsymbol}q' + {\boldsymbol}q}{2}, {\boldsymbol}p\right) = S\left( {\boldsymbol}x_p = {\boldsymbol}p, {\boldsymbol}x_q = \frac{{\boldsymbol}q' + {\boldsymbol}q}{2} \right) + {\boldsymbol}p \cdot \left( {\boldsymbol}q' - {\boldsymbol}q \right),$$ for ${\boldsymbol}p ({\boldsymbol}q' - {\boldsymbol}q)$ given implicitly by $\frac{\partial F}{\partial {\boldsymbol}p} = 0$.
Applying this to the actions we found reveals that $$G_{\hat {F}}(q', q, t) = q' q,$$ $$G_{\hat {P}}(q', q, t) = \frac{d+1}{2} (q^2 - q),$$ and $$G_{\hat {C}}((q'_1, q'_2), (q_1, q_2), t) = 0,$$ which is in agreement with [@Penney16].
Classicality of Stabilizer States
---------------------------------
In this subsection we show that stabilizer states evolve to stabilizer states under Clifford gates, and that it is possible to describe this evolution classically. For odd $d \ge 3$ the positivity of the Wigner representation implies that evolution of stabilizer states is non-contextual, and so here we are investigating in detail what this means in our semiclassical picture.
To begin, it is instructive to see the form stabilizer states take in the discrete position representation and in the center representation. Gross proved that [@Gross06]:
\[thm:stabstates\]Let $d$ be odd and $\Psi \in L^2((\mathbb Z/d\mathbb Z)^n)$ be a state vector. The Wigner function of $\Psi$ is non-negative if and only if $\Psi$ is a stabilizer state.
Gross also proved [@Gross06]
\[cor:stabstates\] Given that $\Psi({\boldsymbol}q) \ne 0$ $\forall\, {\boldsymbol}q$, a vector $\Psi$ is a stabilizer state if and only if it is of the form $$\Psi_{\theta_\beta,\eta_\beta}({\boldsymbol}q) \propto \exp \left[\frac{2 \pi i}{d} \left( {\boldsymbol}q^T {\boldsymbol}\theta_\beta {\boldsymbol}q + {\boldsymbol}\eta_\beta \cdot {\boldsymbol}q \right) \right].
\label{eq:stabstateposrep}$$ where ${\boldsymbol}\theta_\beta \in \left(\mathbb{Z}/d\mathbb{Z}\right)^{n\times n}$ and ${\boldsymbol}q, {\boldsymbol}\eta_\beta \in \left(\mathbb{Z}/d\mathbb{Z}\right)^n$.
Applying Eq. \[eq:weylfunctionpurestatediscrete\] to Eq. \[eq:stabstateposrep\], the Wigner function of such maximally supported stabilizer states can be found to be: $$\begin{aligned}
&& {\Psi_{\theta_\beta,\eta_\beta}}_x({\boldsymbol}x_p, {\boldsymbol}x_q) \propto\\
&& d^{-n} \sum_{\substack{{\boldsymbol}\xi_q \in\\\left(\mathbb{Z}/d\mathbb{Z}\right)^{ n}}} \exp \left[\frac{2 \pi i}{d} {\boldsymbol}\xi_q \cdot \left( {\boldsymbol}\eta_\beta - {\boldsymbol}x_p + 2 {\boldsymbol}\theta_\beta {\boldsymbol}x_q \right)\right].\nonumber\end{aligned}$$ Therefore, the Wigner function is a discrete Fourier sum that evaluates to the Kronecker delta $\delta_{{\boldsymbol}x_p,\, {\boldsymbol}\eta_\beta + 2{\boldsymbol}\theta_\beta {\boldsymbol}x_q}$. For ${\boldsymbol}\theta_\beta = 0$ the state is a momentum state at ${\boldsymbol}x_p = {\boldsymbol}\eta_\beta$. Nonzero ${\boldsymbol}\theta_\beta$ rotates that momentum state in phase space in “steps” such that it always lies along the discrete Weyl phase space points $({\boldsymbol}x_p, {\boldsymbol}x_q) \in (\mathbb{Z}/d\mathbb{Z})^{2n}$.
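This delta-line structure is easy to confirm numerically (our own sketch, $n = 1$, Wigner function computed as in Eq. \[eq:weylfunctionpurestatediscrete\]): the Wigner function of the Gaussian state $\Psi(q) \propto \omega^{\theta q^2 + \eta q}$ is uniform on the line $x_p = \eta + 2\theta x_q \bmod d$ and zero elsewhere:

```python
import numpy as np

d = 7
half = (d + 1) // 2          # inverse of 2 mod d
w = np.exp(2j * np.pi / d)

def wigner(psi):
    W = np.zeros((d, d))
    for xp in range(d):
        for xq in range(d):
            s = sum(np.exp(-2j * np.pi * sq * xp / d)
                    * psi[(xq + half * sq) % d] * np.conj(psi[(xq - half * sq) % d])
                    for sq in range(d))
            W[xp, xq] = (s / d).real
    return W

theta, eta = 3, 2            # arbitrary choice of quadratic and linear coefficients
psi = np.array([w ** (theta * q * q + eta * q) for q in range(d)]) / np.sqrt(d)
W = wigner(psi)

# W is the delta line x_p = eta + 2*theta*x_q (mod d), with uniform weight 1/d
for xq in range(d):
    for xp in range(d):
        expected = 1 / d if xp == (eta + 2 * theta * xq) % d else 0.0
        assert np.isclose(W[xp, xq], expected)
print("Wigner function of the Gaussian stabilizer state is a uniform delta line")
```

In particular, the Wigner function is manifestly non-negative, consistent with the theorem above.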
This Gaussian expression only captures stabilizer states that are maximally supported in $q$-space. One may wonder what the stabilizer states that aren’t maximally supported in $q$-space look like in Weyl phase space. Of course, it is possible that some may be maximally supported in $p$-space and so can be captured by the following corollary:
\[cor:stabstates2\] If $\Psi(p) \ne 0$ for all $p$’s then there exists a ${\boldsymbol}\theta_{\beta p} \in \left(\mathbb{Z}/d\mathbb{Z}\right)^{n\times n}$ and an ${\boldsymbol}\eta_{\beta p} \in \left(\mathbb{Z}/d\mathbb{Z}\right)^n$ such that $$\Psi_{\theta_{\beta p},\eta_{\beta p}}({\boldsymbol}p) \propto \exp \left[\frac{2 \pi i}{d} \left( {\boldsymbol}p^T {\boldsymbol}\theta_{\beta p} {\boldsymbol}p + {\boldsymbol}\eta_{\beta p} \cdot {\boldsymbol}p \right) \right].
\label{eq:stabstatemomrep}$$
This can be shown following the same methods employed by Gross [@Gross06] but in the discrete $p$-basis.
Unfortunately, it is easy to show that Corollaries \[cor:stabstates\] and \[cor:stabstates2\] do not provide an expression for all stabilizer states (except in the odd prime $d$ case, as we shall see shortly), as there exist stabilizer states for odd non-prime $d$ that are not maximally supported in $p$- or $q$-space or any discrete rotation between the two. To find an expression that encompasses all stabilizer states, we must turn to the Wigner function of stabilizer states.
An equivalent definition of stabilizer states on $n$ qudits is given by states $\hat {V} \underbrace{{\left|0\right\rangle} \otimes \cdots \otimes {\left|0\right\rangle}}_{n}$ where $\hat {V}$ is a quantum circuit consisting of Clifford gates. Since the Clifford circuits are generated by the $\hat {P}$, $\hat {F}$ and $\hat {C}$ gates, and the Wigner functions $\Psi_x({\boldsymbol}x)$ of stabilizer states propagate under $\hat {V}$ as $\Psi_x({\boldsymbol}{\mathcal M}_{\hat {V}} ({\boldsymbol}x + {\boldsymbol}\alpha_{\hat {V}}/2) + {\boldsymbol}\alpha_{\hat {V}}/2)$, it follows that the Wigner function of a stabilizer state is: $$\delta_{{\boldsymbol}\Phi_0 \cdot {\boldsymbol}{\mathcal M}_{\hat {V}} \cdot {\boldsymbol}x, {\boldsymbol}r_0},
\label{eq:wignerfnofstabstateoddd}$$ where ${\boldsymbol}\Phi_0 = \left(\begin{array}{cc} 0 & 0\\ 0 & \mathbb{I}_n\end{array} \right)$ and ${\boldsymbol}r_0 = ({\boldsymbol}0, {\boldsymbol}0)$. We have therefore proved the next theorem:
\[thm:wigfnofstabstates\] The Wigner function $\Psi_x({\boldsymbol}x)$ of a stabilizer state for any odd $d$ and $n$ qudits is $\delta_{{\boldsymbol}\Phi \cdot {\boldsymbol}x, {\boldsymbol}r}$ for $2n \times 2n$ matrix ${\boldsymbol}\Phi$ and $2n$ vector ${\boldsymbol}r$.
As an aside, Theorem \[thm:wigfnofstabstates\] allows us to develop an all-encompassing Gaussian expression for stabilizer states in the restricted case that $d$ is odd prime. In this case, the following corollary shows that a “mixed” representation, in which each degree of freedom is expressed in either the $p$- or $q$-basis, is always possible:
\[cor:stabstatemixedrep\] For odd prime $d$, if $\Psi$ is a stabilizer state for $n$ qudits, then there always exists a mixed representation in position and momentum such that: $$\Psi_{\theta_{\beta {\boldsymbol}x},\eta_{\beta {\boldsymbol}x}}({\boldsymbol}x) = \frac{1}{\sqrt{d}} \exp \left[\frac{2 \pi i}{d} \left( {\boldsymbol}x^T {\boldsymbol}\theta_{\beta {\boldsymbol}x} {\boldsymbol}x + {\boldsymbol}\eta_{\beta {\boldsymbol}x} \cdot {\boldsymbol}x \right) \right],
\label{eq:stabstatemixedrep}$$ where $x_i$ can be either $p_i$ or $q_i$.
We begin with a one-qudit case. We examine the equation specified by ${\boldsymbol}\Phi \cdot {\boldsymbol}x = {\boldsymbol}r$: $$\alpha q_1 + \beta p_1 = \gamma$$ for $\alpha$, $\beta$, and $\gamma \in \mathbb{Z}/d\mathbb{Z}$. If $\alpha = 0$ then $\Psi(q_1)$ is maximally supported and if $\beta = 0$ then $\Psi(p_1)$ is maximally supported since the equation specifies a line on $\mathbb Z/d \mathbb Z$ in $q_1$ and $p_1$ respectively. If $\alpha \ne 0$ and $\beta \ne 0$ then the equation can be rewritten as $$q_1 + (\beta/\alpha) p_1 = \gamma/\alpha,$$ and it follows that $q_1$ can take any values on $\mathbb Z/d \mathbb Z$ and so $\Psi(q_1)$ is maximally supported. However, it is also possible to reexpress the equation as: $$p_1 + (\alpha/\beta) q_1 = \gamma/\beta.$$ It follows that $p_1$ can also take any values on $\mathbb Z/d \mathbb Z$—$\Psi(p_1)$ is also maximally supported. Therefore, one can always choose either a $p_1$- or $q_1$-basis such that the state is maximally supported and so is representable by a Gaussian function.
We now consider adding another qudit such that the state becomes ${\Psi}_x(p_1,p_2,q_1,q_2)$. There are now two equations specified by ${\boldsymbol}\Phi \cdot {\boldsymbol}x = {\boldsymbol}r$ and it follows that it is always possible to combine the two equations such that $p_1$ and $q_1$ are only in one equation and written in terms of each other (and generally the second degree of freedom): $$\alpha q_1 + \beta p_1 + \gamma q_2 + \delta p_2 = \epsilon,$$ for $\alpha$, $\beta$, $\gamma$, $\delta$ and $\epsilon \in \mathbb{Z}/d\mathbb{Z}$. It will turn out that the $\gamma q_2 + \delta p_2$ term is irrelevant. We can rewrite the above equation as: $$q_1 + (\beta/\alpha) p_1 + (\gamma/\alpha) q_2 + (\delta/\alpha) p_2 = \epsilon/\alpha,$$ if $\alpha \ne 0$. Since there is no other equation specifying $p_1$, this is an equation for a line on $\mathbb Z/ d \mathbb Z$ and so $\Psi$ is maximally supported on $q_1$. Otherwise, rewriting the above equation as: $$p_1 + (\alpha/\beta) q_1 +(\gamma/\beta) q_2 + (\delta/\beta) p_2 = \epsilon/\beta,$$ if $\beta \ne 0$ shows that $\Psi$ is maximally supported on $p_1$. If $\alpha = \beta = 0$ then both $p_1$ and $q_1$ are undetermined and so either representation produces a maximally supported state.
The same procedure can be performed to determine whether $q_2$ or $p_2$ produces a maximally supported state. As can be seen, we are really just repeating the procedure from the one-qudit case, because the other degrees of freedom have no impact on this determination. Expressed in a basis in which every degree of freedom is maximally supported, $\Psi$ is therefore a Gaussian.
Therefore, it follows that every degree of freedom (corresponding to a qudit) is maximally supported in either the $p$- or $q$-basis and so Eq. \[eq:stabstatemixedrep\] always describes stabilizer states for odd prime $d$.[$\blacksquare$]{}
The form of Eq. \[eq:stabstatemixedrep\] is more general than Eq. \[eq:stabstateposrep\] because it does not depend on the support of the state. As we saw in the proof, this representation is generally not unique; for every qudit $i$ that is not a position or momentum state, $x_i$ can be either $p_i$ or $q_i$. However, if it is a position state then $x_i = p_i$ and if it is a momentum state then $x_i = q_i$; position and momentum states must be expressed in their conjugate representation in order to be captured by a Gaussian of the form in Eq. \[eq:stabstatemixedrep\] instead of Kronecker deltas.
The reason this mixed representation doesn’t hold for non-prime odd $d$ is that the coefficients above can be (multiples of) prime factors of $d$ and so no longer produce “lines” in $p_i$ or $q_i$ that cover all of $\mathbb Z/d \mathbb Z$. An alternative proof of this corollary that explores this case further is presented in the Appendix.
An example of the different classes of stabilizer states that are possible for odd prime $d$, in terms of their support, is shown in Fig. \[fig:quditstabstates\_dis7\]. There it can be seen that a stabilizer state either is maximally supported in $p_i$ or $q_i$, appearing as a Kronecker delta function in the conjugate variable, or is maximally supported in both.
![The two classes of stabilizer states possible for odd prime $d=7$ in terms of support: a) maximally supported in $p$ or $q$ and b) maximally supported in $p$ and $q$. The central grids denote the Wigner function ${{\left|\Psi\right\rangle}{\left\langle \Psi\right|}}_x(p,q)$ of a stabilizer state $\Psi$ with $d=7$. The projection of this state onto $p$-space is shown in the upper right ($|\Psi(p)|^2$) and the projection onto $q$-space is shown in the upper left ($|\Psi(q)|^2$).[]{data-label="fig:quditstabstates_dis7"}](quditnonmaxsupp_dis7.pdf)
On the other hand, for odd non-prime $d$, we see in Fig. \[fig:quditstabstates\_dis15\] that another class is possible: stabilizer states that are maximally supported in neither $p_i$ nor $q_i$. In fact, rotating the basis by any of the discrete angles afforded by the grid still does not produce a basis that is maximally supported (as discussed in the Appendix). Notice also that Fig. \[fig:quditstabstates\_dis15\]b shows that it is no longer true that a state that is maximally supported in only $q_i$ or $p_i$ is automatically a Kronecker delta when expressed in terms of the other.
![The four classes of stabilizer states possible for odd non-prime $d=15$ in terms of support: a) & b) maximally supported in $p$ or $q$, c) maximally supported in $p$ and $q$, and d) not maximally supported in $p$ or $q$. The central grids denote the Wigner function ${{\left|\Psi\right\rangle}{\left\langle \Psi\right|}}_x(p,q)$ for a stabilizer state $\Psi$ with $d=15$. The projection of this state onto $p$-space is shown in the upper right ($|\Psi(p)|^2$) and the projection onto $q$-space is shown in the upper left ($|\Psi(q)|^2$).[]{data-label="fig:quditstabstates_dis15"}](quditnonmaxsupp_2.pdf)
In summary, stabilizer states have Wigner function $\delta_{{\boldsymbol}\Phi \cdot {\boldsymbol}x, {\boldsymbol}r}$ and, for odd prime $d$, are Gaussians in a mixed representation that lie on the Weyl phase space points $({\boldsymbol}x_p, {\boldsymbol}x_q)$. Heuristically, they correspond to continuous-case Gaussians stretched infinitely along their major axes. The only reason that they are not always expressible as Gaussians in the mixed representation is that, for odd non-prime $d$, they sometimes “skip” over some of the discrete grid points due to the particular angle along which they lie in phase space.
Wigner functions $\Psi_x({\boldsymbol}x)$ of stabilizer states propagate under $\hat {V}$ as $\Psi_x({\boldsymbol}{\mathcal M}_{\hat {V}} ({\boldsymbol}x + {\boldsymbol}\alpha_{\hat {V}}/2) + {\boldsymbol}\alpha_{\hat {V}}/2)$, and this preserves the form of the state. In other words, Clifford gates take stabilizer states to other stabilizer states, as expected, just like in the continuous case Gaussians go to other Gaussians under harmonic evolution. It is also clear that stabilizer state propagation under Clifford gates can be expressed by a path integral at order $\hbar^0$.
Discrete Phase Space Representation of Universal Quantum Computing {#sec:discreteuniversalcomp}
==================================================================
A similar statement to the one we made in Section \[sec:contcenterchord\]—that any operator can be expressed as an infinite sum of path integral contributions truncated at order $\hbar^1$—can be made in discrete systems. However, there is an important difference in the number of terms making up the sum.
To see this we can follow reasoning that is similar to that employed in the continuous case. Namely, from Eq. \[eq:discretesupofreflections\] we see that any discrete operator can also be expressed as a linear combination of reflections, but unlike the continuous case, this sum has a finite number of terms. Since reflections can be expressed fully by the discrete path integral truncated at order $\hbar^1$, as discussed previously, it follows that any unitary operator in discrete systems can be expressed as a finite sum of contributions from path integrals truncated at order $\hbar^1$. Again, the same statement can be made by considering the chord representation in terms of translations.
Hence, quantum propagation in discrete systems can be fully treated by a *finite* sum of contributions from a path integral approach truncated at order $\hbar^1$.
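As an illustrative numerical check of this finite decomposition (a sketch, assuming the standard clock-and-shift convention $Z{\left|j\right\rangle}=\omega^j{\left|j\right\rangle}$, $X{\left|j\right\rangle}={\left|j+1 \bmod d\right\rangle}$, which may differ from the conventions above by phases), one can verify for $d=5$ that the $d^2$ translation operators $\hat{Z}^a\hat{X}^b$ form an orthogonal operator basis, so that any unitary is recovered from a finite sum of $d^2$ terms:

```python
import numpy as np

d = 5
omega = np.exp(2j * np.pi / d)

# Clock and shift operators: Z|j> = w^j |j>,  X|j> = |j+1 mod d>
Z = np.diag(omega ** np.arange(d))
X = np.roll(np.eye(d), 1, axis=0)  # X[(j+1) mod d, j] = 1

weyl = {(a, b): np.linalg.matrix_power(Z, a) @ np.linalg.matrix_power(X, b)
        for a in range(d) for b in range(d)}

# Orthogonality: tr[(Z^a X^b)^dag (Z^c X^e)] = d * delta_{ac} delta_{be}
for (a, b), W1 in weyl.items():
    for (c, e), W2 in weyl.items():
        ip = np.trace(W1.conj().T @ W2)
        expected = d if (a, b) == (c, e) else 0.0
        assert abs(ip - expected) < 1e-10

# Expand a random unitary in this basis and reconstruct it exactly
rng = np.random.default_rng(0)
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(M)  # a random unitary
coeffs = {k: np.trace(W.conj().T @ U) / d for k, W in weyl.items()}
U_rec = sum(c * weyl[k] for k, c in coeffs.items())
print(np.allclose(U, U_rec))  # True: a finite sum of d**2 = 25 terms suffices
```

The same expansion applied to the operators of the chord representation reproduces the finite sum asserted above.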
To gather some understanding of this statement, we can consider what is necessary to add to our path integral formulation when we complete the Clifford gates with the T-gate, which produces a universal gate set.
The T-gate is generalized to odd $d$-dimensions by $$\hat {T} = \sum_{j \in \mathbb{Z}/d\mathbb{Z}} \omega^{\frac{(j-1)j}{4}} {{\left|j\right\rangle}{\left\langle j\right|}}.$$ This gate can no longer be characterized by an ${\boldsymbol}{\mathcal M}$ with integer entries. In particular, $${\boldsymbol}{\mathcal M}_{\hat {T}} = \left( \begin{array}{cc} 1 & \frac{1}{2} \\ 0 & 1 \end{array} \right),$$ and ${\boldsymbol}\alpha_{\hat {T}} = \left( -\frac{1}{4}, 0 \right)$. This corresponds to $${\boldsymbol}{\mathcal B}_{\hat {T}} = \left( \begin{array}{cc} 0 & 0\\ 0 & -\frac{1}{4}\end{array} \right).$$
Thus the center function is $$T_x(x_p,x_q) = e^{-\frac{2 \pi i}{d} \frac{1}{4} (x_q-x_q^2)},$$ which corresponds to the phase shift Hamiltonian applied for only half the unit of time.
The operator can thus be written: $$\hat{T} = d^{-2} \sum_{\substack{x_p,x_q, \\ \xi_p,\xi_q \in \\ \mathbb{Z}/d\mathbb{Z}}} e^{ -\frac{2 \pi i}{d} \left[ \frac{1}{4} (x_q - x_q^2) - d \left(x_p \xi_q - x_q \xi_p \right) \right] } \hat {Z}^{\xi_p} \hat {X}^{\xi_q}.$$
Though this operator is quadratic, it no longer takes the Weyl center points to themselves. This means that the $\hbar^0$ limit of Eq. \[eq:discretesemiproplowestorder\] is now insufficient to capture all the dynamics because the overlap with any ${\left|q\right\rangle}$ will now involve a linear superposition of partially overlapping propagated manifolds. It must therefore be described by a path integral formulation that is complete to order $\hbar^1$. In particular, $$\label{eq:tgaterefl}
\hat{T} = d^{-1} \sum_{\substack{x_p, x_q \in \\ (\mathbb{Z} / d \mathbb{Z})}} e^{-\frac{2 \pi i}{d} \frac{1}{4} (x_q-x_q^2)} \hat {R}( x_p, x_q ),$$ where $\hat {R}$ should be substituted by its path integral.
Note that this does not imply efficient classical simulation of quantum computation but quite the opposite. Indeed, for $n$ qudits, there are $d^{2n}$ terms in the sum above. While every Weyl phase space point has only a single associated path when acted on by Clifford gates, this is no longer true in any calculation of evolution under the T-gate. Eq. \[eq:tgaterefl\] expresses the T-gate as a sum over phase space operators (the reflections) evaluated on all the phase space points. Thus, it can be interpreted as associating an exponentially large number of paths to every phase space point, instead of the single paths found for Clifford gates. Therefore, any simulation of the T-gate naively necessitates adding up an exponentially large sum over paths and so is comparably inefficient.
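The non-Clifford character of $\hat{T}$ that underlies this proliferation of paths can be checked directly. The sketch below (for $d=7$) reads the quarter power literally as $\exp[2\pi i (j-1)j/(4d)]$ — a convention choice; other treatments use the inverse of $4$ modulo $d$ — and verifies that $\hat{T}$ is unitary yet $\hat{T}\hat{X}\hat{T}^{\dagger}$ is not proportional to any single translation operator, unlike a Clifford conjugation:

```python
import numpy as np

d = 7
omega = np.exp(2j * np.pi / d)
Z = np.diag(omega ** np.arange(d))
X = np.roll(np.eye(d), 1, axis=0)          # X|j> = |j+1 mod d>

# Literal reading of omega^{(j-1)j/4}: exp(2*pi*i*(j-1)*j/(4d)) (an assumed convention)
jv = np.arange(d)
T = np.diag(np.exp(1j * 2 * np.pi * (jv - 1) * jv / (4 * d)))

assert np.allclose(T @ T.conj().T, np.eye(d))   # unitary
assert np.allclose(T @ Z, Z @ T)                # diagonal, so it commutes with Z

# Largest normalized overlap of T X T^dag with any translation Z^a X^b.
# A Clifford conjugation would give overlap 1 for exactly one (a, b).
V = T @ X @ T.conj().T
overlap = max(
    abs(np.trace((np.linalg.matrix_power(Z, a)
                  @ np.linalg.matrix_power(X, b)).conj().T @ V)) / d
    for a in range(d) for b in range(d)
)
print(overlap < 1 - 1e-6)  # True: T X T^dag is spread over several translations
```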
Conclusion {#sec:conc}
==========
The treatment presented here formalizes the relationship between stabilizer states in the discrete case and Gaussians in the continuous case, which has often been pointed out [@Gross06]. Namely, only Gaussians that lie along Weyl phase space points directly correspond to Gaussians in the continuous world in terms of preserving their form under a harmonic Hamiltonian, an evolution that is fully describable by truncating the path integral at order $\hbar^0$. Furthermore, we showed that the Clifford group gates, generated by the Hadamard, phase shift and controlled-not gates, can be fully described by a truncation of their semiclassical propagator at lowest order. We found that this was because their Hamiltonians are harmonic and take Weyl phase space points to themselves. This proves the Gottesman-Knill theorem. The T-gate, needed to complete a universal set with the Hadamard, was shown not to satisfy these properties, and so requires a path integral treatment that is complete up to $\hbar^1$. The latter treatment includes a sum of terms for which the number of terms scales exponentially with the number of qudits.
We note that our observations pertaining to classical propagation in continuous systems have long been well known. In the continuous case, the Wigner function of a pure quantum state is non-negative if and only if the state is a Gaussian [@Hudson74], and it has also long been known that quantum propagation from one Gaussian state to another only requires propagation up to order $\hbar^0$ [@Heller75]. Indeed, it has been shown that this is a continuous version of “stabilizer state propagation” in finite systems [@Barnes04], and is therefore, in principle, useful for quantum error correction and cluster state quantum computation [@Nielsen06; @Braunstein12]. It is also well known in the discrete case that quadratic Hamiltonians can act classically and be represented by symplectic transformations in the study of quantum cat maps [@Hannay80; @Rivas99; @Rivas00] and linear transformations between propagated Wigner functions [@Bianucci02]. Interestingly though, this latter work appears to have predated the discovery that stabilizer states have positive-definite Wigner functions [@Gross06] and therefore, as far as we know, has not been directly related to stabilizer states and the $\hbar^0$ limit of their path integral formulation, which is a relatively recent topic of particular interest to the quantum information community and those familiar with Gottesman-Knill. Otherwise, this claim has been pointed out in terms of positivity and related concepts in past work [@Cormick06; @Mari12].
We also note that our exploration of continuous systems is not meant to explore the highly related topic of continuous-variable quantum information. Many topics therein apply to our discussion here, such as the continuous stabilizer state propagation we mentioned above. However, our intention in introducing the continuous infinite-dimensional case was not to address these topics but instead to relate the established continuous semiclassical formalism to the discrete case, and thereby bridge the notions of phase space and dynamics between the two worlds.
There is an interesting observation to be made of the weights of the reflections that make up the complete path integral formulation of a unitary operator. Namely, as is clear in Eqs. \[eq:contsupofreflections\] and \[eq:discretesupofreflections\], the coefficients consist of the exponentiated center generating function multiplied by $\frac{i}{\hbar}$. This is very similar to the form of the vVMG path integral in Eqs. \[eq:contcenterrepvVMG\] and \[eq:discretecenterrepvVMG\]. However, in Eqs. \[eq:contsupofreflections\] and \[eq:discretesupofreflections\], reflections serve as the prefactors measuring the reflection spectral overlap of a propagated state with its evolute and the center generating actions provide the quantal phase. Thus, this formulation can be interpreted as an alternative path integral formulation of the vVMG, one consisting of reflections as the underlying classical trajectory only, instead of the more tailored trajectories that result from applying the method of steepest descents directly on an operator.
The fact that any unitary operator in the discrete case can be expressed as a sum consisting of a finite number of order $\hbar^1$ path integral contributions, has the added interesting implication that uniformization—higher order $\hbar$ corrections to the “primitive” semiclassical forms such as Eq. \[eq:contcenterrepvVMG\]—isn’t really necessary in discrete systems. Uniformization is characterized by the proper treatment of coalescing saddle points and has long been a subject of interest in continuous systems where “anharmonicity” bedevils computationally efficient implementation. It seems that this problem isn’t an issue in the discrete case since a fully complete sum with a finite number of terms, naively numbering $d^2$ for one qudit, exists.
As a last point, there is perhaps an alternative way to interpret the results presented here, one in terms of “resources”. Much like “magic” (or contextuality) and quantum discord can be framed as a resource necessary to perform quantum operations that have more power than classical ones, it is possible to frame the order in $\hbar$ that is necessary in the underlying path integral describing an operation as a resource necessary for quantumness. In this vein, it can be said that Clifford gate operations on stabilizer states are operations that only require $\hbar^0$ resources while supplemental gates that push the operator space into universal quantum computing require $\hbar^1$ resources. The dividing line between these two regimes, the classical and quantum world, is discrete, unambiguous and well-defined.
Acknowledgments
===============
The authors thank Prof. Alfredo Ozorio de Almeida for very fruitful discussions about the center-chord representation in discrete systems and Byron Drury for his help proof-reading and bringing [@Penney16] to our attention. This work was supported by AFOSR award no. FA9550-12-1-0046.
Appendix {#sec:appendix}
========
Gross proved that for odd prime $d$ [@Gross07]:
Let $\Psi$ be a state vector with positive Wigner function for odd prime $d$. If $\Psi$ is supported on two points, then it has maximal support.
With this lemma in mind, we can offer an alternative proof of Corollary \[cor:stabstatemixedrep\]:
For odd prime $d$, if $\Psi$ is a stabilizer state then there always exists a mixed representation in position and momentum such that: $$\Psi_{\theta_{\beta {\boldsymbol}x},\eta_{\beta {\boldsymbol}x}}({\boldsymbol}x) = \frac{1}{\sqrt{d}} \exp \left[\frac{2 \pi i}{d} \left( {\boldsymbol}x^T {\boldsymbol}\theta_{\beta {\boldsymbol}x} {\boldsymbol}x + {\boldsymbol}\eta_{\beta {\boldsymbol}x} \cdot {\boldsymbol}x \right) \right],$$ where $x_i$ can be either $p_i$ or $q_i$.
We will show that for odd prime $d$, every degree of freedom can only be fully supported or a Kronecker delta, for all other degrees of freedom fixed; WLOG we will consider a two-dimensional stabilizer state $\Psi(q_1,q_2)$ and show that if $\exists\, q'_1$ such that $\Psi(q'_1, q_2) \ne 0$ $\forall q_2$ then $\Psi(q_1,q_2)\ne 0$ $\forall\, q_1, q_2$ and vice-versa (if $\exists\, q'_1$ s.t. $\Psi(q'_1, q_2)$ is a delta function then $\Psi(q_1,q_2)$ is a delta function in $q_2$ $\forall q_1$). Therefore, if a degree of freedom is maximally supported in one degree of freedom for all others fixed, then it is maximally supported for all values of the other degrees of freedom. On the other hand, if it is a delta function in one degree of freedom for all others fixed, then it is a delta function for all values of the other degrees of freedom.
Assume that for $q_1, q_2 \in \{0, \ldots, d-1\}$, $\exists\, q'_1, q''_1$ such that $\Psi(q'_1,q_2) = 0$ for some $q_2$ and $\Psi(q''_1, q_2) \ne 0$ $\forall q_2$. We proceed to prove by contradiction.
Hence $\Psi(q''_1, q_2) \equiv \Psi_{q''_1}(q_2) \propto \exp\left[ \frac{2 \pi i}{d} \left( \theta_{q''_1} q^2_2 + \eta_{q''_1} q_2 \right) \right]$ by Corollary \[cor:stabstates\] and $\Psi(q'_1,q_2) \equiv \Psi_{q'_1}(q_2) \propto \delta_{q_2, q(q'_1)}$ for some $q(q'_1)\in\mathbb{Z}/d\mathbb{Z}$ by [@Gross06].
We can rotate in $p_2$-$q_2$ space to form a new basis $q^*_2$ in $d$ discrete angles (since $\theta_{q''_1} \in \mathbb{Z}/d\mathbb{Z}$) such that $\Psi_{q'_1}(q^*_2) \ne 0$ (since a delta function fails to be maximally supported only at the one angle perpendicular to it). Since there exist $d-1$ values of $q_1$ other than $q'_1$, it follows that there exists at least one such angle such that $\Psi_{q_1}(q^*_2) \ne 0$ $\forall q_1, q^*_2 \in \{0,\ldots,d-1\}$. We define $q^*_2$ as the basis that is rotated by this angle with respect to $q_2$.
By Corollary \[cor:stabstates\], this means that $$\Psi^*(q_1,q^*_2) \propto \exp\left[\frac{2 \pi i}{d}\left( \theta'_{11} q^2_1 + \theta'_{22} {q^*_2}^2 + 2 \theta'_{12} q_1 q^*_2 + \eta'_1 q_1 + \eta'_2 q^*_2\right)\right],$$ where by $\Psi^*$ we mean $\Psi$ expressed in the new basis $q^*_2$ in its second degree of freedom. Hence, $$\label{eq:rotatedgaussianbasis}
\Psi^*_{q_1}(q^*_2) \propto \exp \left[\frac{2 \pi i}{d} \left( \theta'_{q_1} {q^*_2}^2 + \eta'_{q_1} q^*_2 \right) \right] \exp \left[ \frac{2 \pi i}{d} \theta'_{12} q_1 q^*_2 \right].$$
Acting on this last equation to rotate back to $q_2$, we must produce $\Psi_{q'_1}(q_2) \propto \delta_{q_2,q(q'_1)}$. But Eq. \[eq:rotatedgaussianbasis\], which has the same Gaussian form for every $q_1$, then implies that $\Psi_{q''_1}(q_2)$ must also be proportional to a Kronecker delta. This is a contradiction.
Therefore, if a degree of freedom is maximally supported for all others fixed, then it is maximally supported for all values of the other degrees of freedom and vice-versa. In the latter case, a position state in the $i$th degree of freedom can be represented as a Gaussian by using the $p$-basis where it becomes a plane wave ($\theta_i = 0$). In other words, one can always choose $x_i$ to be $p_i$ or $q_i$ such that Eq. \[eq:stabstatemixedrep\] holds for odd prime $d$. [$\blacksquare$]{}
Finally, the reason this result does not hold for odd non-prime $d$ is that discrete Wigner space has $d + 1$ unique angles minus those associated with the prime factors of $d$. For non-prime $d$, there is more than one such prime factor, and so there are cases in which one cannot “rotate” away all non-maximally supported states.
[^1]: See Ehrenfest’s Theorem, e.g. in [@gottfried13]
[^2]: See Eq. $5.18$ in [@Rivas99] for a definition of the phase.
---
abstract: 'We prove that the weakly singular, non-linear convolution integral equation $\int_{\mathbb{R}^n}|x-y|^{-\lambda}f(y)dy=f(x)^{p-1}$, where $0<\lambda<n$, and $p=2n/(2n-\lambda)$ has at least two non-equivalent solutions. This answers a problem of Elliott Lieb. We also prove certain orthogonality relations among linear differential forms with constant coefficients related to the corresponding type of convolution operators. Finally, we discuss the regularity of the solutions of such non-linear integral equations over not necessarily bounded open subsets of $\mathbb{R}^n$.'
author:
- Ronen Peretz
title: On an integral equation of Lieb
---
Introduction of the results
===========================
We divide the results in this paper into three parts. In [**the first part**]{} we note that a certain non-linear integral equation introduced by Elliott Lieb has at least two essentially different solutions. One of the solutions has an isolated singular point where the function tends to infinity.
[**The second part**]{} presents a multitude of integral identities, each connecting two solutions of Lieb’s equation. These integral identities contain the images of the two solutions under linear differential forms with constant coefficients. They point to two properties connecting any two such solutions: one is an orthogonality property and the other is a certain commutativity.
Finally, in [**the third part**]{} we present regularity results for the solutions of the Lieb integral equation. The bottom line is that, except for isolated singularities where the solutions tend to infinity, they are in fact smooth functions elsewhere. One can compute their smoothness degree effectively.
[**(I)**]{} In [@Lieb] Elliott H. Lieb computes the sharp constants for certain parameter values in the Hardy-Littlewood-Sobolev inequality. The paper also deals with related inequalities: the Sobolev inequality, doubly weighted Hardy-Littlewood-Sobolev inequality and the weighted Young inequality (as A. Sokal called it). Theorem 3.1 on page 359 computes the unique maximizing function and the sharp constant for certain values of the parameters of the first inequality. The maximizing functions should satisfy the following integral equation $$\label{eq1}
\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}f(y)dy=f(x)^{p-1},\,\,\,0<\lambda<n,\,\,\,p=\frac{2n}{2n-\lambda}.$$ However, as claimed on page 361 of [@Lieb]: “We do not know that (3.9)” (the above equation (\[eq1\])) “has an (essentially) unique solution-even if we restrict to the SSD category-and we shall offer no proof of this kind of uniqueness. [*This is an open problem!*]{}”. Indeed, there is no uniqueness, for we have the following:
The following function is a solution of equation (\[eq1\]): $$\label{eq2}
f(x)=C(n,\lambda)|x|^{-(n-\lambda/2)},$$ where: $$\label{eq3}
C(n,\lambda)=$$ $$=\left\{\pi^{n/2}\left(\Gamma\left(\frac{n}{2}-\frac{\lambda}{2}\right)\Gamma\left(\frac{\lambda}{4}\right)^2\right)\left/
\left(\Gamma\left(\frac{\lambda}{2}\right)\Gamma\left(\frac{n}{2}-\frac{\lambda}{4}\right)^2\right)\right.\right\}^{-(2n-\lambda)/(2(n-\lambda))}.$$
Lieb proved that his equation (\[eq1\]) has the following solution (which is unique among the maximizing functions of the inequality he was considering): $$\label{eq4}
f_L(x)=L(n,\lambda)(1+|x|^2)^{-n/p}=L(n,\lambda)(1+|x|^2)^{-(n-\lambda/2)}.$$ Here $L(n,\lambda)$ is a constant depending on the dimension and on the parameter $\lambda$. We note that the solution in Theorem 1.1 is singular at the origin and is not a reflection of Lieb’s solution. It is certainly not a conformal image of it ($f_L$ is non-singular and bounded). Thus Lieb’s equation (\[eq1\]) exhibits at least two non-equivalent solutions.
Based on the coming results we can deduce many integral identities. Here is an example:
$$L(n,\lambda)^{\lambda/(2n-\lambda)}C(n,\lambda)\int_{\mathbb{R}^n}|x|^{-(n-\lambda/2)}(1+|x|^2)^{-\lambda/2}dx=$$ $$\label{eq5}$$ $$=L(n,\lambda)C(n,\lambda)^{\lambda/(2n-\lambda)}\int_{\mathbb{R}^n}|x|^{-\lambda/2}(1+|x|^2)^{-(n-\lambda/2)}dx.$$
We note the symmetric relations between the two solutions within the last identity. It is a kind of commutativity that takes the power $p-1$ into account.
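As a numerical sanity check (not a proof), take $n=1$, $\lambda=1/2$. Dividing the identity above by $L\,C$ leaves $L^{p-2} I_1 = C^{p-2} I_2$, where $I_1$, $I_2$ denote the two integrals; $L^{p-2}$ can be read off by evaluating equation (\[eq1\]) for $f_L$ at $x=0$, and $C^{p-2}$ equals the braced constant in (\[eq3\]). A quadrature sketch:

```python
import math
from scipy.integrate import quad

n, lam = 1, 0.5   # here p - 2 = -2(n - lam)/(2n - lam) = -2/3

def even_integral(f):
    # integral over R of an even function, split at the |x| = 0 singularity
    return 2 * (quad(f, 0, 1, limit=200)[0] + quad(f, 1, math.inf, limit=200)[0])

# L^{p-2}: evaluate eq. (1) for f_L at x = 0 and divide by L
Lp2 = even_integral(lambda y: y**(-lam) * (1 + y*y)**(-(n - lam/2)))

# C^{p-2} equals the braced constant in the formula for C(n, lam)
Cp2 = (math.pi**(n/2) * math.gamma(n/2 - lam/2) * math.gamma(lam/4)**2
       / (math.gamma(lam/2) * math.gamma(n/2 - lam/4)**2))

I1 = even_integral(lambda x: x**(-(n - lam/2)) * (1 + x*x)**(-lam/2))
I2 = even_integral(lambda x: x**(-lam/2) * (1 + x*x)**(-(n - lam/2)))

lhs, rhs = Lp2 * I1, Cp2 * I2
print(abs(lhs - rhs) / rhs)  # small relative error
```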
[**(II)**]{} The identity in the last corollary is the zero’th integral identity out of infinitely many possible integral identities that connect two solutions of the Lieb integral equation (\[eq1\]).
For $\alpha=(\alpha_1,\ldots,\alpha_n)\in (\mathbb{Z}^+\cup\{0\})^n$ we denote: $$D_{\alpha}^{(x)}(h(x))=\frac{\partial^{|\alpha|}h(x)}{\partial x_1^{\alpha_1}\ldots\partial x_n^{\alpha_n}},\,\,\Lambda^{(x)}=\sum_{\alpha} a_{\alpha}
D_{\alpha}^{(x)}\,\,\,{\rm where}\,\,a_{\alpha}\in\mathbb{R}.$$
Here are some integral relations between solutions of equation (\[eq1\]). Obvious generalizations hold true for other similar kernels. These include some orthogonality relations and some commutativity relations:
Let $f(x)$ and $g(x)$ be solutions of equation (\[eq1\]). Then assuming the convergence of the integrals below, $\forall\,\alpha,\beta\in(\mathbb{Z}^+\cup\{0\})^n$ we have: $$\label{eq6}
\int_{\mathbb{R}^n}D_{\beta}^{(x)}(g(x))D_{\alpha}^{(x)}(f(x)^{p-1})dx=\int_{\mathbb{R}^n}D_{\alpha}^{(x)}(f(x))D_{\beta}^{(x)}(g(x)^{p-1})dx,$$ $$(-1)^{|\beta|}\int_{\mathbb{R}^n}D_{\beta}^{(x)}(f(x))D_{\alpha}^{(x)}(f(x)^{p-1})dx=$$ $$\label{eq7}$$ $$=(-1)^{|\alpha|}\int_{\mathbb{R}^n}D_{\alpha}^{(x)}(f(x))D_{\beta}^{(x)}(f(x)^{p-1})dx,$$ $$(-1)^{|\alpha|+|\beta|}=-1$$ $$\label{eq8}
\Downarrow$$ $$\int_{\mathbb{R}^n}D_{\beta}^{(x)}(f(x))D_{\alpha}^{(x)}(f(x)^{p-1})dx=\int_{\mathbb{R}^n}D_{\alpha}^{(x)}(f(x))D_{\beta}^{(x)}(f(x)^{p-1})dx=0,$$
So it is natural to make the following:
$$E=\{\sum_{\alpha\in I}a_{\alpha}D_{\alpha}^{(x)}(\cdot)\,|\,\forall\,\alpha\in I,\,(-1)^{|\alpha|}=1,\,a_{\alpha}\in\mathbb{R}\},$$ $$O=\{\sum_{\beta\in J}b_{\beta}D_{\beta}^{(x)}(\cdot)\,|\,\forall\,\beta\in J,\,(-1)^{|\beta|}=-1,\,b_{\beta}\in\mathbb{R}\}.$$
Let $f(x)$ and $g(x)$ be solutions of equation (\[eq1\]). Suppose that $\Lambda=\Lambda_e+\Lambda_o$, $\Omega=\Omega_e+\Omega_o$, where $\Lambda_e,\Omega_e\in E$, and $\Lambda_o,\Omega_o\in O$. Then:\
(a) Assuming convergence of the integrals below we have: $$\label{eq9}$$ $$\int_{\mathbb{R}^n}\Lambda(f)\Omega(g^{p-1})dx=\int_{\mathbb{R}^n}\Lambda(f^{p-1})\Omega(g)dx,$$ $$\int_{\mathbb{R}^n}\Lambda_{e}(f(x))\Lambda_{o}(f(x)^{p-1})dx=\int_{\mathbb{R}^n}\Lambda_{e}(f(x)^{p-1})\Lambda_{o}(f(x))dx=0.$$ (b) Assuming the convergence of the integrals below we have: $$\label{eq10}$$ $$\int_{\mathbb{R}^n}\Lambda(f)\Omega(f^{p-1})dx=\int_{\mathbb{R}^n}\Lambda(f^{p-1})\Omega(f)dx=$$ $$\int_{\mathbb{R}^n}\Lambda_e(f)\Omega_e(f^{p-1})dx+\int_{\mathbb{R}^n}\Lambda_o(f)\Omega_o(f^{p-1})dx=$$ $$=\int_{\mathbb{R}^n}\Lambda_e(f^{p-1})\Omega_e(f)dx+\int_{\mathbb{R}^n}\Lambda_o(f^{p-1})\Omega_o(f)dx.$$
[**(III)**]{} We turn our attention to the smoothness of the solutions of Lieb’s equation (\[eq1\]). We recall some notations and definitions from [@Vainikko]. Let $G\subseteq\mathbb{R}^n$ be an open and bounded set. For a $\lambda\in\mathbb{R}$, G. Vainikko introduces a weight function: $$w_{\lambda}(x)=\left\{\begin{array}{lll} 1 & {\rm for} & \lambda<0 \\ (1+|\log\rho(x)|)^{-1} & {\rm for} & \lambda=0 \\
\rho(x)^{\lambda} & {\rm for} & \lambda>0 \end{array}\right.,\,\,\,x\in G,$$ where $\rho(x)=\inf_{y\in\partial G} |x-y|$ is the distance from $x$ to the boundary $\partial G$ of $G$. Let $m\in\mathbb{Z}^+\cup\{0\}$, $\nu\in\mathbb{R}$ satisfy $\nu<n$. We define the space $C^{m,\nu}(G)$ as the collection of all $m$ times continuously differentiable functions $u:\,G\rightarrow\mathbb{R}$ (or $\mathbb{C}$) such that: $$||u||_{m,\nu}=\sum_{|\alpha|\le m}\sup_{x\in G}(w_{|\alpha|-(n-\nu)}(x)|D_{\alpha} u(x)|)<\infty.$$ So $C^{m,\nu}(G)$ contains all the $m$ times continuously differentiable functions $u$ on $G$ whose derivatives near the boundary $\partial G$ can be estimated as follows: $$|D_{\alpha}u(x)|\le\,{\rm Const.}\left\{\begin{array}{lll} 1 & {\rm for} & |\alpha|<n-\nu \\ 1+|\log\rho(x)| & {\rm for} & |\alpha|=n-\nu \\
\rho(x)^{n-\nu-|\alpha|} & {\rm for} & |\alpha|>n-\nu \end{array}\right.,\,\,\,x\in G,\,\,|\alpha|\le m.$$
1\) The function $||\cdot ||_{m,\nu}$ on the space $C^{m,\nu}(G)$ is a norm.\
2) The space $(C^{m,\nu}(G),||\cdot ||_{m,\nu})$ is complete, i.e. it is a Banach space.
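As a concrete illustration of the weighted norm (a hypothetical model, not taken from [@Vainikko]): take $G=(0,1)\subset\mathbb{R}$, $\nu=1/2$, and $u(x)=\sqrt{x(1-x)}$, whose derivative blows up like $\rho(x)^{-1/2}$ at the boundary; since $|\alpha|=1>n-\nu=1/2$, the weight $w_{|\alpha|-(n-\nu)}=\rho^{1/2}$ tames the blow-up and the suprema defining $||u||_{1,\nu}$ stay bounded:

```python
import numpy as np

# Model setting (assumed for illustration): G = (0,1) in R^1, nu = 1/2,
# u(x) = sqrt(x(1-x)), so n - nu = 1/2.
nu = 0.5
rho = lambda x: np.minimum(x, 1 - x)          # distance to the boundary of G

def w(lam, x):
    # Vainikko's weight function
    if lam < 0:
        return np.ones_like(x)
    if lam == 0:
        return 1.0 / (1.0 + np.abs(np.log(rho(x))))
    return rho(x)**lam

u  = lambda x: np.sqrt(x * (1 - x))
du = lambda x: (1 - 2*x) / (2*np.sqrt(x * (1 - x)))

# Weighted sups on grids crowding the boundary: they stay bounded,
# so u lies in C^{1,nu}(G) although u' blows up at the boundary.
for eps in (1e-3, 1e-6, 1e-9):
    x = np.linspace(eps, 1 - eps, 100001)
    s0 = np.max(w(0 - (1 - nu), x) * np.abs(u(x)))    # |alpha| = 0 term
    s1 = np.max(w(1 - (1 - nu), x) * np.abs(du(x)))   # |alpha| = 1 term
    print(eps, s0, s1)
```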
Consider the following integral equation: $$\label{eq11}
u(x)=\int_G K(x,y,u(y))dy+f(x),\,\,\,x\in G.$$ We assume that the kernel $K(x,y,u)$ is $m$ times ($m\ge 1$) continuously differentiable with respect to $x,y,u$ for $x\in G$, $y\in G$, $x\ne y$, $u\in\mathbb{R}$. We also assume that there is a real number $\nu<n$, such that, $\forall\,k\in\mathbb{Z}^+\cup\{0\}$ and $\alpha,\beta\in(\mathbb{Z}^+\cup\{0\})^n$ for which $k+|\alpha|+|\beta|\le m$, the following inequalities hold: $$\label{eq12}$$ $$\left|D_{\alpha}^x D_{\beta}^{x+y}\frac{\partial^k}{\partial u^k}K(x,y,u)\right|\le b_1(u)\left\{\begin{array}{lll} 1 & {\rm for} & \nu+|\alpha |<0 \\
1+|\log |x-y|| & {\rm for} & \nu+|\alpha |=0 \\ |x-y|^{-\nu-|\alpha |} & {\rm for} & \nu+|\alpha |>0 \end{array}\right.,$$ $$\label{eq13}$$ $$|D_{\alpha}^x D_{\beta}^{x+y}\frac{\partial^k}{\partial u^k}K(x,y,u_1)-D_{\alpha}^x D_{\beta}^{x+y}\frac{\partial^k}{\partial u^k}K(x,y,u_2)|\le$$ $$\le b_2(u_1,u_2)|u_1-u_2|\left\{\begin{array}{lll} 1 & {\rm for} & \nu+|\alpha |<0 \\
1+|\log |x-y|| & {\rm for} & \nu+|\alpha |=0 \\ |x-y|^{-\nu-|\alpha |} & {\rm for} & \nu+|\alpha |>0 \end{array}\right..$$ The functions $b_1:\,\mathbb{R}\rightarrow\mathbb{R}^+$ and $b_2:\,\mathbb{R}^2\rightarrow\mathbb{R}^+$ are assumed to be bounded on every bounded region of $\mathbb{R}$ and $\mathbb{R}^2$ respectively. The notation of G. Vainikko reads as follows: $$\label{eq14}$$ $$D_{\alpha}^x D_{\beta}^{x+y}=\left(\frac{\partial}{\partial x_1}\right)^{\alpha_1}\ldots\left(\frac{\partial}{\partial x_n}\right)^{\alpha_n}
\left(\frac{\partial}{\partial x_1}+\frac{\partial}{\partial y_1}\right)^{\beta_1}\ldots \left(\frac{\partial}{\partial x_n}+\frac{\partial}{\partial y_n}
\right)^{\beta_n}.$$ We can now conveniently quote the result we need from [@Vainikko]:\
\
[**Theorem 8.1.([@Vainikko])**]{} [*Let $f\in C^{m,\nu}(G)$ (in (\[eq11\])) and let the kernel $K(x,y,u)$ satisfy inequalities (\[eq12\]) and (\[eq13\]). If the integral equation (\[eq11\]) has a solution $u\in L^{\infty}(G)$, then $u\in C^{m,\nu}(G)$.*]{}\
\
1\) There is a companion result (Theorem 8.2) in [@Vainikko], but we will not use it here.\
2) We note that if we restrict Lieb’s integral equation (\[eq1\]) to a bounded open $G\subseteq\mathbb{R}^n$, then it satisfies the assumptions of Theorem 8.1 in [@Vainikko]. In this case $f(x)\equiv 0\in C^{\infty}(G)$ and $K(x,y,u)=|x-y|^{-\lambda}$ is independent of $u$. Also we note that: $$\left(\frac{\partial}{\partial x_i}+\frac{\partial}{\partial y_i}\right)|x-y|^{-\lambda}\equiv 0,$$ and so in inequalities (\[eq12\]) and (\[eq13\]) the only interesting values of $(k,|\alpha|,|\beta|)$ are $k=0$, $|\beta|=0$, $|\alpha|\le m$ and within the inequalities we start with $\nu+|\alpha|=\lambda$. Thus we can make effective calculations of the smoothness degree of a bounded solution of the restriction to $G$ of Lieb’s integral equation (\[eq1\]).\
3) We note that any solution $f(x)$ of the Lieb integral equation (\[eq1\]) must decay to zero at infinity, in order for the integral $$\int_{\mathbb{R}^n}|x-y|^{-\lambda}f(y)dy,\,\,\,\,0<\lambda<n,$$ to converge at infinity. Thus any solution is bounded outside a ball $B_n(R)=\{x\in\mathbb{R}^n\,|\,|x|<R\}$ for a large enough radius $R$. Thus such a solution can have singularities only within the ball $B_n(R)$, and the solution tends to infinity at each such singularity; otherwise, by Theorem 8.1 in [@Vainikko], it would be smooth (of some degree) at such a point.
We thus have the following:
Let $f(x)$ be a solution of Lieb’s integral equation (\[eq1\]). Then $\lim_{|x|\rightarrow\infty}f(x)=0$, and there is a ball $B_n(R)$ such that $f(x)$ is bounded for $x\not\in B_n(R)$, $f(x)$ can have a finite set of singularities inside $B_n(R)$, and $f(x)$ tends to infinity when $x\rightarrow s$ for each singular point $s$ of $f(x)$.
$\qed$
Lieb’s integral equation has at least two non-equivalent symmetric decreasing solutions, Theorem 1.1
====================================================================================================
[**A proof.**]{} We use the following formula for the Fourier transform, [@SteinWeiss]: $$\widehat{|y|^{-\nu}}=\int_{\mathbb{R}^n}|y|^{-\nu}\exp(-2\pi ix\cdot y)dy=\left\{\pi^{\nu-(n/2)}\Gamma\left(\frac{n}{2}-\frac{\nu}{2}\right)\left/
\Gamma\left(\frac{\nu}{2}\right)\right.\right\}|x|^{\nu-n},$$ where $0<\nu<n$. By the convolution theorem, the Fourier transform of our integral is the product of the two Fourier transforms: $$\widehat{\left(\int_{\mathbb{R}^n}|t-y|^{-\lambda}f(y)dy\right)(x)}=\widehat{\left(\int_{\mathbb{R}^n}|t-y|^{-\lambda}\left(C(n,\lambda)|y|^{-(n-\lambda/2)}\right)dy\right)(x)}=$$ $$=C(n,\lambda)\left(\int_{\mathbb{R}^n}|y|^{-\lambda}\exp\left(-2\pi ix\cdot y\right)dy\right)\left(\int_{\mathbb{R}^n}|y|^{-(n-\lambda/2)}
\exp\left(-2\pi ix\cdot y\right)dy\right)=$$ $$=C(n,\lambda)\left\{\pi^{\lambda-n/2}\Gamma\left(\frac{n}{2}-\frac{\lambda}{2}\right)\left/\Gamma\left(\frac{\lambda}{2}\right)\right.\right\}
|x|^{\lambda-n}\times$$ $$\times\left\{\pi^{(n-\lambda/2)-n/2}\Gamma\left(\frac{\lambda}{4}\right)\left/\Gamma\left(\frac{n}{2}-\frac{\lambda}{4}\right)\right.\right\}
|x|^{(n-\lambda/2)-n}=$$ $$=C(n,\lambda)\left\{\pi^{n/2}\left(\Gamma\left(\frac{n}{2}-\frac{\lambda}{2}\right)\Gamma\left(\frac{\lambda}{4}\right)^2\right)\left/
\left(\Gamma\left(\frac{\lambda}{2}\right)\Gamma\left(\frac{n}{2}-\frac{\lambda}{4}\right)^2\right)\right.\right\}\widehat{|y|^{-\lambda/2}}(x)=$$ $$=C(n,\lambda)^{\lambda/(2n-\lambda)}\left(\widehat{|y|^{-(n-\lambda/2)\lambda/(2n-\lambda)}}\right)(x)=$$ $$=\left(\widehat{\left(C(n,\lambda)|y|^{-(n-\lambda/2)}\right)^{\lambda/(2n-\lambda)}}\right)(x)=$$ $$=\left(\widehat{f\left(y\right)^{\lambda/(2n-\lambda)}}\right)(x)=\left(\widehat{f\left(y\right)^{p-1}}\right)(x).$$ Taking inverse Fourier transforms yields: $$\int_{\mathbb{R}^n}|x-y|^{-\lambda}f(y)dy=f(x)^{p-1}.$$ $\qed $\
\
Integral relations between two solutions of equation (\[eq1\]), orthogonality and commutativity, Corollary 1.3, Theorem 1.5 and Theorem 1.7
===========================================================================================================================================
In this section we are interested in the form of integral relations between two solutions of the Lieb integral equation. Thus we will not bother with convergence issues and just formally expand the formulas in Corollary 1.3, in Theorem 1.5 and in Theorem 1.7. We note that the identity in Corollary 1.3 follows by Theorem 1.1 and by the case $\alpha=\beta=\overline{0}$ in equation (\[eq6\]) of Theorem 1.5. Also Theorem 1.7 follows by Theorem 1.5. Hence we need to prove only Theorem 1.5:\
We start with Lieb’s integral equation, (\[eq1\]) and perform on it a partial differentiation with respect to $x_j$. We formally differentiate under the integral sign. The result is: $$\int_{\mathbb{R}^n}\frac{\partial}{\partial x_j}\left(\left|x-y\right|^{-\lambda}\right)f(y)dy=\frac{\partial f(x)^{p-1}}{\partial x_j}.$$ We note that: $$\frac{\partial}{\partial x_j}\left(\left|x-y\right|^{-\lambda}\right)=-\frac{\partial}{\partial y_j}\left(\left|x-y\right|^{-\lambda}\right).$$ Hence: $$-\int_{\mathbb{R}^n}\frac{\partial}{\partial y_j}\left(\left|x-y\right|^{-\lambda}\right)f(y)dy=\frac{\partial f(x)^{p-1}}{\partial x_j}.$$ By integration by parts we deduce the following: $$\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}\frac{\partial f(y)}{\partial y_j}dy=\frac{\partial f(x)^{p-1}}{\partial x_j}.$$ We iterate this argument and obtain: $$\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}\frac{\partial^2 f(y)}{\partial y_k\partial y_j}dy=\frac{\partial^2 f(x)^{p-1}}
{\partial x_k\partial x_j}.$$ Now an inductive argument implies: $$\label{eq15}
\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}D_{\alpha}^{(y)}\left(f(y)\right)dy=D_{\alpha}^{(x)}\left(f(x)^{p-1}\right).$$ Another outer induction gives, finally: $$\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}\Lambda\left(f(y)\right)dy=\Lambda\left(f(x)^{p-1}\right).$$ Next, let $g(x)$ be one more solution of equation (\[eq1\]), i.e.: $$\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}g(y)dy=g(x)^{p-1}.$$ By what we have already done, we have: $$\label{eq16}
\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}D_{\beta}^{(y)}\left(g(y)\right)dy=D_{\beta}^{(x)}\left(g(x)^{p-1}\right).$$ We now use the double integration technique. We multiply equation (\[eq15\]) by $D_{\beta}^{(x)}(g(x))$ and integrate $\int_{\mathbb{R}^n}\ldots dx$: $$\int_{\mathbb{R}^n}D_{\beta}^{(x)}\left(g(x)\right)\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}D_{\alpha}^{(y)}\left(f(y)\right)dydx=
\int_{\mathbb{R}^n}D_{\beta}^{(x)}\left(g(x)\right)D_{\alpha}^{(x)}\left(f(x)^{p-1}\right)dx.$$ Reversing the order of integration on the left hand side gives: $$\int_{\mathbb{R}^n}D_{\alpha}^{(y)}\left(f(y)\right)\int_{\mathbb{R}^n}\left|x-y\right|^{-\lambda}D_{\beta}^{(x)}\left(g(x)\right)dxdy=
\int_{\mathbb{R}^n}D_{\alpha}^{(y)}\left(f(y)\right)D_{\beta}^{(y)}\left(g(y)^{p-1}\right)dy.$$ Changing on the right hand side the name of the variable from $y$ to $x$, gives: $$\int_{\mathbb{R}^n}D_{\beta}^{(x)}\left(g(x)\right)D_{\alpha}^{(x)}\left(f(x)^{p-1}\right)dx=\int_{\mathbb{R}^n}D_{\alpha}^{(x)}
\left(f(x)\right)D_{\beta}^{(x)}\left(g(x)^{p-1}\right)dx.$$ This proves equation (\[eq6\]). This is the commutativity part. We now prove orthogonality. We start with equation (\[eq6\]) in the special case $f(x)=g(x)$. We get: $$\int_{\mathbb{R}^n}D_{\beta}^{(x)}\left(f(x)\right)D_{\alpha}^{(x)}\left(f(x)^{p-1}\right)dx=\int_{\mathbb{R}^n}D_{\alpha}^{(x)}
\left(f(x)\right)D_{\beta}^{(x)}\left(f(x)^{p-1}\right)dx.$$ Integration by parts (moving all the derivatives onto the factor carrying $f(x)^{p-1}$) gives: $$\int_{\mathbb{R}^n}D_{\alpha}^{(x)}
\left(f(x)\right)D_{\beta}^{(x)}\left(f(x)^{p-1}\right)dx=(-1)^{|\alpha|}\int_{\mathbb{R}^n}f(x)\cdot D_{\alpha+\beta}^{(x)}
\left(f(x)^{p-1}\right)dx,$$ while $$\int_{\mathbb{R}^n}D_{\alpha}^{(x)}
\left(f(x)^{p-1}\right)D_{\beta}^{(x)}\left(f(x)\right)dx=(-1)^{|\beta|}\int_{\mathbb{R}^n}f(x)\cdot D_{\alpha+\beta}^{(x)}
\left(f(x)^{p-1}\right)dx.$$ This proves equation (\[eq7\]). Equation (\[eq8\]) is a consequence of equation (\[eq6\]) and equation (\[eq7\]). This proves Corollary 1.3, Theorem 1.5 and Theorem 1.7. $\qed $
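For concreteness, the lowest nontrivial instance of the orthogonality relation (our own illustration; it presumes enough decay of $f$ at infinity to discard boundary terms, as in the formal computations above) is obtained by taking $\alpha=e_j$, a unit multi-index, and $\beta=\overline{0}$:

```latex
% alpha = e_j, beta = 0: the signs (-1)^{|alpha|} = -1 and (-1)^{|beta|} = +1
% can only agree if the common integral vanishes:
\int_{\mathbb{R}^n} f(x)\,\frac{\partial}{\partial x_j}\left(f(x)^{p-1}\right)dx = 0 .
% Consistency check: the integrand is a total derivative,
%   f\,\partial_j\!\left(f^{p-1}\right) = (p-1)\,f^{p-1}\,\partial_j f
%                                       = \frac{p-1}{p}\,\partial_j\!\left(f^{p}\right),
% so the integral indeed vanishes for decaying f, by the divergence theorem.
```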
[3]{}
Elliott H. Lieb, Sharp constants in the Hardy-Littlewood-Sobolev and related inequalities, [*Annals of Mathematics*]{}, [**118**]{} (1983), 349-374.
Stein, E. M., and G. Weiss, Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, New Jersey, 1971.
Gennadi Vainikko, Multidimensional Weakly Singular Integral Equations, Lecture Notes in Mathematics [**1549**]{}, Springer-Verlag, Berlin Heidelberg New York London Paris Tokyo Hong Kong Barcelona Budapest, 1993.
[*Ronen Peretz\
Department of Mathematics\
Ben Gurion University of the Negev\
Beer-Sheva , 84105\
Israel\
E-mail: ronenp@math.bgu.ac.il*]{}\
A basic problem in condensed matter physics is to find the ground state of a given Hamiltonian embodying the interactions of a many body system. A prototype of such a system is the Heisenberg spin 1/2 chain, described by the following Hamiltonian $$\begin{aligned}
\label{H}
H &=&\sum_{i=1}^N J_x \sigma_{x,i}\sigma_{x,{i+1}}+J_y \sigma_{y,i}\sigma_{y,i+1}\cr
&+&J_z \sigma_{z,i}\sigma_{z,i+1}+B \sigma_{x,i},\end{aligned}$$ where $\sigma_{a}$ are the Pauli operators, $J_a$ are the coupling strengths and $B$ is the external magnetic field. There is a long history of attempts at finding exact solutions of this model at special points or lines of this parameter space [@korepin]. The matrix product formalism, originally introduced and developed in [@aklt; @mpsbasic], has recently been revived [@mpscirac; @ver1], mainly due to work in the quantum information community, where the emphasis is on the properties of many body states, like their entanglement or quantum correlations [@fan; @osterloh; @occoner]. In this formalism, one first constructs a many body state and then finds the Hamiltonian for which this state is an exact ground state. As is always the case when we reverse a difficult problem (in this case finding the ground state of an interacting spin system), the difficulty shows up in some other form in some other place: except in very rare cases [@aklt], the Hamiltonians which are found are usually neither simple nor of wide interest to condensed matter physicists [@stochastic]. In this letter, we show for the first time that the Heisenberg spin 1/2 chain can be solved exactly and in compact form on two-dimensional surfaces defined by $$\begin{aligned}
\label{BJ}
J_x &=&-J+\frac{1+g^2}{2} ,\ \ J_y=-\eta J+g, \cr \ J_z&=&-\eta
J-g \ \ \ \ , \ \ \ \ B=\epsilon(g^2-1),\end{aligned}$$ in which $(\epsilon,\eta)=\pm (1,\pm 1)$ are two discrete parameters, and $g$ and ($J>0$) are two continuous parameters. Unlike the ${\rm xxx}$ or ${\rm xxz}$ Heisenberg anti-ferromagnetic chains whose solutions are implicitly given via the solution of Bethe ansatz equations, the ground states of these models can be determined quite explicitly and expressed in terms of simple functions. Yet as we will see these ground states are quite rich in their properties. We will calculate the spin correlation functions exactly and show that singularities in the thermodynamic limit develop at $g=0$, a property which has been called MPS-Quantum Phase Transition in [@mpscirac], to distinguish them from known examples of QPT’s [@sachdev].\
We will also show that these ground states have the very interesting property that all pairs of spins are equally entangled with each other. This is a very desirable situation for quantum information processing, both theoretically and experimentally: an array of qubits in which there is long-range entanglement, figure (\[ring\]).
Let us briefly review the MPS formalism. On a ring of $N$ sites of $d-$level particles, a state is called a matrix product state if there exist matrices $A_i, i=0,\cdots, d-1$ (of dimension $D$) such that $$\label{mat}
\psi_{i_1,i_2,\cdots
,i_N}=\frac{1}{\sqrt{Z}}tr(A_{i_1}A_{i_2}\cdots A_{i_N}),$$ where $Z$ is a normalization constant given by $
Z=tr(E^N)
$ in which $ E:=\sum_{i=0}^{d-1} (A_i^*\otimes A_i). $ The state (\[mat\]) is reflection symmetric if there exists a matrix $\Pi$ such that $A_i^T=\Pi A_i\Pi^{-1}$ (where $T$ means transpose) and time-reversal invariant if there exists a matrix $V$ such that $A_i^*=VA_iV^{-1}$. All the correlation functions can be calculated exactly. For example, for a local observable $O$, one finds $$\label{1point}
\la \Psi|O(k)|\Psi\ra = \frac{tr(E^{k-1}E_O E^{N-k})}{tr(E^N)},$$ where $E_O := \sum_{i,j=0}^{d-1}\la i|O|j\ra A_i^*\otimes A_j.$ In the thermodynamic limit ($N\rightarrow\infty$), only the eigenvector(s) corresponding to the eigenvalue $\lambda_{max}$ of $E$ with the largest absolute value matters and any level-crossing in this eigenvalue leads to a discontinuity in correlation functions.\
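Equation (\[1point\]) is straightforward to check numerically. The sketch below (our own illustration; the $2\times 2$ matrices and all parameter values are arbitrary choices, not tied to any particular model) compares the transfer-matrix formula against a brute-force contraction of the wavefunction (\[mat\]) on a small ring:

```python
import numpy as np

# Transfer-matrix one-point function, Eq. (1point):
#   <Psi|O(k)|Psi> = tr(E^{k-1} E_O E^{N-k}) / tr(E^N),
# checked against a brute-force contraction of psi_{i1..iN} = tr(A_{i1}...A_{iN}).
# The two 2x2 matrices below are an arbitrary illustrative choice (d = 2, D = 2).
A = [np.array([[1.0, 0.3], [1.0, 1.0]]),
     np.array([[1.0, -0.3], [-1.0, 1.0]])]
sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # O = sigma_x as the test observable

def E_of(O):
    # E_O = sum_{ij} <i|O|j> A_i^* (x) A_j ; O = identity gives E itself
    return sum(O[i, j] * np.kron(A[i].conj(), A[j])
               for i in range(2) for j in range(2))

E = E_of(np.eye(2))

def one_point(O, k, N):
    mp = np.linalg.matrix_power
    return np.trace(mp(E, k - 1) @ E_of(O) @ mp(E, N - k)) / np.trace(mp(E, N))

# Brute force on a ring of N = 6 sites: build the full 2^N component vector
N = 6
psi = np.array([np.trace(np.linalg.multi_dot(
    [A[(idx >> (N - 1 - s)) & 1] for s in range(N)])) for idx in range(2 ** N)])
psi /= np.linalg.norm(psi)

def site_op(O, k):
    # O acting on site k (1-based), identity elsewhere
    out = np.eye(1)
    for s in range(1, N + 1):
        out = np.kron(out, O if s == k else np.eye(2))
    return out

brute = psi @ site_op(sx, 3) @ psi
```

The same contraction argument extends to two-point functions by inserting a second $E_O$ factor at the other site.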
Given a matrix product state, the reduced density matrix of $k$ adjacent sites is given by $$\rho_{i_1\cdots i_k,j_1\cdots j_k}=\frac{tr((A_{i_1}^*\cdots A_{i_k}^*\otimes A_{j_1}\cdots A_{j_k})E^{N-k})}{tr(E^N)}.$$ This density matrix has at least $d^k-D^2$ zero eigenvalues. To see this, suppose that we can find complex numbers $c_{i_1\cdots i_k}$ such that $$\label{ccA}
\sum_{j_1,\cdots, j_k=0}^{d-1}c_{j_1\cdots j_k}
A_{j_1}\cdots A_{j_k}=0.$$ This is a system of $D^2$ equations for $d^k$ unknowns which has at least $d^k-D^2$ independent solutions. Any such solution gives a null eigenvector of $\rho$. Thus for the density matrix of $k$ adjacent sites to have a null space, it is sufficient (but not necessary) that $ d^k\
>\ D^2. $ Let the null space of the reduced density matrix be spanned by the orthogonal vectors $|e_{\a}\ra, \a=1,\cdots, s$, then we can construct the local Hamiltonian acting on $k$ consecutive sites as $$h:=\sum_{\a=1}^s J_{\a} |e_{\a}\ra\la e_{\a}|,$$ where $J_{\a}$ are positive constants. The total Hamiltonian on the chain will then be given by the positive operator $ \ \
H=\sum_{l=1}^N h_{l, l+k},\ $ where $h_{l,l+k}$ is the embedding of $h$ into the $k$ consecutive sites starting at site $l$ of the chain. The state $|\psi\ra$ will then be a ground state of $H$. The condition $ d^k>D^2 $ puts a stringent requirement on the dimensions of the matrices used in the construction of a matrix product state. When dealing with spin $1/2$ chains with nearest-neighbor interactions, for which $d=2$ and $k=2$, it appears that the only admissible dimension for the matrices $A_0$ and $A_1$ is $D=1$, leading to a product state. However, it is crucial to note that the condition $d^2>D^2$ is only a sufficient and not a necessary condition for the density matrix $\rho$ to have a null space.\
To proceed with our construction, we require that the state satisfy some natural symmetries, i.e. spin-flip symmetry which, in the language of matrix product formalism, means that there is a matrix $X$ such that $$XA_{0}X^{-1}=\epsilon A_{1},\ \ \ XA_{1}X^{-1}=\epsilon A_{0},$$ where $\epsilon^2=1$. Working in the basis where $X=\sigma_z$, we find the general form of the matrices $A_0$ and $A_1$: $$A_0=\left(\begin{array}{cc} a & b \\ c & d \end{array}\right)\h A_1=\epsilon\left(\begin{array}{cc} a & -b \\ -c & d
\end{array}\right).$$ Although these two matrices are not symmetric, the state constructed from them is symmetric under parity, since there is a matrix $\Pi=\left(\begin{array}{cc} b & 0
\\ 0 & c
\end{array}\right)$ with the property $$\Pi A_0^t\Pi^{-1}=A_0, \h \Pi A_1^t\Pi^{-1}=A_1.$$ We now consider the matrix equation (\[ccA\]) which in the present case is $$\label{cA}
c_{00}A_0^2 + c_{01}A_0A_1+c_{10}A_1A_0 + c_{11}A_1^2=0.$$ This is a set of linear equations for the four coefficients $c_{ij}$, which can be written as a matrix equation $MC=0$, leading to a non-zero solution when $$det(M)\equiv 16 b^2 c^2 (a-d)^2 (a+d)^2=0.$$ Thus we will find non-trivial models, for $a=d$ or $a=-d$. The models with $b=0$ or $c=0$ are not symmetric under parity, since in these cases the matrix $\Pi$ will not be invertible. We can always re-scale the matrices by a constant factor without affecting the matrix product state, so we set $a=1$ and use a subsequent gauge transformation $A_i\ro SA_iS^{-1}$ with $S=\left(\begin{array}{cc} c
& 0
\\ 0 & 1 \end{array}\right)$, to set $c=1$. Therefore we are left with the following four classes of models defined by the matrices $$\label{modelA}
A_0=\left(\begin{array}{cc} 1 & g \\ 1 & \eta\end{array}\right)\h A_1=\epsilon\left(\begin{array}{cc} 1 & -g \\ -1 &
\eta\end{array}\right),$$ where $g$ is a continuous parameter and $(\epsilon,\eta)=\pm (1,\pm
1) $. The four types of models are distinguished by the values of the pair $(\epsilon, \eta)$. The eigenvalues of the matrix $E=A_0\otimes A_0+A_1\otimes A_1$ are $ 2(\eta \pm g),\ \ 2(1\pm g)\
.$ The correlation functions can be derived from (\[1point\]). Consider for definiteness, the case $\eta=1$. The magnetization per site is found from (\[1point\]) to be $$\la \sigma_y\ra = \la \sigma_z\ra = 0,\h \la \sigma_x\ra =
\epsilon {u}\frac{1+u^{N-2}}{1+u^N},$$ where $u:=\frac{1-g}{1+g}$ and the correlation functions $G_a(1,r):= \la
\sigma_{a,1}\sigma_{a,r}\ra $ are similarly found to be as follows: $$\begin{aligned}
\label{correlations+1}
&&G_x(1,r) = \frac{u^2+u^{N-2}}{1+u^N}, \cr && G_y(1,r) =
\frac{u^{N-2}(u^2-1)}{1+u^{N}}, \ \ G_z(1,r) =
\frac{1-u^2}{1+u^{N}}.\end{aligned}$$ These correlation functions satisfy the following relations: $$\label{condition} G_x+G_y+G_z=1\ \ ,\ \ (1-G_z)(1-G_y)=\la \sigma_x\ra^2.$$ In the thermodynamic limit ($N\ro \infty$), discontinuities develop in these correlation functions at $g=0$. For example, the magnetization per site will be $$\la \sigma_x\ra = \epsilon \frac{1-|g|}{1+|g|},$$
as displayed in figure (2). Interestingly, the spins align themselves opposite to the magnetic field. To understand this, let us set for definiteness $\epsilon=\eta=1$ and consider the two-body Hamiltonian $H_2$, after subtracting the ferromagnetic term $-J{\sigma} \cdot {\sigma}$, for which any product state $|\phi\ra|\phi\ra$ is an eigenstate. For $|g|\gg 1$ the remaining Hamiltonian tends to $ H_2\sim
g^2(\frac{1}{2}\sigma_{1x}\sigma_{2x}+\sigma_{1x}+\sigma_{2x})$ , with the ground state $|x-\ra|x-\ra$ and for $|g|\ll 1$ it tends to $H_2\sim
(\frac{1}{2}\sigma_{1x}\sigma_{2x}-\sigma_{1x}-\sigma_{2x})$, with the ground state $|x+\ra|x+\ra$. It is thus the combination of the anti-ferromagnetic interaction in the $x$ direction and the magnetic field which align the spins opposite to the magnetic field. Even when we tune the coupling $J$ so that there is no $xx$ coupling, (i.e. $J=(1+g^2)/2$ ) this anti-alignment happens. To understand this, consider $g\approx 1$, where $H_2\approx -
\sigma_{1z}\sigma_{2z}+B(\sigma_{1x}+\sigma_{2x})$ with $B$ very small. The ground state of this Hamiltonian is $|\psi\ra\approx
|\phi_+\ra-B|\psi_+\ra$, where $|\psi_+\ra$ and $|\phi_+\ra$ are Bell states. One then finds that $\la
\sigma_{1x}+\sigma_{2x}\ra=-4B$, confirming figure (2).\
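The finite-$N$ expressions above can be verified directly for small rings (a self-contained numerical check of our own, with $\eta=\epsilon=1$ and illustrative values of $g$, $N$ and $r$):

```python
import numpy as np

# Verify <sigma_x> and G_z(1,r) for eta = eps = 1 against an exact contraction
# of the state psi_{i1..iN} = tr(A_{i1}...A_{iN}); g, N, r are illustrative.
g, N, r = 0.3, 6, 3
u = (1 - g) / (1 + g)
A = [np.array([[1.0, g], [1.0, 1.0]]),
     np.array([[1.0, -g], [-1.0, 1.0]])]

psi = np.array([np.trace(np.linalg.multi_dot(
    [A[(idx >> (N - 1 - s)) & 1] for s in range(N)])) for idx in range(2 ** N)])
psi /= np.linalg.norm(psi)

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def expect(ops):
    # ops maps a (1-based) site to a 2x2 operator; identity elsewhere
    out = np.eye(1)
    for s in range(1, N + 1):
        out = np.kron(out, ops.get(s, np.eye(2)))
    return psi @ out @ psi

mag = expect({1: sx})          # magnetization per site
Gz = expect({1: sz, r: sz})    # G_z(1, r), Eq. (correlations+1)

mag_closed = u * (1 + u ** (N - 2)) / (1 + u ** N)
Gz_closed = (1 - u ** 2) / (1 + u ** N)
```

The check also confirms that $G_z(1,r)$ does not depend on the separation $r$, as the closed form indicates.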
In order to see how the Hamiltonian is constructed, we solve equations (\[cA\]) which in view of (\[modelA\]) take the form $$\begin{aligned}
% \nonumber to remove numbering (before each equation)
(1+g)(C_{00}+C_{11})+\epsilon(1-g)(C_{01}+C_{10}) &=& 0 \\
(1+\eta)(C_{00}-C_{11})-\epsilon(1-\eta)(C_{01}-C_{10}) &=& 0.\end{aligned}$$ It is easy to verify that the solution space is determined by the following two un-normalized vectors, $$\begin{aligned}
|e_1\ra &=& (1+\eta)|\psi_-\ra +(1-\eta)|\phi_-\ra\cr
&&\cr
|e_2\ra &=& (1+g)|\psi_+\ra-\epsilon(1-g)|\phi_+\ra,\end{aligned}$$ where $|\psi_{\pm}\ra=\frac{1}{\sqrt{2}}(|0,1\ra\pm|1,0\ra)$ and $|\phi_{\pm}\ra=\frac{1}{\sqrt{2}}(|0,0\ra\pm|1,1\ra)$ are Bell states. Under spin flip the above states transform as $|e_{1,2}\ra\lo \mp |e_{1,2}\ra$. The final local Hamiltonian will be given by $ h=J|e_1\ra\la e_1|+|e_2\ra\la e_2|, $ where $J$ is a non-negative parameter and we have used the freedom for rescaling the couplings of the Hamiltonian to set one of the parameters equal to 1. In view of the symmetry property of the vectors under spin flip, this Hamiltonian will be symmetric under spin flip. Expressing the above operator in terms of Pauli operators and subtracting constant terms, we find the total Hamiltonian which is written in (\[H\]) and (\[BJ\]).\
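The two null vectors can be checked against equation (\[cA\]) for all four $(\epsilon,\eta)$ classes at once (our own numerical check; the value of $g$ is illustrative):

```python
import numpy as np

# Verify that |e_1>, |e_2> solve Eq. (cA): sum_{jk} c_{jk} A_j A_k = 0,
# with A_0, A_1 from Eq. (modelA), for all four (eps, eta) classes.
# Two-site basis ordering: |00>, |01>, |10>, |11>; g is illustrative.
g = 0.7
s2 = np.sqrt(2.0)
psi_p = np.array([0.0, 1.0, 1.0, 0.0]) / s2    # |psi_+>
psi_m = np.array([0.0, 1.0, -1.0, 0.0]) / s2   # |psi_->
phi_p = np.array([1.0, 0.0, 0.0, 1.0]) / s2    # |phi_+>
phi_m = np.array([1.0, 0.0, 0.0, -1.0]) / s2   # |phi_->

def residual(c, A):
    C = c.reshape(2, 2)  # components c_{j1 j2}
    S = sum(C[j, k] * (A[j] @ A[k]) for j in range(2) for k in range(2))
    return np.linalg.norm(S)

worst = 0.0
for eps in (1.0, -1.0):
    for eta in (1.0, -1.0):
        A = [np.array([[1.0, g], [1.0, eta]]),
             eps * np.array([[1.0, -g], [-1.0, eta]])]
        e1 = (1 + eta) * psi_m + (1 - eta) * phi_m
        e2 = (1 + g) * psi_p - eps * (1 - g) * phi_p
        worst = max(worst, residual(e1, A), residual(e2, A))
```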
Note that the models with $\epsilon = \pm 1$ (sign of the magnetic field) are related by local $\pi $-rotations of spins around the $z$ axis where ($\sigma_{x,y}\ro -\sigma_{x,y}$). Also, the models with $\eta=\pm$ are related to each other by simultaneous rotations $R_x(\frac{\pi}{2})\otimes R_x(\frac{-\pi}{2})$ of spins on adjacent sites, under which we have ($\sigma_{z,i}\sigma_{z,i+1}\rightleftharpoons
-\sigma_{y,i}\sigma_{y,i+1})$. This is, of course, possible only when $N$ is even [@Nodd].\
The explicit form of such a ground state can also be determined. For $\eta=1$, the matrices $A_0$ and $A_1$ commute. By a similarity transformation which does not change the state (\[mat\]), both the matrices are made diagonal, $$A_0=\left(\begin{array}{cc} 1+\sqrt{g}& 0 \\ 0 &
1-\sqrt{g}\end{array}\right)\ , A_1=\left(\begin{array}{cc} 1-\sqrt{g}& 0 \\ 0 & 1+\sqrt{g}\end{array}\right)$$ and the MPS state (\[mat\]) will be given by $$\label{groundstate}
|\Psi\ra_{\eta=1} = \frac{1}{\sqrt{Z}}(|\phi_+\ra^{\otimes
N}+|\phi_-\ra^{\otimes N}),$$ where $$|\phi_{\pm}\ra =(1\pm \sqrt{g})|0\ra + (1\mp
\sqrt{g})|1\ra,$$ and $Z=2^{N+1}((1+g)^N+(1-g)^N)$. These expressions are valid for all values of $g$, provided that we replace $\sqrt{g}\lo i\sqrt{-g}$ when we consider negative values of $g$. Note that $\la \phi_+|\phi_-\ra=2(1-g)$. One can indeed check that the separate product states are ground states of $H$ (i.e. the local Hamiltonian acting on two adjacent sites, with a suitable constant added, annihilates $|\phi_{\pm}\ra^{\otimes 2}$). However, the advantage of the MPS state we have constructed, $|\phi_{+}\ra^{\otimes N}+ |\phi_-\ra^{\otimes N}$, or of the other state $|\phi_{+}\ra^{\otimes N}- |\phi_-\ra^{\otimes N}$, is that they are invariant under the spin-flip transformation $\sigma_x^{\otimes
N}$. Thus even if the couplings of the Hamiltonian are not tuned exactly as in (\[BJ\]), but are perturbed slightly, first order perturbation theory guarantees that one of these entangled states, and not the product states, will be the unique ground state of $H$.\
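This claim can be checked end to end by exact diagonalization on a small even ring (our own check, with $\epsilon=\eta=1$ and illustrative $g$ and $J$; the eigenvalue $-N(J+\frac{1+g^2}{2})$ is our own bookkeeping of the constant terms dropped when passing from the projector form of $h$ to equations (\[H\]) and (\[BJ\])):

```python
import numpy as np

# Check that phi_+^{(x)N} + phi_-^{(x)N} is an exact eigenstate of the
# Hamiltonian (H) with couplings (BJ), for eps = eta = 1 on an even ring.
# The eigenvalue -N(J + (1+g^2)/2) is our own bookkeeping of the constants
# dropped in the text; g, J and N are illustrative.
g, J, N = 0.3, 1.0, 6
Jx, Jy, Jz, B = -J + (1 + g**2) / 2, -J + g, -J - g, g**2 - 1

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(ops):
    # ops maps a (0-based) site of the ring to a 2x2 operator
    out = np.eye(1, dtype=complex)
    for s in range(N):
        out = np.kron(out, ops.get(s, I2))
    return out

H = sum(Jx * embed({i: sx, (i + 1) % N: sx})
        + Jy * embed({i: sy, (i + 1) % N: sy})
        + Jz * embed({i: sz, (i + 1) % N: sz})
        + B * embed({i: sx})
        for i in range(N))

# Candidate ground state, Eq. (groundstate), for 0 < g < 1
rt = np.sqrt(g)
phi_p = np.array([1 + rt, 1 - rt], dtype=complex)
phi_m = np.array([1 - rt, 1 + rt], dtype=complex)

def kron_power(v, n):
    out = np.array([1.0 + 0j])
    for _ in range(n):
        out = np.kron(out, v)
    return out

psi = kron_power(phi_p, N) + kron_power(phi_m, N)
psi /= np.linalg.norm(psi)
energy = (psi.conj() @ H @ psi).real
```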
We now come to the entanglement properties of the state (\[groundstate\]). At $g=1$ when $\la \phi_+|\phi_-\ra=0$, the state becomes a standard GHZ state, $\frac{1}{\sqrt{2}}(|0\cdots
0\ra+|1\cdots 1\ra)$. For other values of $g$, when $|\phi_+\ra$ and $|\phi_-\ra$ are no longer orthogonal, it can be named a generalized $GHZ$ state. Obviously such a state induces equal entanglement between any two spins regardless of their distance. To calculate this entanglement we determine the reduced two particle density matrix and use Wootters formula [@wootters], with the result [@conc]: $$C=\frac{4|g|}{|(1+g)^N+(1-g)^N|}|1-|g||^{N-2}.$$ Thus although the ring is not totally connected, the mutual entanglement of all pairs are equal and independent of their distances. Looking at the $N\gg 1$ limit, one can obtain the relation $$\label{scaling}
NC(\frac{g}{N},N)\approx \frac{2|g|e^{-|g|}}{\cosh g}.$$ One can interpret the left hand side as the total mutual entanglement of a spin with all the other spins, and the above equation as a universal scaling relation for this total entanglement.\
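Both the equal pairwise entanglement and the closed-form concurrence can be verified numerically (our own check via Wootters' formula [@wootters]; $g$ and $N$ are illustrative, with $0<g<1$ so that $\sqrt{g}$ is real):

```python
import numpy as np

# Wootters concurrence of two spins in psi = phi_+^{(x)N} + phi_-^{(x)N},
# compared with the closed form C = 4|g| |1-|g||^{N-2} / |(1+g)^N + (1-g)^N|.
# g and N are illustrative, with 0 < g < 1.
g, N = 0.3, 6
rt = np.sqrt(g)
phi_p = np.array([1 + rt, 1 - rt])
phi_m = np.array([1 - rt, 1 + rt])

def kron_power(v, n):
    out = np.array([1.0])
    for _ in range(n):
        out = np.kron(out, v)
    return out

psi = kron_power(phi_p, N) + kron_power(phi_m, N)
psi /= np.linalg.norm(psi)

# Reduced state of the first two spins; by permutation symmetry of psi,
# every pair of sites has the same reduced state.
M = psi.reshape(4, -1)
rho = M @ M.T

# Wootters: C = max(0, l1 - l2 - l3 - l4), l_i = decreasing sqrt-eigenvalues
# of rho (sy x sy) rho^* (sy x sy); rho is real here, so rho^* = rho.
sy = np.array([[0, -1j], [1j, 0]])
Y = np.kron(sy, sy).real          # sigma_y (x) sigma_y is a real matrix
l = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ Y @ rho @ Y).real)))[::-1]
C_num = max(0.0, l[0] - l[1] - l[2] - l[3])

C_closed = 4 * g * (1 - g) ** (N - 2) / ((1 + g) ** N + (1 - g) ** N)
```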
For the case $\eta=-1$, we use the transformation $U:=R_x(\frac{\pi}{2})\otimes R_x(\frac{-\pi}{2})$ on adjacent sites which transforms $H(\eta=1)$ to $H(\eta=-1)$ and act on (\[groundstate\]) by $U^{\otimes \frac{N}{2}}$ to obtain the ground state as $$\label{groundstate2}
|\Psi\ra_{\eta=-1} =
\frac{1}{\sqrt{Z}}(|\chi_+\ra|\chi_-\ra)^{\otimes
\frac{N}{2}}+(|\chi_-\ra|\chi_+\ra)^{\otimes \frac{N}{2}},$$ where $$|\chi_{\pm}\ra
=(1+\sqrt{g})|y,\pm\ra\ \pm\ i(1-\sqrt{g})|y,\mp\ra,$$ in which $|y,\pm\ra$ denote the eigenstates of $\sigma_y$. An alternative way for deriving this ground state and indeed the reason for its simple structure, is to note that for $\eta=-1$, although the matrices $A_0$ and $A_1$ do not commute, the pairs of matrices corresponding to Bell states, defined by $$\Phi_{mn}:=\frac{1}{\sqrt{2}}(A_{0}A_m+(-1)^nA_{1}A_{1+m}), \ \ m,n=0,1$$ commute with each other. The reader can verify that the states $|\chi\ra_{\pm}|\chi_{\mp}\ra$ are indeed linear combinations of the Bell states $\phi_{00}\equiv\phi_+$ , $\phi_{01}\equiv\psi_-$ and $\phi_{10}\equiv\psi_+$, making the state (\[groundstate2\]) a linear superposition of strings of various Bell states on adjacent sites.\
In summary we have introduced a two-parameter family of spin 1/2 ${\rm xyz}$ Heisenberg chains, with nearest neighbor interactions in an external magnetic field, for which ground states and all correlation functions can be calculated exactly. These states have two very interesting properties: first, they undergo a discontinuous (quantum phase) transition as one of the parameters passes a critical point, which stimulates further exploration of MPS-quantum phase transition in a set of important exactly solvable models. Second, they have the property that all the pairs of spins are equally entangled with each other. This makes them good candidates for engineering long-range entanglement in experimentally realizable arrays of qubits or spin systems. This study can be extended in several directions, including generalization to open chains, finding the excitations, perturbing them to explore more models, and actual engineering of such chains for small number of qubits for information processing tasks. We thank David Gross, of Imperial college, London for a very stimulating email correspondence which led to substantial improvement of this paper and also A. Langari and M. R. Rahimitabar, for their valuable comments. Corresponding author, V. Karimipour, vahid@sharif.edu.
[99]{} V. E. Korepin and O. I. Patu, preprint, Cond-Mat/0701491. I. Affleck, T. Kennedy, E.H. Lieb and H. Tasaki, Phys. Rev. Lett. [**59**]{}, 799 (1987). M. Fannes, B. Nachtergaele and R. F. Werner, Europhys. Lett. [**10**]{} 633, (1989), A. Klumper, A. Schadschneider, J. Zittartz, J. Phys. A [**24**]{} L955 (1991); Z. Phys. B, [**87**]{}, 281 (1992). M. M. Wolf, G. Ortiz, F. Verstraete, J. I. Cirac, Phys. Rev. Lett.[**97**]{}, 110403 (2006); D. Peres Garcia, et al, quant-ph/0608197. F. Verstraete, M.A. Martin-Delgado, J.I. Cirac, Phys. Rev. Lett. 92, 087201 (2004); F. Verstraete, M. Popp, J.I. Cirac, Phys. Rev. Lett. 92, 027901 (2004); F. Verstraete, J.I. Cirac, J.I. Latorre, E. Rico, M.M. Wolf, Phys.Rev.Lett. 94 (2005) 140601. H. Fan, V. E. Korepin and V. Roychowdhury, Phys. Rev. Letts., [**93**]{}, 227203 (2004); A. R. Its, B.-Q. Jin, and V. E. Korepin, Jour. Phys. A. Math. Gen. [**38**]{}, 2975 (2005). A. Osterloh, L. Amico, G. Falci and R. Fazio, Nature 416, 608 (2002); T. J. Osborne, and M. A. Nielsen, Phys. Rev. A 66, 032110 (2002); M. C. Arnesen, S. Bose, and V. Vedral, Phys. Rev. Lett. [**87**]{}, 277901 (2001); M. Cozzini, Radu Ionicioiu, and Paulo Zanardi, Quantum fidelity and quantum phase transition in matrix product states, Cond-mat/0611727. K. M. O’Connor and W. K. Wootters, Phys. Rev. A 63 (2001) 052302; M. Asoudeh, and V. Karimipour, Phys. Rev. A [**70**]{}, 052307 (2004). Here we are exlcuding the application of the MPS formalizm in stochastic systems, where ground state of non-hermitian matrices represent steady state of a stochastic process, B. Derrida, M.R. Evans, V. Hakim, V. Pasquier, J. Phys. A, [**26**]{}, 1493 (1993), V. Karimipour, Phys. Rev. E59, 205 (1999). S. Sachdev, [*Quantum Phase Transitions*]{} (Cambridge University Press, Cambridge, 1999). W. K. Wootters, Phys. Rev. Letts. [**80**]{}, 2245 (1998). With conditions (\[condition\]), the matrix $\rho\tilde{\rho}$ has only two non-zero eigenvalues given by $
\lambda_{1,2}=\frac{1}{4}(G_y\pm G_z)^2,
$ which gives $
C=|\sqrt{\lambda_1}-\sqrt{\lambda_2}|.
$ For $\eta=-1$ and odd $N$ such an MPS does not exist, i.e. $\Psi$ is identically zero. One can prove this by induction using the facts that $tr(A_0)=tr(A_1)=0$, and that $A_0^2$, $A_1^2$ and $A_0A_1+A_1A_0$ are all proportional to the identity.
---
author:
- 'A. Shulevski'
- 'P. D. Barthel'
- 'R. Morganti'
- 'J. J. Harwood'
- 'M. Brienza'
- 'T. W. Shimwell'
- 'H. J. A. Röttgering'
- 'G. J. White'
- 'J. R. Callingham'
- 'S. Mooney'
- 'D. A. Rafferty'
bibliography:
- '3C236\_lofar.bib'
title: LOFAR first look at the giant radio galaxy 3C 236
---
[We have examined the giant radio galaxy 3C 236 using LOFAR at 143 MHz down to an angular resolution of $ 7\arcsec $, in combination with observations at higher frequencies. We have used the low frequency data to derive spectral index maps with the highest resolution yet at these low frequencies. We confirm a previous detection of an inner hotspot in the north-west lobe and for the first time observe that the south-east lobe hotspot is in fact a triple hotspot, which may point to intermittent source activity. Also, the spectral index map of 3C 236 shows that the spectral steepening at the inner region of the northern lobe is prominent at low frequencies. The outer regions of both lobes show spectral flattening, in contrast with previous high frequency studies. We derive spectral age estimates for the lobes, as well as particle densities of the IGM at various locations. We propose that the morphological differences between the lobes are driven by variations in the ambient medium density as well as the source activity history.]{}
Introduction {#intro}
============
Giant radio galaxies (GRGs) are radio galaxies whose radio emitting regions (jets, lobes) extend over projected distances $ \geq 1 $ Mpc [@RefWorks:222; @RefWorks:40; @RefWorks:254; @RefWorks:236; @RefWorks:101]. Their morphology can be classified as core-brightened FRI or edge-brightened FRII [@Fanaroff1974]. There is no evidence that they are particularly more energetic than the general population of radio galaxies [e.g. @Lara2001].
A low-density environment may be the key factor enabling their growth to such large sizes. [@RefWorks:148] have indeed found that the surrounding medium for their sample of GRGs is an order of magnitude less dense than that around smaller radio sources. Hence the radio sources can expand freely, implying that expansion losses rather than radiative losses are the dominant energy loss mechanism for the relativistic particle populations in the radio lobes.
Apart from their size, GRGs are not fundamentally different from other radio galaxies, and they are expected to be subject to the same processes that are present in smaller radio galaxies. The AGN that power them go through a cycle of active and inactive periods [e.g. @McNamara2007; @Morganti2017]. Hence, we might expect GRGs to show evidence of multiple activity episodes, both in their radio morphology and spectra. They may exhibit double-double morphologies [some examples can be found in the sample of @Malarecki2013] and show signs of spectral curvature indicating radiative ageing of the relativistic particles responsible for their extended radio emission.
There are several studies of the ages of GRGs using radio data. [@RefWorks:148] have performed radiative ageing analysis of five giant radio galaxies (including 3C 236), obtaining ages less than 400 Myr. More recently, [@Hunik2016] have presented a restarted giant radio galaxy for which they derive a radiative age of 160 Myr. Also, Cantwell et al. (submitted) studied NGC 6251 and found ages ranging from 50 Myr to greater than 200 Myr. [@Orru2015] have studied the famous double-double GRG B1864+620 and showed that the source ages derived for its outer pair of radio lobes indicate that the AGN activity has stopped in the recent past.
For many years following its discovery in the late 1950s, 3C 236 was an unresolved source. It was catalogued as such in the first 3C catalogue and kept its status up to and including the study of [@RefWorks:224]. However, using the Westerbork Synthesis Radio Telescope (WSRT), [@RefWorks:222] discovered low surface brightness radio lobes emanating from the compact source, extending over a total projected linear size 4.5 Mpc ($ z = 0.1005 $)[^1]. For decades, it was the largest known radio galaxy [but see @RefWorks:236 for the current record holder], hence 3C 236 is listed as a GRG in radio survey catalogues.
[@RefWorks:130] investigated the radio morphology at a variety of scales. They noted that the low surface brightness emission of the lobes, especially the north-west (NW) one, shows a large-scale (300 kpc) wiggle, possibly associated with the jet slightly changing its orientation over the source’s lifetime (see their Figure 4). The NW lobe terminates in a diffuse plateau, and there is an inner hotspot embedded in it, which may indicate a separate episode of AGN activity or intermittent accretion. The south-east (SE) lobe is more extended and shows a double hotspot morphology which the authors suggest may be caused by an oblique shock deflecting the jet. [@RefWorks:225] studied the spectral index variations across the lobes and found that the spectral index steepens going from the outer edges of the lobes towards the host galaxy, similar to that observed in (hotspot dominated) FRII radio galaxies.
The host galaxy of 3C 236 has been studied in detail by @RefWorks:227, @RefWorks:228 and @RefWorks:51. Hubble Space Telescope (HST) imaging has revealed repeated bursts of star formation (on timescales of $ \sim 10^{7} $ and $ \sim 10^{9} $ yrs) in a warped dusty disk surrounding the AGN. This suggests that the younger starburst may be connected to the AGN reactivation which produced the currently active Compact Steep Spectrum (CSS) radio source at its centre [@RefWorks:51]. Thus, 3C 236 is an example of a radio galaxy showing signs of multiple epochs of radio activity.
The central regions of this radio galaxy are rich in gas. Atomic neutral hydrogen was revealed as a deep narrow absorption feature near the systemic velocity by [@vanGorkom1989]. The distribution of this gas was studied at high spatial resolution using VLBI techniques by [@Struve2012], who speculate about the radio source interacting with the cold ISM gas. [@Morganti2005] have discovered a broad and shallow wing, blueshifted up to 1000 [$\,$km$\,$s$^{-1}$]{}. This absorption is associated with a fast outflow (a feature shared by a number of restarted radio galaxies), and has been recently traced also on VLBI (pc) scales [@Schulz2018]. The presence of cold molecular gas (traced by CO) was found by [@Labiano2013], using the Plateau de Bure Interferometer at 209.5 GHz. The gas appeared to be rotating around the AGN, and was observed to be co-spatial with the dusty disk in which the AGN is embedded.
With the advent of the LOw Frequency ARray [LOFAR; @RefWorks:157] it is now possible, for the first time, to study the extended GRG morphology in a comprehensive manner at very low frequencies. LOFAR is sensitive to extended low surface brightness features due to its dense sampling of short spacings in the UV plane, while at the same time enabling high spatial resolution imaging leveraging its long (Dutch) baselines.
Within the framework of the LOFAR Surveys Key Science Project, the nearby-AGN working group has observed two GRGs: 3C 236 and NGC 6251. These are among the largest GRGs and have never been studied in such detail as LOFAR can provide in its frequency range. In this work, we present the results related to 3C 236. Our goal is to perform high resolution mapping of the radio morphology of its extended structure at the lowest frequencies to date, enabling us to trace the oldest emission regions. Our aim is also to extend the (resolved) spectral index studies of this object a factor of two lower in frequency compared to previous studies. This enables us to place tighter constraints on the source energetics and activity history, tying in with previous studies of this object.
The organization of this work is as follows. Section \[data\] describes the observations and the reduction procedure. In Section \[res\] we outline our results, we discuss them in Section \[dis\] and conclude with Section \[con\].
Observations {#data}
============
LOFAR observations
------------------
The observations were performed with the LOFAR telescope operating in high-band (HBA) mode, for a total on-source time of 8 hours, on the morning of October 9, 2018. Details of the observation are given in Table \[table:obs\]. Main and secondary calibrator scans were also performed before and after the target observation, ten minutes each in duration.
[ll]{}\
Project code & LT10\_010\
Central Frequency \[MHz\] & 143.65\
Bandwidth \[MHz\] & 47\
Integration time & 1 second\
Observation duration & 8 hours\
Polarization & Full Stokes\
Primary flux reference & 3C 147\
The data were initially pre-processed (flagged and averaged) by the LOFAR observatory pipeline [@RefWorks:180]. The station gains were determined using the main calibrator pointing and set to the @RefWorks:181 flux density scale.
To image the entire field of view at these low frequencies, the influence of the ionosphere has to be properly taken into account. The observation used was part of the ongoing LOFAR Two-metre Sky Survey (LoTSS) project and the data were processed using its reduction pipelines which perform directional (self) calibration and imaging. For a full description, please refer to [@Shimwell2017; @Shimwell2019].
3C 236 was the brightest source in the field, and our main source of interest, so we did not calibrate and image across the entire FoV, focusing only on the area around the target (Tasse et al., van Weeren et al. in prep.). The calibrated data set (with uv-coverage as shown in Figure \[3C236:uv\]) was imaged with WSclean [@Offringa2014] using multi-scale cleaning; scales of $ 0 - 2^{n}\, , n = [2, 6] $ pixels. The image shown in the main panel of Figure \[3C236:map\] was obtained using a UV taper of $ 7.4\,\mathrm{k}\lambda $ and Briggs weights with robustness set to $ -0.5 $. To emphasize source structure on the smallest scale, we have imaged without tapering using the same weights as described previously. The final image properties are listed in Table \[table:imgs\].
LOFAR flux densities are known to suffer from a systematic effect when the amplitude scale is set by transferring the gains from calibrator to target pointing. Different elevations of the target and calibrator sources will yield different gain normalization of the LOFAR beam pattern, which can appear as a frequency dependent flux density offset [@Shimwell2019]. To further verify our flux scale as measured from the images we have obtained, a check was performed measuring the flux density of the unresolved bright core as well as another nearby point source and comparing it with catalogue values; we found a residual flux excess of 42% which we corrected for by down-scaling the LOFAR image.
Literature data
---------------
In order to perform the spectral study of 3C 236, we have collected images from the literature that trace the emission of the extended lobes and could be combined with the LOFAR data. This combination needs to be done with care and in Section \[spec\_ind\] we comment more on the caveats.
We have used a legacy Very Large Array (VLA) survey image (NVSS[^2]), as well as a Westerbork Synthesis Radio Telescope (WSRT) image. The image properties are listed in Table \[table:imgs\]. The mid and high resolution LOFAR images are shown in Figure \[3C236:map\]. The images collected allow us to produce spectral index maps and to derive the spectral curvature and ages of the lobes.
In Figure \[3C236:int\] we plot the integrated source flux density measurements taken from the study of [@Mack1997] (with frequency coverage up to 10550 MHz, given in Table \[table:intflux\]), together with those measured in our low resolution LOFAR map and the NVSS map, both listed in Table \[table:imgs\].
![Integrated flux density of 3C 236.[]{data-label="3C236:int"}](Int_spec-eps-converted-to.pdf){width="50.00000%"}
The LOFAR integrated flux density (marked in red) shows a slight flattening of the integrated source spectrum at low frequencies compared to the points at high frequency. This is to be expected, as we shall discuss in the forthcoming sections of this work. This flattening was hinted at in previous studies [e.g. @Mack1997]. As can be discerned from Figure \[3C236:int\], the flux density scale of our LOFAR observations is as expected, within the errors.
Properties of the images used in this work (Table \[table:imgs\]):

| Instrument | $ \nu $ \[MHz\] | $ \Delta \nu $ \[MHz\] | $ \sigma $ \[mJy/b\] | Beam size |
|---|---|---|---|---|
| LOFAR | 143.65 | 46.87 | 0.26 | $ 11.77\arcsec \times 6.82\arcsec $ |
| LOFAR | 143.65 | 46.87 | 0.5 | $ 23.81\arcsec \times 19.18\arcsec $ |
| LOFAR | 143 | 53 | 3.0 | $ 50.0\arcsec $ |
| WSRT | 609 | - | 0.7 | $ 48\arcsec \times 28\arcsec $ |
| VLA | 1400 | 42 | 0.4 | $ 45\arcsec $ |
Integrated flux densities of 3C 236 (Table \[table:intflux\]); values above 143 MHz from [@Mack1997]:

| $ \nu $ \[MHz\] | $ S_{\mathrm{int}} $ \[mJy\] |
|---|---|
| 143 | $ 17744 \pm 3568.7 $ |
| 326 | $ 13132 \pm 140 $ |
| 609 | $ 8227.7 \pm 90.8 $ |
| 2695 | $ 3652 \pm 90.8 $ |
| 4750 | $ 2353.5 \pm 71.2 $ |
| 10550 | $ 1274.7 \pm 41.7 $ |
Results {#res}
=======
The total intensity structure of 3C 236 at 143 MHz
--------------------------------------------------
The intermediate resolution image ($ 23.8\arcsec \times 19.2\arcsec $) of 3C 236 obtained with LOFAR is shown in the main panel of Fig. \[3C236:map\], while the insets zoom in on the two lobes and show their morphology as it appears in our high resolution LOFAR image. Figure \[3C236:spix\] shows the contour levels of the emission at lower resolution ($ 50\arcsec $), better emphasizing some of the new low surface brightness features detected by LOFAR. An overview of the images is presented in Table \[table:imgs\]. The map presented in Figure \[3C236:map\] shows some residual artifacts due to limitations in the calibration around the stronger sources (e.g., the central, very bright compact core of 3C 236), while the regions of diffuse emission are less affected.
Despite the huge source size (more than half a degree), the LOFAR observations recover the full extent of the source showing, for the first time, its structure at low frequencies and relatively high spatial resolution. The image reproduces well the main (previously known) morphological features of 3C 236 [@RefWorks:130].
The overall structure (about $40\arcmin$ in size, corresponding to $ 4.32 $ Mpc) consists of two very elongated lobes. The north-west (NW) lobe radio emission is more extended transversely to the longitudinal symmetry axis (jet propagation direction) compared to the south-east (SE) lobe (about $ 4\arcmin $ and $ 2\arcmin $ in width towards the NW and SE respectively). At the resolution of the LOFAR observations, the central region including the restarted CSS is unresolved. The asymmetry of the large-scale structure, with the SE lobe extending farther away from the core compared to the NW, is also confirmed by the LOFAR images. Given the size of the source, projection effects are unlikely and hence the source asymmetry must be intrinsic.
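The quoted linear size follows directly from the angular scale given in the cosmology footnote ($ 1\arcsec = 1.8 $ kpc at the redshift of 3C 236); a one-line check:

```python
# Convert the ~40 arcmin angular extent of 3C 236 to a projected linear size,
# using the scale of 1 arcsec = 1.8 kpc quoted for the adopted cosmology.
ARCSEC_PER_ARCMIN = 60
KPC_PER_ARCSEC = 1.8  # from the cosmology footnote

angular_size_arcmin = 40
size_kpc = angular_size_arcmin * ARCSEC_PER_ARCMIN * KPC_PER_ARCSEC
size_mpc = size_kpc / 1000
print(f"{size_mpc:.2f} Mpc")  # → 4.32 Mpc
```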
The LOFAR images (especially the one at intermediate resolution) show that both lobes extend all the way to the core. The emission connecting the SE lobe to the core has very low surface brightness (around $ 0.5$ [mJy beam$^{-1}$]{}), nevertheless maintaining its width for the full length of the lobe (see the intensity contours in Fig. \[3C236:spix\]). There are no signs of spurs or very extended emission transverse to the lobe extent, with the exception of the extension to the south in the NW lobe, right next to the core (Figure \[3C236:spix\]). This extension was earlier seen at higher frequencies, although much weaker [@RefWorks:130]. It is reminiscent of structures created by back-flows of plasma deposited in the lobe by the jet seen in other [e.g., X-shaped, @Leahy1984; @Hardcastle2019; @Cheung2009; @Saripalli2018] radio galaxies.
The high spatial resolution images of the lobes (seen in the insets of Fig. \[3C236:map\]) show that in the NW lobe the leading edge does not host a hotspot, only a diffuse enhancement in surface brightness. However, as first noted by [@RefWorks:130], there is a compact region in the middle of the lobe, confirmed by the LOFAR image (Fig. \[3C236:map\]). This inner lobe region is split into two clumps (marked by a dashed ellipse in Figure \[3C236:map\]), the leading one of which is probably a jet termination/hotspot. Structures of this type are seen in other objects [cf. @Malarecki2013]. The location of the hotspot within the lobe suggests that it may trace a separate event of source activity propagating through the lobe, or an interruption (flicker) in the accretion history of the activity episode that produced the large-scale lobes. At the end of the SE lobe, a double hotspot appears to be present (see the bottom right inset in Fig. \[3C236:map\]). For the first time, we observe that the southern hotspot of the pair has itself a double structure, with two distinct brightness peaks (labeled H2 and H3 in the lower right inset of Figure \[3C236:map\]). This may be a sign of a jet interaction with IGM gas [e.g. @Lonsdale1986], possibly indicating that the jet used to terminate at the location of the H1 hotspot, with the termination point then moving to H2 and currently located at H3. This is consistent with the hypothesis that the most compact hotspot is where the jet terminates at the current epoch [@Hardcastle2001]. It would also explain why the SE lobe has a steeper spectrum along its northern edge (discussed below). It is also possible that H1 and H2 are created by lateral emission of plasma from the H3 termination shock. In the 3CR sample, one-sixth of the sources have multiple hotspots; the statistics vary depending on the definitions used and the source samples under consideration [@Valtaoja1984].
The differences in the structure of the lobes in 3C 236 suggest that they are not only the results of intermittence in the activity and/or changes in the position angle of the jet. Other effects due to e.g. the propagation of the jets in the inner region must have affected the large-scale morphology.
Given the overall size of the source (more than half a degree), several unresolved background sources are embedded in the lobe emission. Their positions are noted in [@Mack1997]. Some of these sources are relatively bright, but we find that they do not present an issue for our analysis.
![Spectral index error (top panel) and spectral index profiles along the paths shown in Fig. \[3C236:spix\]. The profile paths start in the inner part of the lobes. The shaded area in the spectral profile plots represents the spectral index error.[]{data-label="3C236:profiles"}](3C236_spixerr-eps-converted-to.pdf "fig:"){width="50.00000%"}\
![Spectral index error (top panel) and spectral index profiles along the paths shown in Fig. \[3C236:spix\]. The profile paths start in the inner part of the lobes. The shaded area in the spectral profile plots represents the spectral index error.[]{data-label="3C236:profiles"}](NW_lobe_spix_profile.pdf "fig:"){width="50.00000%"} ![Spectral index error (top panel) and spectral index profiles along the paths shown in Fig. \[3C236:spix\]. The profile paths start in the inner part of the lobes. The shaded area in the spectral profile plots represents the spectral index error.[]{data-label="3C236:profiles"}](SE_lobe_spix_profile.pdf "fig:"){width="50.00000%"}
Spectral index {#spec_ind}
--------------
We have derived a low frequency spectral index ($ S \propto \nu^{\alpha} $) map between 143 and 609 MHz using the images listed in Table \[table:imgs\], implementing the following procedure. The 609 MHz image [from @Mack1997] was first re-gridded to a J2000 epoch, then we registered the lowest resolution 143 MHz and the 1400 MHz image to the same pixel scale as the 609 MHz image. Finally, we convolved the images with a Gaussian kernel resulting in a circular PSF of $ 50\arcsec $. The re-gridding and convolution were performed using the [imregrid]{} and [imsmooth]{} [CASA]{} [@McMullin2007] tasks.
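As a cross-check of the convolution step, the kernel needed to reach the common $ 50\arcsec $ resolution follows from the quadrature rule for convolved Gaussian beams. A minimal sketch (beams treated as circular for simplicity, using the major axis of the intermediate-resolution LOFAR beam from Table \[table:imgs\]):

```python
import math

def kernel_fwhm(current_beam_arcsec, target_beam_arcsec):
    """FWHM (arcsec) of the Gaussian kernel needed to smooth a map with a
    circular beam of `current_beam_arcsec` to `target_beam_arcsec`,
    using the quadrature rule for convolved Gaussians."""
    if target_beam_arcsec < current_beam_arcsec:
        raise ValueError("cannot deconvolve to a smaller beam")
    return math.sqrt(target_beam_arcsec**2 - current_beam_arcsec**2)

# Smoothing the 23.81" x 19.18" LOFAR map (major axis used here) to 50":
print(round(kernel_fwhm(23.81, 50.0), 1))  # → 44.0
```

In practice the [imsmooth]{} task handles the elliptical-beam case internally when asked for a target resolution; the sketch above only illustrates the underlying arithmetic.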
Producing spectral index images from datasets taken with different telescopes is always subject to caveats. In particular, the observations at different frequencies must be sensitive to similar spatial structures. The UV-coverage of the datasets can help in checking whether this condition is fulfilled. The UV-range of the mid-resolution LOFAR image (from which the lowest resolution LOFAR map is obtained by convolution) is well matched to that of the WSRT image (upper cut of around $ 7\,\mathrm{k}\lambda $), so the low frequency spectral index derivation is unaffected by spatial filtering. The NVSS image has relatively few short spacings, i.e., it is less sensitive to extended emission [@RefWorks:139; @Jamrozy2004]. We keep this limitation in mind when interpreting our (spectral ageing) results.
The flux density scale was taken to be uncertain at a level of 20% for the LOFAR [@Shimwell2017] and 5% for the WSRT [@Mack1997] and VLA [@RefWorks:139] observations, respectively. We added these uncertainties in quadrature to the r.m.s. errors of the corresponding maps.
We have derived the $ \alpha_{143}^{609} $ spectral index using the standard expression: $ \alpha = \log(S_{1} / S_{2}) / \log(\nu_{1} / \nu_{2}) $. We show the results in Figure \[3C236:spix\], restricted to pixels having flux density values above $5\sigma$ in the corresponding images. The spectral index errors were obtained by error propagation of the flux density errors in the corresponding images:
$$\delta\alpha = \frac{1}{\ln\frac{\nu_{1}}{\nu_{2}}}\sqrt{\left(\frac{\delta S_{1}}{S_{1}}\right)^{2} + \left(\frac{\delta S_{2}}{S_{2}}\right)^{2}}$$
where $ \delta S $ represents the flux density measurement error. The resulting spectral index error map is shown in Fig. \[3C236:profiles\], top panel.
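The two expressions above translate directly into code. A minimal sketch (the NW-lobe integrated flux densities from Table \[table:spix\_regs\] are used purely as example inputs):

```python
import math

def spectral_index(s1, s2, nu1, nu2):
    """Spectral index alpha, with S ∝ nu^alpha, from fluxes s1, s2 at nu1, nu2."""
    return math.log(s1 / s2) / math.log(nu1 / nu2)

def spectral_index_error(s1, ds1, s2, ds2, nu1, nu2):
    """Propagated error on alpha from the flux errors ds1, ds2."""
    return (1 / abs(math.log(nu1 / nu2))) * math.hypot(ds1 / s1, ds2 / s2)

# NW lobe: 5030 ± 80 mJy at 143 MHz and 1930 ± 390 mJy at 609 MHz
alpha = spectral_index(5030, 1930, 143, 609)
err = spectral_index_error(5030, 80, 1930, 390, 143, 609)
print(f"alpha = {alpha:.2f} +/- {err:.2f}")  # → alpha = -0.66 +/- 0.14
```

Note that the error budget is dominated by the 20% flux-scale uncertainty of the 609 MHz map, which is why the per-region errors in Table \[table:spix\_regs\] cluster around $ \pm 0.14 $.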
We have also measured flux densities in eleven characteristic regions (along the lobes, encompassing the lobe termination regions and across the NW lobe). These numbered regions are listed in Table \[table:spix\_regs\] and indicated in Fig. \[3C236:spix\]. For each region we have computed the spectral index to investigate whether there are differences with the values reported in the spectral index map (which samples smaller spatial scales). We find no significant deviations, indicating the robustness of the spectral index map. We also show the spectral index profiles along both lobes; the profile paths are indicated as dashed lines in Fig. \[3C236:spix\].
Spectral index and spectral age fitting results (Table \[table:spix\_regs\]). Top: JP model fits to the numbered regions; bottom: CI model fits to the integrated lobe flux densities.

| Region ID | $ \alpha_{143}^{609} $ | Model | $ \alpha_{\mathrm{inj}} $ | Spectral age \[Myr\] | $ \chi^{2}_{\mathrm{red}} $ |
|---|---|---|---|---|---|
| 1 | $ -0.82 \pm 0.14 $ | JP | $ - $ | $ - $ | $ - $ |
| 2 | $ -0.74 \pm 0.14 $ | JP | $ -0.57 $ | $ 51.3^{+7.2}_{-6.9} $ | $ 0.01 $ |
| 3 | $ -0.64 \pm 0.14 $ | JP | $ -0.20 $ | $ 140.3^{+6.4}_{-1.5} $ | $ 0.89 $ |
| 4 | $ -0.32 \pm 0.14 $ | JP | $ -0.20 $ | $ 116.7^{+3.5}_{-6.4} $ | $ 2.63 $ |
| 5 | $ -0.60 \pm 0.15 $ | JP | $ - $ | $ - $ | $ - $ |
| 6 | $ -0.60 \pm 0.15 $ | JP | $ -0.20 $ | $ 153.2^{+6.3}_{-6.3} $ | $ 3.56 $ |
| 7 | $ -0.88 \pm 0.14 $ | JP | $ -0.54 $ | $ 159.2^{+2.5}_{-2.6} $ | $ 0.00 $ |
| 8 | $ -0.70 \pm 0.14 $ | JP | $ -0.20 $ | $ 135.3^{+5.5}_{-1.9} $ | $ 0.06 $ |
| 9 | $ -0.67 \pm 0.14 $ | JP | $ -0.40 $ | $ 83.8^{+3.5}_{-5.3} $ | $ 0.03 $ |
| 10 | $ -0.68 \pm 0.14 $ | JP | $ -0.40 $ | $ 91.6^{+2.8}_{-8.0} $ | $ 0.12 $ |
| 11 | $ -0.59 \pm 0.14 $ | JP | $ -0.40 $ | $ 75.6^{+3.1}_{-9.6} $ | $ 0.67 $ |

| Region | $ S_{143} $ \[mJy\] | $ S_{609} $ \[mJy\] | $ S_{1400} $ \[mJy\] | Model | $ \alpha_{\mathrm{inj}} $ | Spectral age \[Myr\] | $ \nu_{\mathrm{br}} $ \[MHz\] | $ \chi^{2}_{\mathrm{red}} $ |
|---|---|---|---|---|---|---|---|---|
| NW lobe | $ 5030 \pm 80 $ | $ 1930 \pm 390 $ | $ 730 \pm 150 $ | CI | $ -0.55 $ | $ 129.1^{+41.2}_{-30.6} $ | $ 479 $ | $ 0.50 $ |
| SE lobe | $ 3580 \pm 100 $ | $ 1260 \pm 250 $ | $ 580 \pm 120 $ | CI | $ -0.55 $ | $ 117.1^{+41.1}_{-30.9} $ | $ 582 $ | $ 0.01 $ |
The spectral index map shows that the outer lobe regions have spectral index values (between $ -0.5 $ and $ -0.65 $) typical of regions with ongoing particle acceleration. This is also observed in the embedded hotspot region in the NW lobe and in the hotspot in the SE lobe (although here the spectral index is around $ 0.1 $ steeper). The spectral index generally steepens (see the bottom two panels in Figure \[3C236:profiles\]) toward the edges of the lobes and toward the core (consistent with the overall FRII source morphology), indicating loss of energy and (spectral) ageing of the particle populations as they back-flow in the direction of the core. However, curiously, the SE lobe has a region of very flat spectral index at its core-facing end; a hint of this region is also observed in the higher frequency spectral index maps of [@RefWorks:148]. These trends are shown in the spectral index profiles of the lobes presented in Figure \[3C236:profiles\]. The SE lobe has the flattest spectral index along its southern edge. There is a transition to steeper spectral index values of $ \sim -0.9 $ along the northern edge of its outer region. While the interpretation of the spectral index map in this region is not straightforward, the observed steepening could be real at least in some parts and warrants further investigation in future studies of this object.
[@RefWorks:148] derive $ \alpha_{326}^{609} $ spectral indices for the NW lobe ranging from around $ -1 $ in the outer regions to $ -1.2 $ going toward the core. Our $ \alpha_{143}^{609} $ spectral index map shows much flatter spectral index values, around $ -0.5 $ to $ -0.6 $ (cf. the spectral profiles in Figure \[3C236:profiles\]), meaning that LOFAR detects the particle population related to the primary injection of energy into the lobe. For the SE lobe, the agreement between our spectral index values and those derived by [@RefWorks:148] is better, although we still find flatter values, with $ \alpha_{143}^{609} \sim -0.6 $ versus their $ \alpha_{326}^{609} \sim -0.8 $ for the outer lobe. We have derived the spectral index for several regions in the source (Table \[table:spix\_regs\]) to test whether the values in our spectral index map are reliable; the spectral index values per region match those from the map. High resolution mapping helps to disentangle the detailed spectral behaviour.
Source energetics and radiative ages
------------------------------------
Before discussing the 3C 236 energetics, we estimate the magnetic field value throughout the source. We make the assumption that the field structure in the lobes is uniform and oriented perpendicular to the line of sight. We use cylindrical geometry for the lobes, and calculate the magnetic field strength assuming that equal energy is contained in the relativistic particles and the field (equipartition assumption). Furthermore, we set the ratio of electrons to protons to unity, as well as the filling factor of the lobes. We adopt limits of the spectral range from 10 MHz to $ 10^{5} $ MHz, and we set the spectral index of the radiation in this range to $ \alpha = -0.85 $ (taking into account the observed values at low frequencies, as well as assuming spectral steepening to higher frequencies). Using the relation by [@Miley1980], calculating at a frequency of 143 MHz (Table \[table:fluxes\]) and averaging over both lobes, we obtain $ \mathrm{B} = 0.75 \, \mathrm{\mu} $G for the equipartition magnetic field strength. As was noted by [@Brunetti1997], the equipartition field calculated in this manner should be corrected, to take into account a low energy cut-off value for the spectrum of the particles and a value for the particle spectral index at injection time. With $ \gamma_{min} = 200 $, and $ \mathrm{\alpha}_{\mathrm{inj}} = -0.75 $ (average low frequency spectral index in the lobes), we find $ \mathrm{B} = 1.28 \, \mathrm{\mu} $G for the average source magnetic field, a value we will be using further in our analysis. This value of the magnetic field is lower than the CMB equivalent magnetic field at the source redshift ($ B_{CMB} = 3.93 \, \mathrm{\mu} $G). Thus, the dominant energy loss mechanism of the relativistic particles generating the synchrotron radio emission will be inverse Compton scattering off the omnipresent CMB photons.
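The quoted CMB equivalent field follows from the standard relation $ B_{\mathrm{CMB}} \approx 3.25\,(1+z)^{2}\ \mu $G. A quick check (the redshift value $ z = 0.1005 $ for 3C 236 is an assumption, as it is not stated in this section):

```python
# CMB equivalent magnetic field at the source redshift, using the standard
# relation B_CMB ≈ 3.25 (1+z)^2 microgauss. The redshift is an assumed value.
z = 0.1005  # assumed redshift of 3C 236
b_cmb = 3.25 * (1 + z) ** 2
print(f"B_CMB = {b_cmb:.2f} uG")  # → B_CMB = 3.94 uG (the text quotes 3.93)
```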
The spectral ages of the emitting regions are calculated using two different approaches: first for the regions defined in Figure \[3C236:spix\], and second for measurement regions encompassing the NW and SE lobes separately, avoiding embedded point sources and measuring out to the $ 5\sigma $ contour in the 143 MHz image. We have used the [fitintegrated]{} and [fitcimodel]{} tasks of the [BRATS]{}[^3] software package [@Harwood2013; @Harwood2015] for the two cases, respectively. The fitting was performed using flux density measurements at three different frequencies, using the low resolution LOFAR image and the WSRT and VLA images listed in Table \[table:imgs\]. With the [fitintegrated]{} task we fitted a Jaffe-Perola [JP, @Jaffe1973] model, while the [fitcimodel]{} task fits a continuous injection (CI) model to the integrated flux densities of the source regions under consideration. The CI model was used when modelling the lobes, under the assumption that they encompass regions where particle acceleration is ongoing, which cannot be stated for (all of) the smaller measurement regions. Although the models do not give intrinsic ages [@Harwood2017], they are useful to address the source properties. The injection spectral index was kept fixed for each fitting run, and the fitting was performed over a search grid spanning values from $ \alpha_{\mathrm{inj}} = -0.2 $ to $ \alpha_{\mathrm{inj}} = -0.9 $. The derived spectral ages and spectral break frequencies (in the CI fit case) are shown in Table \[table:spix\_regs\]. The average reduced $ \chi^{2} $ measure for the goodness of fit (one degree of freedom) is given in the last column. The fit results are shown in Figure \[CI\_fits\].
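For orientation, the fitted break frequencies can be converted to radiative ages with the standard single-burst relation. A sketch in Python (the $ 1590 $-coefficient formula is a textbook approximation, not the BRATS model fit itself, and the redshift value is an assumption; field values are those derived above):

```python
import math

def sync_age_myr(b_ug, b_cmb_ug, nu_br_ghz, z):
    """Radiative age in Myr from the standard break-frequency relation
    t = 1590 * sqrt(B) / ((B^2 + B_CMB^2) * sqrt(nu_br * (1 + z))),
    with B and B_CMB in microgauss and nu_br in GHz."""
    return 1590 * math.sqrt(b_ug) / ((b_ug**2 + b_cmb_ug**2)
                                     * math.sqrt(nu_br_ghz * (1 + z)))

# NW lobe: B = 1.28 uG, B_CMB = 3.93 uG, nu_br = 479 MHz; z = 0.1005 (assumed)
age = sync_age_myr(1.28, 3.93, 0.479, 0.1005)
print(f"{age:.0f} Myr")  # within ~15% of the fitted 129 Myr CI-model age
```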
![CI model fits as reported in the lower section of Table \[table:spix\_regs\].[]{data-label="CI_fits"}](NW_fit.pdf "fig:"){width="50.00000%"}\
![CI model fits as reported in the lower section of Table \[table:spix\_regs\].[]{data-label="CI_fits"}](SE_fit.pdf "fig:"){width="50.00000%"}
The derived ages for individual regions indicate older plasma as one goes from the lobe outer edges toward the core, consistent with what would be expected if the main acceleration regions are located in the outer lobes. The injection spectral indices which best fit the data are not steep, indicating that LOFAR is probing particles which have retained their energy since their acceleration phase. We will discuss the spectral ages further in Section \[dis\].
The particle energy assuming equipartition can be expressed as [@RefWorks:148; @Pacholzcyk1970]
$$\mathrm{E}_{\mathrm{eq}} = \dfrac{7}{4}\left[(1+\mathrm{k})\mathrm{c}_{\mathrm{12}}\mathrm{P}\right]^{\frac{4}{7}}\left(\dfrac{\mathrm{\Phi} \mathrm{V}}{\mathrm{6\pi}}\right)^{\frac{3}{7}}$$
where $ \mathrm{\Phi} = 1 $ is the volume filling factor, $ \mathrm{V} $ the volume of the region filled with relativistic particles and fields, $ \mathrm{k} $ (= 1) the electron to proton ratio, $ \mathrm{P} $ the radio luminosity integrated over the earlier specified frequency range, for the regions under consideration, and $ \mathrm{c}_{\mathrm{12}} $ is a constant [in our case $ \mathrm{c}_{\mathrm{12}} = 1.6 \times 10^{7} $; @Pacholzcyk1970].
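The equipartition expression above can be written out directly. A sketch (the constant $ \mathrm{c}_{12} $ and the $ \mathrm{k} = 1 $, $ \mathrm{\Phi} = 1 $ choices follow the text; the example luminosity and volume are placeholders, and the unit system is set by the adopted $ \mathrm{c}_{12} $):

```python
import math

def equipartition_energy(p_lum, volume, k=1.0, phi=1.0, c12=1.6e7):
    """E_eq = (7/4) [(1+k) c12 P]^(4/7) (phi V / (6 pi))^(3/7),
    as given in the text; units follow the adopted c12 constant."""
    return 1.75 * ((1 + k) * c12 * p_lum) ** (4 / 7) \
                * (phi * volume / (6 * math.pi)) ** (3 / 7)

# The scaling is the robust part: E_eq ∝ P^(4/7) and ∝ V^(3/7).
e1 = equipartition_energy(1e37, 1e67)
assert abs(equipartition_energy(2e37, 1e67) / e1 - 2 ** (4 / 7)) < 1e-9
assert abs(equipartition_energy(1e37, 2e67) / e1 - 2 ** (3 / 7)) < 1e-9
```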
Assuming that the lobes are in pressure balance with the intergalactic medium (IGM), the relativistic gas pressure in the lobes, $ \mathrm{p}_{\mathrm{l}} $,[^4] should balance the gas pressure of the IGM ($ \mathrm{p}_{\mathrm{IGM}} = \mathrm{n}_{\mathrm{IGM}}\mathrm{kT} $), where $ \mathrm{k} $ is the Boltzmann constant and $ \mathrm{T} $ the temperature of the IGM in kelvin. Adopting $ \mathrm{T} = 10^{7} $ K, we can roughly estimate the IGM particle density as $ \mathrm{n}_{\mathrm{IGM}} = \frac{\mathrm{e}_{\mathrm{eq}}}{3\mathrm{kT}} $ [@RefWorks:148; @Hunik2016]; the resulting values are listed in Table \[table:fluxes\].
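Plugging the NW-lobe numbers from Table \[table:fluxes\] into $ n = \mathrm{e}_{\mathrm{eq}}/(3\mathrm{kT}) $ reproduces the quoted density to within a few percent (the small difference comes from rounding of the tabulated inputs); a minimal sketch:

```python
K_B = 1.380649e-23  # Boltzmann constant [J/K]
T_IGM = 1e7         # adopted IGM temperature [K]

# NW lobe values from Table [table:fluxes]
E_eq = 2.0e53       # equipartition energy [J]
V = 0.9e67          # lobe volume [m^3]

e_eq = E_eq / V                      # energy density [J m^-3]
n_m3 = e_eq / (3 * K_B * T_IGM)      # particle density [m^-3]
n_cm3 = n_m3 * 1e-6                  # convert to cm^-3
print(f"n_IGM ≈ {n_cm3:.1e} cm^-3")  # close to the tabulated 5.2e-5 cm^-3
```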
Measured and derived source properties (Table \[table:fluxes\]):

| ID | $\mathrm{S}_{\mathrm{143}}$ \[Jy\] | $\mathrm{L}_{\mathrm{143}}$ \[W$\mathrm{Hz}^{-1}$\] | $ \mathrm{E}_{\mathrm{eq}} $ \[J\] | $ \mathrm{V} $ \[$ \mathrm{m}^{3} $\] | $ \mathrm{n}_{\mathrm{IGM}} $ \[cm$ ^{-3} $\] | $ \mathrm{P} $ \[Pa\] |
|---|---|---|---|---|---|---|
| NW lobe | $ 5.0 $ | $ 1.2 \times 10^{26} $ | $ 2.0\times10^{53} $ | $ 0.9\times10^{67} $ | $ 5.2 \times 10^{-5} $ | $ 7.2\times10^{-15} $ |
| Core | $ 9.0 $ | $ 2.2 \times 10^{26} $ | - | - | - | - |
| SE lobe | $ 3.6 $ | $ 8.9 \times 10^{25} $ | $ 1.7\times10^{53} $ | $ 1.0\times10^{67} $ | $ 4.2\times10^{-5} $ | $ 5.8\times10^{-15} $ |
Discussion {#dis}
==========
Our LOFAR imaging recovers the source structure as described previously in the literature [@RefWorks:130; @Mack1997] and discussed in the previous section of this work. Owing to the high surface brightness sensitivity of our LOFAR images, we can now trace the SE lobe emission all the way to the core even in the intermediate resolution maps. We note that the NW lobe is shorter than the SE lobe. Interestingly, this asymmetry is inverted for the small-scale emission of the CSS core: there, the NW extension is longer than the SE one, as seen by [@RefWorks:258]. They also speculate that the dust lane imaged by HST close to the core may be part of the material that helps collimate the radio emission.
As was noted in Section \[intro\], there is a hint of a slight wiggle of the ridge line connecting the core and the outer lobe edges. It is visible in Figure \[3C236:map\] as a departure from the symmetry axis in the NW lobe, where the inner hotspot and the outer diffuse region are not on a straight line to the core. This may be due to the wobble of the jet as it drills through the IGM/ICM over time. In this context, the appearance of the SE lobe hotspot is intriguing. It was described as a double hotspot in the literature [@RefWorks:130]; now, using LOFAR, we can see that it is in fact a triple hotspot; the southern component is split in two (Figure \[3C236:map\]). It may be that the jet was deflected at the hotspot producing the observed morphology, or that the jet working surface has shifted over time. [@Lonsdale1986] have suggested that such hotspots can originate from a flow redirection in the primary hotspot.
[@RefWorks:228] have classified 3C 236 as a double-double [@Saikia2009; @Orru2015] radio galaxy, since the restarted activity in the core has extended emission aligned with the large-scale lobes. In this scheme, 3C 236 may even be a “triple-double” radio galaxy, with the inner hotspot in the NW lobe signifying a stall in the jet, or a short sputter in the accretion episode responsible for the creation of the large-scale lobes. In this view, the diffuse outer region of the NW lobe is the (still active) remnant hotspot of the previous activity episode, and the embedded hotspot is produced by the jet expelled during the re-activation episode, which is still advancing through the lobe material. Within this context, the wiggle noticeable in the source morphology (mentioned above) can be explained by a slight shift in the jet axis during the jet stall/sputter event.
The lobe length asymmetry and the position of the hotspots may be caused by a difference in the density of the material on opposite sides of the host galaxy, at the position of the lobes, with a higher density on the NW side. This is tentatively supported by the particle density we have derived (Table \[table:fluxes\]), which is $ \sim 3 $ times higher than the medium density obtained by [@Mack1997], i.e. of the same order and hence broadly comparable with their result.
Owing to their sizes, GRGs can be used as probes of the physical conditions of the IGM. [@Malarecki2013] have performed such a study on a sample of 19 GRGs; we are in agreement with the values they have derived for the mean lobe pressures in their sample (ranging from $ 1.34\times10^{-15} $ to $ 1.91\times10^{-14} $ Pa). In a subsequent study of the same sample @Malarecki2015 find that GRGs tend to occur in sparse environments (such as the one of the 3C 236 host galaxy), and they show tentatively that shorter lobes expand in regions of (on average) higher galaxy density. This may be relevant to explain the lobe morphology of 3C 236. Further studies on the immediate environment of the host galaxy of 3C 236 should test this hypothesis. If true, the environment should be denser to the north-west, where the shorter lobe extends. On the other hand, the large-scale asymmetry and the (reverse) small-scale asymmetry may have a physical origin in asymmetric host galaxy properties.
Recent studies of the GRG NGC 6251 by Cantwell et al. (submitted) have found ages for its lobes of less than 50 Myr, and show that the newly discovered faint lobe extensions have ages greater than 200 Myr. The radiative ages we derive for the lobes of 3C 236 fall in between the ages derived for the different regions of NGC 6251. Given the morphological difference between the lobes of these two GRGs (the lobes of NGC 6251 are far less confined, and it is an FRI radio galaxy), the results of the two studies are consistent. The lobe pressure values they find ($ 4.9\times10^{-16} $ to $ 4.8\times10^{-13} $ Pa) are also consistent with our findings. Our derivations of the lobe ambient medium assume that the lobes are in pressure balance with the IGM. One may find this assumption objectionable, as radio galaxy lobes are often observed to be under-pressured. However, [@Croston2014] have argued that FRIs can be in pressure balance if there is entrainment of surrounding material by the jet flow. Recent reports that FRI lobes are energetically dominated by protons [@Croston2018] seem to support the entrainment scenario. Similarly, [@harwood2016; @harwood2017b] argue that FRIIs can be brought back into agreement by considering the steeper than expected injection index which is sometimes derived using model fits to data from low-frequency observations.
Our 3C 236 spectral index map, with the highest spatial resolution obtained so far at these frequencies, allows us for the first time to associate morphological and spectral features. We clearly see a flatter (compared to the surrounding regions) spectral index associated with the inner hotspot of the NW lobe, hinting at it being a particle acceleration region. Also, while previous spectral index maps (Figure 3, top panel in [@RefWorks:148]) only weakly hinted at the spectral index steepening toward the lobe edges, we can now better trace that transition. The curious flattening of the spectral index in the inner SE lobe which was hinted at previously [@RefWorks:148], now stands out. It can be a signature of the interaction between the jet inflating the SE lobe and the IGM at that position; the spectral index indicating that acceleration is ongoing.
In general, the spectral ages we have derived do not agree with the values published by [@RefWorks:148]; these authors obtain lower age estimates (ranging from less than $ 8 $ Myr to $ 20 $ Myr). The only exception is region two, where our age is broadly comparable with their estimate (they obtain an age of around $ 20 $ Myr for that region of the source, while our value is around $ 50 $ Myr, both using the JP model). The ages we derive by measuring the integrated flux density of the lobes are substantially higher than what [@RefWorks:148] derive. The fact that the age for region eight using the single-injection JP model is comparable with the age for the NW lobe derived using the CI model suggests that our ages are a more robust measure of the characteristic age compared to previous studies. The break frequencies in our model lobe spectra are located in the lower frequency range of the data used by [@RefWorks:148], which is consistent with the fact that we measure flatter spectral indices in the lobes. LOFAR can trace the spectral flattening towards still lower frequencies, and thus characterize the spectral break.
The age difference between these studies is most likely due to the fact that LOFAR measures the emission from the oldest particle population, affecting our estimates. It should also be noted that, due to the uncertainties in the assumed values of the input parameters (especially the magnetic field value; [@RefWorks:148] use values between 0.3 $ \mu $G and 0.7 $ \mu $G), uncertainties in the model, the mapping resolution, as well as the sparse frequency coverage, these ages should be taken as limits to the actual values.
Our age estimates support a scenario where the lobes are built up by a jet advancing with a speed of around $ 0.1\mathrm{c} $ [as argued by @RefWorks:130], i.e. that speed is required to inflate the lobes to their present linear size in a time broadly consistent with their derived ages (overall ages based on the CI model). Further, as was already mentioned in the introduction section of this work, [@RefWorks:228] suggest (based on their HST studies of star formation in the nucleus of the host galaxy) an age of the large-scale lobes in the range of $ 10^{8} $ to $ 10^{9} $ years, in line with our findings.
Conclusion {#con}
==========
We have presented new LOFAR observations of the GRG 3C 236. We have studied this radio galaxy for the first time at a resolution of up to $ 6\arcsec $ at 143 MHz. Also, we have derived the highest resolution spectral index maps to date (at $ 50 \arcsec $ resolution). Our main conclusions are:
- We observe an inner hotspot in the north-western lobe, separate from its more diffuse outer region. It is also discernible in the spectral index map, as a region undergoing more recent particle acceleration (flatter spectral index values). This detection, taken together with the overall source morphology, may be an indication of a short interruption of the accretion episode/jet sputter.
- The brighter component of the SE lobe double hotspot is resolved in two components, making this feature a triple hotspot.
- The source energy / pressure balance with the IGM suggests that confinement by the IGM may be responsible for the morphology of the lobes; the NW lobe is probably confined and the SE one has expanded in a lower density medium, reflected in the somewhat steeper spectrum of its outer region / northern edge.
- The derived spectral ages are consistent with a jet advancing at $ 0.1 $c in the surrounding medium of the host galaxy.
LOFAR is a valuable instrument for studies of giant radio sources. Its sensitivity to low surface brightness features, combined with its capability for high resolution imaging at low frequencies, offers an unprecedented detailed view of source emission regions containing low energy plasma. This is useful to uncover previously unknown features even in targets which have been studied for decades, such as 3C 236.
LOFAR, the Low Frequency Array designed and constructed by ASTRON, has facilities in several countries, that are owned by various parties (each with their own funding sources), and that are collectively operated by the International LOFAR Telescope (ILT) foundation under a joint scientific policy. We would like to thank Karl-Heinz Mack for providing fits images for the previously published WSRT map. RM gratefully acknowledges support from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Advanced Grant RADIOLIFE-320745. MB acknowledges support from INAF under PRIN SKA/CTA ‘FORECaST’. GJW gratefully acknowledges support from the Leverhulme Trust. SM acknowledges funding through the Irish Research Council New Foundations scheme and the Irish Research Council Postgraduate Scholarship scheme. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.\
This research has made use of APLpy, an open-source plotting package for Python hosted at http://aplpy.github.com. This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration, 2019).
[^1]: We assume a flat $\Lambda$CDM cosmology with $ H_{0} $ = 67.8[$\,$km$\,$s$^{-1}$Mpc$^{-1}$]{}and $ \Omega_{m} $ = 0.308, taken from the cosmological results of the full-mission Planck observations , and use the cosmology calculator of [@Wright2006]. At the redshift of 3C 236, 1$\arcsec$ = 1.8 kpc.
[^2]: NVSS stands for the NRAO VLA Sky Survey carried out at a frequency of 1.4 GHz [@RefWorks:139]
[^3]: http://www.askanastronomer.co.uk/brats/
[^4]: $ \mathrm{p}_{\mathrm{l}} = (\gamma - 1)\mathrm{e}_{\mathrm{eq}} $, where $ \mathrm{e}_{\mathrm{eq}} = \mathrm{E}_{\mathrm{eq}} / \mathrm{V} $ is the lobe energy density and $ \gamma $ is the ratio of specific heats; for relativistic gas $ \gamma = \frac{4}{3} $
---
abstract: 'Motivated by recent progress on field-induced phase transitions in quasi-one-dimensional quantum antiferromagnets, we study the phase diagram of $S=1/2$ antiferromagnetic Heisenberg chains with Ising anisotropic interchain couplings under a longitudinal magnetic field via large-scale quantum Monte Carlo simulations. The interchain interactions are shown to enhance longitudinal spin correlations and stabilize an incommensurate longitudinal spin density wave order at low temperatures. With increasing field, the ground state changes to a canted antiferromagnetic order until the magnetization fully saturates above a quantum critical point controlled by the $(3+2)$D XY universality. Upon increasing temperature in the quantum critical regime, the system experiences a fascinating dimension crossover to a universal Tomonaga-Luttinger liquid. The calculated NMR relaxation rate $1/T_1$ indicates that this Luttinger liquid behavior survives over a broad field and temperature regime. Our results determine the global phase diagram and quantitative features of quantum criticality of a general model for quasi-one-dimensional spin chain compounds, and thus lay a concrete foundation for the study of these materials.'
author:
- Yuchen Fan
- Jiahao Yang
- Weiqiang Yu
- Jianda Wu
- Rong Yu
title: ' Phase diagram and quantum criticality of Heisenberg spin chains with Ising-like interchain couplings – Implication to YbAlO$_3$ '
---
[*Introduction. *]{} In low-dimensional correlated electron systems strong quantum fluctuations give rise to quantum phase transitions (QPTs) [@Sachdev_book:2011] and a number of exotic quantum phenomena, such as unconventional superconductivity [@Lee_RMP:2006; @Cao_Nat:2018], non-Fermi liquid behavior [@Stewart_RMP:2001; @Loehneysen_JLTP:2010], and quantum spin liquids [@Zheng_PRL:2017]. In the past decade, tremendous progress has been made in understanding the nature of QPTs and associated emerging phenomena in quasi-one-dimensional (Q1D) antiferromagnets. These include the $E_8$ symmetry [@Zam_E8:1989; @Coldea_Sci:2010; @Wu_E8:2014], many-body string excitations [@Wang_Nat:2018; @Wang_PRL:2019] and novel quantum criticality [@Faure_NP:2018; @Cui_PRL:2019] in transverse field Ising chains, and Bose-Einstein condensation (BEC) and glassy phases in coupled antiferromagnetic (AFM) chains [@Zapf_RMP:2014; @Yu_Nat:2012].
As a paradigmatic model for 1D quantum antiferromagnets, the $S=1/2$ Heisenberg chain is well described by a Tomonaga-Luttinger liquid (TLL), where both the longitudinal and transverse spin correlation functions follow algebraic decay.[@Giamarchi_book:2004] Under a magnetic field, the staggered transverse correlations are always dominant over the longitudinal ones, and a canted AFM order with staggered transverse correlations (denoted as the TAF order) is stabilized when interchain couplings become relevant. In systems with an Ising anisotropy, besides the TAF phase which arises from a spin-flop mechanism [@Fisher_PRL:1974], the peculiar quantum fluctuations in the Ising anisotropic XXZ chain give rise to incommensurate modulation of the longitudinal spin correlations [@Haldane_PRL:1980] and can stabilize an incommensurate longitudinal spin density wave (LSDW) order [@Okunishi_PRB:2007]. This LSDW state has been recently observed in several Q1D antiferromagnets [@Kimura_PRL:2008; @Itoh_PRL:2008; @Grenier_PRB:2015; @Klanjsek_PRB:2015; @Shen_NJP:2019].
Recent inelastic neutron scattering (INS) measurements reveal quantum critical TLL behavior of a coupled $S=1/2$ chain compound YbAlO$_3$ with nearly isotropic (Heisenberg) intrachain exchange couplings [@Wu_NC:2019]. A surprising observation is an incommensurate AFM state induced by the applied magnetic field. In this phase, the modulation of the ordering wave vector is proportional to the magnetization, which is a characteristic of the LSDW order in coupled Ising anisotropic XXZ chains. This raises a puzzle about the origin of the incommensurate AFM order in Heisenberg chains. A clue from both experimental observation and theoretical analysis is the relevance of the Ising anisotropic interchain coupling [@Wu_NC:2019; @Agrapidis_PRB:2019]. A generic open question remains: how does the interchain Ising anisotropy affect the phase diagram and low-energy excitations of Heisenberg chains? This poses a major challenge to existing theories based on the interchain mean-field approximation, in which the interchain fluctuations are neglected [@Okunishi_PRB:2007].
To tackle these issues, in this letter we study the field-induced phase diagram of $S=1/2$ Heisenberg chains with Ising anisotropic interchain couplings by using large-scale quantum Monte Carlo (QMC) simulations. Our results unambiguously show that the interchain interactions enhance longitudinal spin correlations to stabilize an incommensurate LSDW. With increasing field, the ground state transforms to a TAF state, and is fully polarized above a quantum critical point (QCP) controlled by the (3+2)D XY universality. Increasing temperature from the QCP, the scaling of thermal energy and NMR relaxation rate demonstrate that the system experiences a clear dimension crossover to the universal TLL behavior, exhibiting rich physics and fine structure of the quantum criticality. We then propose NMR measurements as a means to probe the ground states and related low-energy excitations in YbAlO$_3$ and other Q1D antiferromagnets.
[*Model and method. *]{} We consider a model defined on a three-dimensional (3D) cubic lattice for the $S=1/2$ Heisenberg spin chains with weak interchain couplings of the XXZ type under a longitudinal ($z$-direction) magnetic field. The Hamiltonian reads as $$\begin{aligned}
\label{Eq:Ham}
H &=& {J_c}\sum\limits_{i} {{{\vec S}_i} \cdot {{\vec S}_{i+c}}} - g{\mu _B}H\sum\limits_i {S_i^z} \nonumber\\
&+& {J_{ab}}\sum\limits_{i,\delta=\{a,b\}}
\left[\varepsilon \left( {S_i^xS_{i+\delta}^x + S_i^yS_{i+\delta}^y} \right)
+ S_i^zS_{i+\delta}^z\right].\end{aligned}$$ Here $\vec{S}_i=\{S^x_i,S^y_i,S^z_i\}$ is an $S=1/2$ spin operator defined on site $i$. $J_c$ and $J_{ab}$ are respectively the intrachain (along $c$ axis) and interchain exchange couplings between the nearest neighbor spins. $\varepsilon$ is a parameter characterizing the spin anisotropy of the interchain coupling. $g$ is the gyromagnetic factor, ${\mu _B}$ is the Bohr magneton, and $H$ is the applied magnetic field. We take $J_c$ as the energy unit and define the reduced temperature $t=T/J_c$ and reduced field $h=g\mu_B H/J_c$. For Ising anisotropic interchain interaction, $\varepsilon<1$. Here we take $\varepsilon = 0.25$ and $J_{ab} = 0.2 J_c$ for demonstration. The effects of $\varepsilon$ and $J_{ab}$ on the phase diagram of the system will be discussed later. To study the model in Eq. we perform numerically exact quantum Monte Carlo (QMC) simulations based on the stochastic series expansion (SSE) algorithm [@Syljuasen_PRE:2002; @Alet_PRE:2005]. In the simulations, the largest system size is $32 \times 32 \times 256$ and the lowest temperature accessed is $t=0.003$.
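As a cross-check of such simulations, the Hamiltonian above can be built explicitly on a tiny cluster and diagonalized exactly. The following Python sketch is illustrative only (it is not the SSE QMC code used here; the cluster size, open boundary conditions, and dense-matrix construction are assumptions chosen for readability):

```python
import numpy as np

# Spin-1/2 operators
Sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
Sy = np.array([[0, -0.5j], [0.5j, 0]], dtype=complex)
Sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, site, n):
    """Embed a single-site operator at position `site` in an n-site Hilbert space."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

def build_H(L=2, Jc=1.0, Jab=0.2, eps=0.25, h=0.0):
    """Model Hamiltonian on an L x L x L cluster with open boundaries; chains run along z."""
    n = L**3
    idx = lambda x, y, z: (x * L + y) * L + z
    SX = [site_op(Sx, i, n) for i in range(n)]
    SY = [site_op(Sy, i, n) for i in range(n)]
    SZ = [site_op(Sz, i, n) for i in range(n)]
    H = np.zeros((2**n, 2**n), dtype=complex)
    for x in range(L):
        for y in range(L):
            for z in range(L):
                i = idx(x, y, z)
                H -= h * SZ[i]                       # Zeeman term
                if z + 1 < L:                        # intrachain Heisenberg bond
                    j = idx(x, y, z + 1)
                    H += Jc * (SX[i] @ SX[j] + SY[i] @ SY[j] + SZ[i] @ SZ[j])
                for dx, dy in ((1, 0), (0, 1)):      # interchain XXZ bonds
                    if x + dx < L and y + dy < L:
                        j = idx(x + dx, y + dy, z)
                        H += Jab * (eps * (SX[i] @ SX[j] + SY[i] @ SY[j])
                                    + SZ[i] @ SZ[j])
    return H
```

Because every term conserves the total magnetization, the resulting matrix commutes with $\sum_i S^z_i$, which provides a quick sanity check of the construction.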
![(a): Sketch of spin patterns along a chain in low-temperature phases. (b): Longitudinal spin structure factor at various fields. The splitting of the peak signals the LSDW order. (c): Relation between the incommensurate ordering wave vector and the magnetization in the LSDW phase. (d): Thermal phase diagram of the model in Eq. obtained by QMC simulations. Filled circles denote the phase boundaries. The order-disorder transition at each field is determined by the peak position of the temperature dependent specific heat data (Fig.S1 [@SM]), while transitions between ordered phases are determined by the change of ordering wave vectors in calculated spin structure factors. Also shown are the adapted experimental phase boundary data (open squares) for YbAlO$_3$, taken from Ref. [@Wu_NC:2019]. The filled and open triangles respectively show the calculated and adapted experimental crossover temperatures in the disordered regime close to the QCP at $h_c$. The dashed lines are linear fits.[]{data-label="Fig:1"}](Fig1.pdf){width="8.5cm"}
[*Phase diagram and the LSDW phase. *]{} Our main results are summarized in the phase diagram of Fig. \[Fig:1\](d). At low temperatures, three ordered phases appear sequentially with increasing field, and the spin patterns along a chain in these phases are illustrated in Fig. \[Fig:1\](a). An Ising AFM phase with ordered moments aligned in the $z$ direction is stabilized for $h<h_1\approx0.6$, and a LSDW state with incommensurate longitudinal spin correlations is stabilized in the field regime $h_1<h<h_2\approx0.89$. For $h>h_2$ the ground state becomes a TAF, which is a canted AFM state with staggered transverse spin correlations. Further increasing the field, the spins become fully polarized for $h>h_c\approx2.50$. The QPT at $h_c$ is continuous, while the transitions associated with the LSDW order at $h_1$ and $h_2$ are both first-order.
![ (a): Phase boundary (blue triangles) and crossover (red circles) near the QCP. The red solid line is a power-law fit to $t\sim (h_c-h)^{2/3}$, and the dashed lines are linear fits. The color scheme shows a 1D-3D crossover in the quantum critical regime. (b): Scaling of the critical fields $h_c(t)$ near the QCP, determined from susceptibility and correlation length data. The line is a fit $h_c-h_c(t)\sim t^{3/2}$.[]{data-label="fig:2"}](Fig2.pdf){width="8.5cm"}
To examine the nature of the ordered states, we calculate the normalized longitudinal and transverse spin structure factors $$\begin{aligned}
&& \mathcal{S}^{zz} (\mathbf{q}) = \frac{1}{N^2} \sum\limits_{ij} {e^{i\mathbf{q} \cdot ( \mathbf{r}_i - \mathbf{r}_j)}\left\langle S_i^z S_j^z \right\rangle }, \label{Eq:Szz} \\
&& \mathcal{S}^{xy} (\mathbf{q}) = \frac{1}{2N^2} \sum\limits_{ij} {e^{i\mathbf{q} \cdot ( \mathbf{r}_i - \mathbf{r}_j)}\left\langle S_i^x S_j^x + S_i^y S_j^y \right\rangle }. \label{Eq:Sxy}\end{aligned}$$ The Ising AFM order is signaled by a peak of $\mathcal{S}^{zz}(\mathbf{q})$ at $\mathbf{q}=(\pi,\pi,\pi)$. When $h>h_1$ we find that the peak splits into two located at incommensurate $\mathbf{q}=(\pi,\pi,\pi\pm\Delta Q)$ (Fig. \[Fig:1\](b)). In this incommensurate AFM phase the ordering wave vector varies with increasing field, satisfying $|\Delta Q | = 2\pi\langle {m^z} \rangle$ (Fig. \[Fig:1\](c)), a characteristic reflecting the Q1D TLL physics of the LSDW state [@Okunishi_PRB:2007]. This confirms that the incommensurate order is indeed a LSDW, which in this model arises from the enhancement of longitudinal correlations by interchain Ising anisotropy. For $h>h_2$, the peak of $\mathcal{S}^{zz}$ is suppressed and the ground state changes to the TAF with a peak of $\mathcal{S}^{xy}(\mathbf{q})$ at $\mathbf{q}=(\pi,\pi,\pi)$, as shown in Fig.S2 of Supplemental Material (SM) [@SM].
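The peak splitting captured by the longitudinal structure factor above can be illustrated on a synthetic single-chain configuration. In this sketch (the modulated spin profile and chain length are illustrative assumptions, not QMC data), a staggered pattern modulated at $\Delta Q = 2\pi m^z$ yields peaks at $\pi \pm \Delta Q$, as in Fig. 1(b):

```python
import numpy as np

def S_zz(sz, qs):
    """Normalized longitudinal structure factor (single chain), evaluated
    on one classical spin configuration sz, i.e. |sum_r sz_r e^{iqr}|^2 / N^2."""
    N = len(sz)
    r = np.arange(N)
    return np.array([abs(np.sum(sz * np.exp(1j * q * r)))**2 / N**2 for q in qs])

# An LSDW-like snapshot: staggered order modulated at dQ = 2*pi*m^z
N, mz = 256, 0.125
dQ = 2 * np.pi * mz
r = np.arange(N)
sz = 0.5 * np.cos(np.pi * r) * np.cos(dQ * r)
qs = 2 * np.pi * np.arange(N) / N
S = S_zz(sz, qs)
# The commensurate peak at q = pi splits to q = pi -/+ dQ
# (grid indices 96 and 160 for this choice of N and m^z)
```

The relation $|\Delta Q| = 2\pi\langle m^z\rangle$ then follows directly from reading off the peak positions of such spectra at each field.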
![(a): Temperature evolution of the thermal energy at $h_c$. The solid and dashed lines are power-law fits $E\sim t^{\phi_E}$ with $\phi_E=5/2$ and $\phi_E=3/2$, respectively. (b): Windowing estimate of $\phi_E$ showing a 1D-3D crossover. (c): Scaling of susceptibility showing effective 1D quantum critical TLL behavior at high temperatures using $h<h_c$ data. (d): Scaling of susceptibility showing the $(3+2)$D nature of the QCP at low temperatures using $h>h_c$ data.[]{data-label="fig:3"}](Fig3.pdf){width="8.5cm"}
[*Quantum criticality.*]{} The QPT at $h_c$ takes place when the TAF order is suppressed. Since the TAF order breaks the spin $U(1)$ symmetry, the transition can be viewed as a magnetic BEC [@Giamarchi_NP:2008; @Zapf_RMP:2014] with a dynamical exponent $z=2$. The QCP then belongs to the $(3+2)$D XY universality class. To check this, we first study the scaling behavior of critical field $h_c(t)$ at low temperatures. As shown in Fig. \[fig:2\], $h_c(t)$ data determined from either the peak of field dependent susceptibility $\chi^{zz}(h)=\partial m^z/\partial h$ or the correlation length $\xi\sim L$ (Fig.S3 and Fig.S4 [@SM]) follow the scaling relation of $3d$ BEC, $h_c-h_c(t)\sim t^{d/2}=t^{3/2}$. We then study the finite-temperature crossover in the vicinity of the QCP. At each field the temperature dependent susceptibility $\chi^{zz}(t)$ develops a broad peak, and the peak position defines the crossover temperature $T_{cr}$ (Fig.S4 [@SM]). Near a QCP, $T_{cr}\sim |h-h_c|^{\nu z}$, where $\nu$ is the correlation length exponent. From Fig. \[fig:2\](a) we find that $T_{cr}\sim |h-h_c|$ on both sides of the QCP, consistent with the $(3+2)$D XY universality $z=2$ and $\nu=1/2$.
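Both the crossover scaling $T_{cr}\sim|h-h_c|^{\nu z}$ and the windowing analysis of the thermal energy discussed below amount to estimating local power-law exponents from log-log data. A minimal sketch on synthetic data (the crossover function and scale $t_0$ are illustrative assumptions, not the paper's data):

```python
import numpy as np

def local_exponent(t, E, width=5):
    """Windowing estimate: sliding log-log linear fits giving the local
    exponent d(log E)/d(log t) at the geometric center of each window."""
    lt, lE = np.log(t), np.log(E)
    mids, slopes = [], []
    for i in range(len(t) - width + 1):
        s = slice(i, i + width)
        slopes.append(np.polyfit(lt[s], lE[s], 1)[0])
        mids.append(np.exp(lt[s].mean()))
    return np.array(mids), np.array(slopes)

# Synthetic E(t) crossing over from phi_E = 5/2 (low t) to 3/2 (high t)
t0 = 0.05                      # assumed crossover scale, for illustration
t = np.logspace(-2.5, -0.3, 40)
E = t**2.5 / (1 + (t / t0)**4)**0.25
mids, phi = local_exponent(t, E)
# phi rises from ~1.5 toward... no: phi[0] ~ 2.5 (low-t window), phi[-1] ~ 1.5 (high-t window)
```

Applied to the QMC energy data at $h_c$, this kind of sliding fit is what resolves the gradual $3/2 \to 5/2$ evolution of $\phi_E$ across the dimension crossover.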
Owing to its Q1D structure, the system exhibits a finite-temperature 1D-3D crossover. This is clearly shown in the scaling of the thermal energy, $E=\langle H \rangle/N$, right at the critical field $h_c$. Since $dE/dt=C\sim t^{d/z}$, where $C$ is the specific heat, we expect $E\sim t^{\phi_E}$ with $\phi_E=d/z+1=5/2$. But as shown in Fig. \[fig:3\](a), this scaling fits only at low temperatures, $t\lesssim 0.06$, while for $t\gtrsim 0.2$, $\phi_E\approx3/2$, implying an effective dimension $d_{eff}=1$. A careful windowing analysis [@Sebastian_PRB:2005] in Fig. \[fig:3\](b) finds a gradual increase of $\phi_E$ from about $3/2$ to $5/2$ for $0.06\lesssim t \lesssim 0.2$, clearly indicating a 1D-3D crossover in this temperature regime. The 1D-3D crossover gives rise to rich quantum scaling behaviors. For example, the genuine 3D nature of the QCP is inherent in the low-temperature scaling of susceptibility data in the disordered phase (Fig. \[fig:3\](d)), which satisfies $\chi^{zz} \sim |h-h_c|^{\nu (d+z) -2} X \left( \frac{t}{|h-h_c|^{\nu z}}\right)$ with $d=3$, $\nu=1/2$, and $z=2$. In the quantum critical regime it is expected that $\chi^{zz} \sim |h-h_c|^{d/z+1-2/\nu z} \tilde{X} \left( \frac{|h-h_c|}{t^{1/\nu z}}\right)$ with the same exponents at low temperatures. But the scaling of QMC data above the dimension crossover temperature in Fig. \[fig:3\](c) is consistent with $d_{eff}=1$, $\nu=1/2$, and $z=2$, characterizing a quantum critical TLL behavior.

[*NMR relaxation rate $1/T_1$ and the TLL behavior.*]{} In the TLL regime of an XXZ chain the spin correlations decay algebraically as $ \langle S_0^z S_r^z \rangle - (m^z)^2 \sim \cos ( 2 k_F r ) r^{-1/\eta} $ and $ \langle S_0^x S_r^x \rangle \sim (-1)^r r^{-\eta} $, where $k_F = \pi (1/2-m^z)$ denotes the Fermi wave number of pseudo-fermions mapped from the spin model by a Jordan-Wigner transformation, and the Luttinger exponent $\eta$ determines the decay rate.
For a Heisenberg chain, $\eta<1$ for all fields and the staggered transverse fluctuations are always dominant. On the other hand, for an Ising anisotropic XXZ chain there is an $\eta$ inversion at a field $h_{inv}$, *i.e.* $\eta>1$ for $h<h_{inv}$, where longitudinal fluctuations dominate, and $\eta<1$ for $h>h_{inv}$, where the dominant fluctuations turn to transverse ones [@Okunishi_PRB:2007].
![(a): Temperature dependence of the calculated transverse part of the NMR relaxation rate, $1/T_1^{xy}$, at various fields. (b): Longitudinal part $1/T_1^{zz}$. (c): Same as (a) but in double-logarithmic scale, showing TLL behavior above the ordering temperature. The lines are power-law fits. (d): Extracted $\eta$ exponent from $1/T_1^{xy}$ data. $h_{inv}$ is the crossover field where the dominant fluctuation changes from longitudinal ($\eta>1$) to transverse ($\eta<1$) with increasing field. It is found $h_{inv}\approx h_2$, which separates the LSDW and TAF ground states.[]{data-label="fig:4"}](Fig4.pdf){width="8.5cm"}
To examine whether the TLL behavior near $h_c$ extends to lower fields and to determine the dominant spin fluctuations associated with magnetic ordering we calculate the NMR spin-lattice relaxation rate $1/T_1$, which probes the low-energy spin fluctuations of a magnetic system [@Coira_PRB:2016; @Dupont_PRB:2016]. For simplicity we set the nuclear gyromagnetic ratio and the hyperfine coupling to be unity, and define the longitudinal ($zz$) and transverse ($xy$) relaxation rates as $1/T_1^{\alpha\alpha} = \int d\mathbf{q} \Im\chi^{\alpha\alpha}(\mathbf{q},\omega_0)/\beta\omega_0$, where $\beta=1/t$, $\Im\chi^{\alpha\alpha}(\mathbf{q},\omega_0)$ is the imaginary part of the dynamical susceptibility at NMR frequency $\omega_0\rightarrow0$, $\alpha=x,y,z$, and $1/T_1^{xy}=1/T_1^{xx}+1/T_1^{yy}$. To avoid handling the analytical continuation in QMC simulations, we further adopt an approximation [@Randeria_PRL:1992; @Roscilde], $$\label{Eq:T1QMC}
1/T_1^{\alpha\alpha}\approx\frac{2}{\pi t}\sum_i \langle \delta S^\alpha_i(\beta/2) \delta S^\alpha_i(0)\rangle,$$ where $\delta S^\alpha_i=S^\alpha_i-\langle S^\alpha_i \rangle$. As shown in Fig.S5 [@SM], this approximation gives reasonable results for a Heisenberg chain.
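For small clusters, the imaginary-time approximation above can be evaluated directly by full diagonalization rather than by QMC. The following sketch is illustrative (it is not the estimator implemented in the SSE simulations); it computes $\langle \delta S^\alpha(\beta/2)\,\delta S^\alpha(0)\rangle$ in the energy eigenbasis:

```python
import numpy as np

def corr_beta_half(H, A, beta):
    """<dA(beta/2) dA(0)> with dA = A - <A>, by full diagonalization."""
    w, V = np.linalg.eigh(H)
    w = w - w.min()                          # shift spectrum to avoid exp overflow
    boltz = np.exp(-beta * w)
    Z = boltz.sum()
    Am = V.conj().T @ A @ V                  # A in the energy eigenbasis
    mean_A = (boltz * np.diag(Am).real).sum() / Z
    dA = Am - mean_A * np.eye(len(w))
    e = np.exp(-0.5 * beta * w)
    # Tr[e^{-beta H/2} dA e^{-beta H/2} dA] / Z
    return np.einsum('n,nm,m,mn->', e, dA, e, dA).real / Z

def inv_T1(H, S_list, t):
    """Approximate relaxation rate (2/pi t) * sum_i <dS_i(beta/2) dS_i(0)>."""
    return (2.0 / (np.pi * t)) * sum(corr_beta_half(H, S, 1.0 / t) for S in S_list)

# Check on a single spin-1/2 in a longitudinal field h = 1: the S^z
# autocorrelation is tau-independent, so the correlator equals 1/4 - m^2
Sz = np.array([[0.5, 0.0], [0.0, -0.5]])
val = inv_T1(-1.0 * Sz, [Sz], t=1.0)
m = 0.5 * np.tanh(0.5)
# val equals (2/pi) * (1/4 - m^2)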
The results for the 3D model are shown in Fig. \[fig:4\](a) and (b). At $h=0.7$ where the ground state is a LSDW, the temperature dependent $1/T_1^{zz}$ develops a prominent peak at the ordering temperature, signaling enhanced critical fluctuations. But $1/T_1^{xy}$ only shows a kink at the transition and is significantly suppressed in the ordered phase. At higher fields where the ground state is the TAF, the peak feature is seen in $1/T_1^{xy}$, and $1/T_1^{zz}$ drops rapidly in the ordered phase. Above the transition we find an algebraic temperature dependence of $1/T_1^{xy}$ (Fig. \[fig:4\](c)), signaling a TLL behavior. In a TLL $1/T_1^{xy}\sim T^{\eta-1}$ according to bosonization results [@Bouillot_Thesis:2011]. Fitting to this function, we extract the $\eta$ parameter at each field, as shown in Fig. \[fig:4\](d). Surprisingly $\eta>1$ for $h\lesssim0.85$, indicating dominant longitudinal fluctuations in Heisenberg chains. With increasing field, $\eta$ decreases monotonically, and an $\eta$ inversion takes place at $h_{inv}\approx 0.85$. Interestingly, $h_{inv}\approx h_2$, which separates the LSDW and TAF ground states. The $\eta$ inversion and the peak feature of $1/T_1$ at transition indicate that the condensation of the dominant fluctuations leads to the corresponding type of magnetic order in this system.
[*Discussions and Conclusion. *]{} Our QMC results provide the first clear evidence of a LSDW phase in a 3D model. The calculated $1/T_1$ data unambiguously show that this phase is stabilized by the enhanced incommensurate longitudinal fluctuations in Heisenberg chains. The enhancement of the longitudinal fluctuations originates from the interchain Ising anisotropy and this effect is beyond the conventional mean-field scenario [@Okunishi_PRB:2007] in which the dynamical effects of the interchain couplings are ignored so that the transverse fluctuations always dominate.
When $h>h_{inv}$ the dominant spin fluctuations become transverse and the TAF ground state is correspondingly stabilized. The transverse fluctuations also govern in the quantum critical regime where the system shows quantum critical TLL behavior at intermediate temperatures and converges to the $(3+2)$D XY universality at low temperatures. Such a scenario of quantum criticality generally holds for a broad class of weakly coupled XXZ spin chain systems that have the same symmetry as the model in Eq. .
For the model studied here, we find that the phase diagram is sensitive to the interchain coupling parameters $\varepsilon$ and $J_{ab}$. It is known that increasing $J_{ab}$ favors a TAF order owing to the spin-flop mechanism [@Fisher_PRL:1974]. For fixed $J_{ab}$, the enhancement of longitudinal correlations and hence the stabilization of LSDW only take place when $\varepsilon$ is less than a critical value. As illustrated in Fig.S6 [@SM] the LSDW is absent for $\varepsilon\gtrsim0.5$ at $J_{ab}/J_c=0.2$. When $\varepsilon\rightarrow0$, however, the transverse correlations are only within each chain. Because the TAF order breaks a continuous $U(1)$ symmetry, it can not be stabilized in this limit. In this case the QPT is from the LSDW to the fully polarized phase and belongs to the $(3+1)D$ universality. But for a finite $\varepsilon$ we always find a TAF phase before the magnetization is fully saturated (see Fig.S6 [@SM]). Hence the LSDW is irrelevant to the QPT, which is always controlled by the $(3+2)$D XY universality.
In what follows we discuss implications of our results to YbAlO$_3$ and other related Q1D quantum magnets. It is found that the intrachain exchange coupling of YbAlO$_3$ is almost isotropic but the interchain one is dominant by dipole-dipole interaction containing strong Ising anisotropy [@Wu_NC:2019; @Wu_arXiv:2019]. This is fully captured by the Hamiltonian in Eq. , where the interchain Ising anisotropy is ensured by the finite $\varepsilon<1$. Taking the measured values $J_c\sim0.22$ meV and $g\sim7.6$ [@Wu_NC:2019], we can compare our results with experimental ones for YbAlO$_3$. As shown in Fig. \[Fig:1\](d), the phase boundary of the model agrees qualitatively with the adapted experimental one. In particular, the LSDW state in our model naturally explains the observed unusual incommensurate AFM order. Note that INS measurement suggests a ferromagnetic interchain coupling [@Wu_NC:2019]. Though for demonstration we take $J_{ab}>0$, the stabilization of the LSDW phase and the agreement on the phase boundary indicate that our model has already captured the essential physics of the system, and the results for $J_{ab}<0$ are qualitative the same. For the quantum criticality, the linear scaling of $T_{cr}$ and the quantum critical TLL behavior obtained in our theory are also observed in YbAlO$_3$ [@Wu_NC:2019], while the predicted genuine $(3+2)$D XY universality, which also well applied to other coupled XXZ spin chains such as (Ba,Sr)V$_2$Co$_2$O$_8$, can be tested by future experiments.
Our theory also predicts a TAF order for $h>h_2$. In real materials, the LSDW and TAF orders may coexist or are phase separated [@Grenier_PRB:2015; @Bera_SCVO:2019]. This possibility and the strong anisotropic gyromagnetic tensor in YbAlO$_3$ complicates the detection of transverse spin correlations and the related TAF order by INS measurements [@Wu_NC:2019]. In light of the theoretical results, we hereby propose to probe the dominant low-energy spin fluctuations and associated magnetic ordering by measuring NMR $1/T_1$. Even when the system is highly anisotropic such that only $1/T_1^{zz}$ or $1/T_1^{xy}$ is detectable, the peak or kink feature in the temperature dependent $1/T_1$ near the ordering temperature can still tell the dominant fluctuations and the associated magnetic order, as shown in Figs. \[fig:4\](a) and (b). Moreover, it would be interesting to examine the possible $\eta$ inversion by a careful study on the $1/T_1$ data above the ordering temperature. Such a study can also provide important information on the dominant fluctuations and the underlying magnetic ground state. Given the universal property of TLL, similar analysis can be applied on other related Q1D systems, such as (Ba,Sr)V$_2$Co$_2$O$_8$, where the dominant fluctuations and associated long-range magnetic orders are still under debate [@Klanjsek_PRB:2015; @Grenier_PRB:2015].
In conclusion, we study the field-induced phase diagram and quantum criticality of $S=1/2$ AFM Heisenberg chains with Ising anisotropic interchain couplings. We find the interchain Ising anisotropy enhances incommensurate AFM correlations, stabilizing a LSDW ground state at low fields. The transverse spin correlations dominate at high fields and a TAF ground state is stabilized. This leads to a QCP controlled by a $(3+2)$D XY universality but displaying a finite-temperature 1D-3D crossover. The calculated NMR relaxation rates show enhanced critical fluctuations at magnetic ordering, and the enhancement takes place at particular channel relevant to the underlying magnetic order. Above the ordering temperature, the system exhibits universal TLL behavior and shows an $\eta$ inversion with increasing field, where the dominant spin fluctuation changes from longitudinal to transverse. These features make NMR an ideal probe for the spin fluctuations and associated ground states of coupled spin chains. Our findings thus shed light on future experimental and theoretical studies on YbAlO$_3$ and other Q1D quantum magnets.
[*Acknowledgments.*]{} We thank useful discussion with L. S. Wu. This work was supported by the Ministry of Science and Technology of China (Grant No. 2016YFA0300504), the National Natural Science Foundation of China (Grants No. 11674392, and 51872328), the Fundamental Research Funds for the Central Universities, the Research Funds of Renmin University of China (Grant No. 18XNLG24), the Science and Technology Commission of Shanghai Municipality Grant No. 16DZ226020, and the Outstanding Innovative Talents Cultivation Funded Programs of Renmin University of China. R.Y. acknowledges hospitality at ENS de Lyon, France. J.W. acknowledges additional support from a Shanghai talent program.
[99]{} S. Sachdev, [*Quantum Phase Transitions*]{} (Cambridge University Press, Cambridge, England, 2011).
P. A. Lee, N. Nagaosa, and X.-G. Wen, Rev. Mod. Phys. [**78**]{}, 17 (2006).
Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, Nature [**556**]{}, 43 (2018).
G. R. Stewart, Rev. Mod. Phys. [**73**]{}, 797 (2001).
Special issue on Quantum Phase Transitions, J. Low Temp. Phys. [**161**]{} (2010). J. Zheng, K. Ran, T. Li, J. Wang, P. Wang, B. Liu, Z. Liu, B. Normand, J. Wen, and W. Yu, Phys. Rev. Lett. [**119**]{}, 227208 (2017).
A. B. Zamolodchikov, Int. J. Mod. Phys. A. [**04**]{}, 4235 (1989).
R. Coldea, D. A. Tennant, E. M. Wheeler, E. Wawrzynska, D. Prabhakaran, M. Telling, K. Habicht, P. Smeibidl, K. Kiefer, Science [**327**]{}, 177 (2010).
J. Wu, M. Kormos, and Q. Si, Phys. Rev. Lett. [**113**]{}, 247201 (2014).
Z. Wang [*et al.*]{}, Nature [**554**]{}, 219 (2018).
Z. Wang [*el al.*]{}, Phys. Rev. Lett. [**123**]{}, 067202 (2019).
Q. Faure [*et al.*]{}, Nat. Phys. [**14**]{}, 716 (2018).
Y. Cui [*et al.*]{} Phys. Rev. Lett. [**123**]{}, 067203 (2019).
V. Zapf, M. Jaime, and C. D. Batista, Rev. Mod. Phys. [**86**]{}, 563 (2014).
R. Yu [*et al.*]{}, Nature [**489**]{}, 379 (2012).
T. Giamarchi, [*Quantum Physics in One Dimension*]{} (Oxford Univ. Press, Oxford, 2004).
M. E. Fisher and D. R. Nelson, Phys. Rev. Lett. [**32**]{}, 1350 (1974).
F. D. M. Haldane, Phys. Rev. Lett. [**45**]{}, 1358 (1980). K. Okunishi and T. Suzuki, Phys. Rev. B [**76**]{}, 224411 (2007). S. Kimura, T. Takeuchi, K. Okunishi, M. Hagiwara, Z. He, K Kindo, T. Taniyama, and M. Itoh, Phys. Rev. Lett. [**100**]{}, 057202 (2008).
S. Kimura, M. Matsuda, T. Masuda, S. Hondo, K. Kaneko, N. Metoki, M. Hagiwara, T. Takeuchi, K. Okunishi, Z. He, K. Kindo, T. Taniyama, and M. Itoh, Phys. Rev. Lett. [**101**]{}, 207201 (2008). B. Grenier, V. Simonet, B. Canals, P. Lejay, M. Klanjšek, M. Horvatić, and C. Berthier, Phys. Rev. B [**92**]{}, 134416 (2015). Anup Kumar Bera, Jianda Wu, Wang Yang, Zhe Wang, Robert Bewley, Martin Boehm, Maciej Bartkowiak, Oleksandr Prokhnenko, Bastian Klemke, A. T. M. Nazmul Islam, Joseph Mathew Law, Bella Lake, submitted (2019).
M. Klanjšek, M. Horvatić, S. Krämer, S. Mukhopadhyay, H. Mayaffre, C. Berthier, E. Canévet, B. Grenier, P. Lejay, and E. Orignac, Phys. Rev. B [**92**]{}, 060408(R) (2015). L. Shen, O. Zaharko, J. O. Birk, E. Jellyman, Z. He, and E. Blackburn, New J. Phys [**21**]{}, 073014 (2019). L. S. Wu, [*et al.*]{}, Nat. Commun. [**10**]{}, 698 (2019). C. E. Agrapidis, J. van den Brink, and S. Nishimoto, Phys. Rev. B [**99**]{}, 224423 (2019).
O. F. Sylju[å]{}sen and A. W. Sandvik, Phys. Rev. E [**66**]{}, 046701 (2002).
F. Alet, S. Wessel, and M. Troyer, [**71**]{}, 036706 (2005).
See Supplemental Material for additional plots of QMC data on the phase diagram and quantum criticality of the model studied.
E. Coira, P. Barmettler, T. Giamarchi, and C. Kollath, Phys. Rev. B [**94**]{}, 144408 (2016). M. Dupont, S. Capponi, and N. Laflorencie, Phys. Rev. B [**94**]{}, 144409 (2016). T. Giamarchi, C. Rüegg, and O. Tchernyshyov, Nat. Phys. [**4**]{}, 198 (2008).
S. E. Sebastian, P. A. Sharma, M. Jaime, N. Harrison, V. Correa, L. Balicas, N. Kawashima, C. D. Batista, and I. R. Fisher, Phys. Rev. B [**72**]{}, 100404(R) (2005).
M. Randeria, N. Trivedi, A. Moreo, and R. T. Scalettar, Phys. Rev. Lett. [**69**]{}, 2001 (1992). T. Roscilde, private communications.
P. Bouillot, Ph.D. Thesis, Univ. of Geneva, 2011.
L. S. Wu [*et al.*]{}, arXiv:1904.11513 (2019).
SUPPLEMENTAL MATERIAL – Phase diagram and quantum criticality of Heisenberg spin chains with Ising-like interchain couplings – Implication to YbAlO$_3$ {#supplemental-material-phase-diagram-and-quantum-criticality-of-heisenberg-spin-chains-with-ising-like-interchain-couplings-implication-to-ybalo_3 .unnumbered}
========================================================================================================================================================
![(Color online) Temperature dependence of specific heat $C$ at various field values, which is used to determine the phase boundary in Fig. 1 of the main text. At each field, the transition to an AFM state is signaled as either a peak or a kink feature. []{data-label="Sfig:1"}](FigS1.pdf){width="12.0cm"}
![(Color online) $q$ dependence of the longitudinal \[in (a)\] and transverse \[in (b)\] spin structure factors, $\mathcal{S}^{zz}(\pi,\pi,\pi+q)$ and $\mathcal{S}^{xy}(\pi,\pi,\pi+q)$, respectively, at $t=0.05$ and $h=0.9$ in the TAF phase.[]{data-label="Sfig:2"}](FigS2.pdf){width="12.0cm"}
![(Color online) Finite-size scaling of the correlation length along the $c$ axis, $\xi_c$, at transition to the TAF phase. The determined critical field $h_c$ is plotted in Fig. 3(b) of the main text, and the extracted correlation length exponent $\nu$ agrees with the value of 3D XY universality within error bar.[]{data-label="Sfig:3"}](FigS3.pdf){width="12.0cm"}
![(Color online) (a): Field dependence of susceptibility $\chi^{zz}$ at low temperatures, where the peak determines the critical field $h_c(t)$. (b): Temperature dependence of $\chi^{zz}$ above the ordering temperatures. The peak position (pointed by an arrow) determines the crossover temperature $T_{cr}$ in Fig. 1(d) and Fig 2(a) of the main text.[]{data-label="Sfig:4"}](FigS4.pdf){width="12.0cm"}
![(Color online) Temperature dependence of the calculated transverse part of the NMR relaxation rate, $1/T_1^{xy}$, for a single Heisenberg chain at $h=0$ and $h=1$, respectively. The lines are power-law fits from analytical results in Refs. [@Coira_PRB:2016; @Dupont_PRB:2016].[]{data-label="Sfig:5"}](FigS5.pdf){width="12.0cm"}
![(Color online) Thermal phase diagram of the model in Eq. (1) of the main text for $J_{ab}/J_c=0.2$ and $\varepsilon=0.5$ \[in (a)\] and $\varepsilon=0.2$ \[in (b)\]. No LSDW phase is stabilized at $\varepsilon=0.5$. The TAF phase is stabilized in both cases.[]{data-label="Sfig:6"}](FigS6.pdf){width="12.0cm"}
|
---
abstract: 'We report on spontaneous rotational symmetry breaking in a minimal model of complex macromolecules with branches and cycles. The transition takes place as the strength of the self-repulsion is increased. At the transition point, the density distribution transforms from isotropic to anisotropic. We analyze this transition using a variational mean-field theory that combines the Gibbs-Bogolyubov-Feynman inequality with the concept of the *Laplacian matrix*. The density distribution of the broken symmetry state is shown to be determined by the eigenvalues and eigenvectors of this Laplacian matrix. Physically, this reflects the increasing role of the underlying topological structure in determining the density of the macromolecule when repulsive interactions generate internal *tension* Eventually, the variational free energy landscape develops a complex structure with multiple competing minima.'
author:
- 'Josh Kelly$^{1}$, Alexander Y. Grosberg$^{2}$, and Robijn Bruinsma$^{1,3}$'
bibliography:
- 'thesis2.bib'
title: Generalized Flory Theory for Rotational Symmetry Breaking of Complex Macromolecules
---
It is well known that when attractive interactions between the units (“monomers") of a flexible macromolecule become sufficiently strong, the molecule can undergo a folding transition from a disordered isotropic state to an ordered structure with a specific shape [@pande]. Less familiar is the fact that when repulsive interactions dominate, macromolecules with more complex topologies also can adopt distinct shapes. Examples are dendrimers [@Boris] and certain biopolymers [@ding; @zipper; @gapsys; @AjaykumarGopal2012]. Their shape is determined by competing effects. On the one hand, the combination of thermal fluctuations and short-range repulsive interactions between the monomers favors isotropic swelling, since that maximizes the entropy of the molecule. If, however, the topology of the macromolecule constrains the swelling then this generates internal *tension* along the bonds and this reduces the dominance of thermal fluctuations and entropy. Swollen polymer gels [@RedBook] and polymer brushes in good solvent [@milner] are familiar examples of polymeric systems where swelling induces a tension that both suppresses fluctuations and confers distinct shape. The suppression of thermal fluctuations means that tense macromolecules of this type can be described by mean-field theory [@milner], as opposed to linear polymers that have no internal tension [@RedBook].
Suppose one gradually increases the strength of the repulsions in a soluble macromolecule with a complex topology: is there a well-defined threshold where the molecule develops a distinct shape? If there is such a threshold, what is the nature of the rotational symmetry-breaking transition, and how is the resulting shape related to the underlying topology? Finally, if the number of topological constraints is increased, does a complex macromolecule eventually become over-constrained and “frustrated”, with a free energy landscape that has multiple competing minima [^1]?
In this paper we propose a theory for the development of shape of topologically complex macromolecules for a minimal model that was introduced by Edwards to describe linear polymers and polymer gels [@ball; @deam] in good solvent (i.e., solutions where repulsive interactions dominate). We construct a generalization of Flory mean-field theory and apply this to the Edwards Hamiltonian. We find that the density distribution of complex branched polymers indeed undergoes a transition where it loses rotational symmetry. The structure of the broken symmetry state is determined by the eigenvalues and eigenvectors of the *Laplacian matrix* of the molecule, a concept borrowed from graph theory. As the strength of the repulsive interactions further increases, a complex energy landscape emerges with multiple competing minima. We find that at least the coarse-grained features of the density distribution of complex macromolecules and the tension profile can be predicted on the basis of the eigenvalues and eigenvectors of the Laplacian matrix.
The Edwards Hamiltonian for a macromolecule is defined by $$\beta H = \frac{d}{2a^2} {\sum_{i<j}}^{\prime}({\bf{r}}_i - {\bf{r}}_j)^2+ \sum_{i<j}u(|{\bf{r}}_i - {\bf{r}}_j|)
\label{ch3:H2}$$ The summations are here over $N$ point-like monomers located at sites ${\bf{r}}_i$ with $i=1,2,\ldots,N$ that are linked into a connected network by harmonic springs. The prime in $H_0$, the first term on the right-hand side, indicates that this double sum is to be restricted to monomer pairs that are linked by springs. The second term in Eq.\[ch3:H2\] represents short-range repulsive monomer-monomer interactions with strength $v=\int u(r) d^dr$ and range $\sigma$ in units of $a$. The summation now is over all monomer pairs. The Edwards Hamiltonian is realized by a network of cross-linkers connected by ideal polymer chains that have an RMS radius of gyration $a$.
Assume that the radius of gyration $R_0$ of the molecule for $v=0$ has been determined (we will shortly see how). The radius of gyration for $v\neq 0$ can then be obtained by minimizing the Flory variational free energy [@RedBook] $$\begin{split}
&\beta F_F(R) =\left(\frac{R^2}{R_0^2}\right)+v \frac{N^2}{R^d}
\label{Flory}
\end{split}$$ (dropping numerical coefficients). The first term on the right-hand side represents entropic elasticity resisting the swelling while the second term represents the osmotic swelling pressure due to monomer-monomer repulsion, expressed in second virial form. Minimizing $F_F(R)$ with respect to $R^2$ leads to the familiar result that $(R/R_0)^2$ increases as $(v N^2/R_0^3)^{2/5}$ in $d=3$ when the strength of the repulsion increases. Flory theory implicitly assumes a uniform and isotropic density.
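As a quick numerical illustration (ours, not part of the original analysis), the Flory scaling can be checked by minimizing the free energy above directly; the values of $N$, $R_0$, and $v$ are arbitrary:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def flory_radius(v, N, R0, d=3):
    # Flory free energy F(R) = (R/R0)^2 + v N^2 / R^d, with numerical
    # coefficients dropped as in the text.
    f = lambda R: (R / R0)**2 + v * N**2 / R**d
    return minimize_scalar(f, bounds=(1e-3 * R0, 1e3 * R0),
                           method='bounded').x

N, R0 = 1000, 10.0
vs = [1e-2, 1e-1, 1.0]            # one decade apart each
Rs = np.array([flory_radius(v, N, R0) for v in vs])

# At the minimum R^{d+2} ~ v N^2 R0^2, so R grows as v^{1/5} in d=3:
# each decade in v should multiply R by 10^{1/5} ~ 1.585.
print(Rs[1:] / Rs[:-1], 10**0.2)
```

In the swollen regime this reproduces the $(v N^2/R_0^3)^{2/5}$ growth of $(R/R_0)^2$ quoted above.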
In order to allow for an anisotropic density, we first recast the Flory variational energy as a special case of the Gibbs-Bogolyubov-Feynman (GBF) variational principle [@SupMat Section 1], which states that $$F \leq F_T + \left<(H - H_T)\right>_T$$ Here, $\left< \ldots \right>_T$ indicates that a Boltzmann average is to be taken with respect to the trial Hamiltonian $H_T$. $F_T$ is the free energy associated with $H_T$. The variational free energy $F_V=F_T + \left< (H - H_T) \right>_T$ provides an upper bound for the free energy.
For $H_T$, we will use generalized versions of $H_0$ expressed in terms of the eigenvectors and eigenvalues of the $N\times N$ real, square, symmetric Laplacian matrix $L_{i,j}$. The Laplacian matrix is the Laplace operator in matrix form defined on a graph of the nodes and bonds of the molecule. It has been extensively studied in the context of graph theory [@LaplacianNotes]. Diagonal entries $L_{n,n}$ are equal to the number of monomers linked to monomer $n$ (“vertex degree") while off-diagonal entries $L_{n,m}$ are equal to $-1$ if monomer $n$ and $m$ are linked and $0$ if they are not. The rows and columns of $L_{i,j}$ add to zero so the $N$ component vector with entries equal to one is an eigenvector with eigenvalue $0$. The other eigenvalues $\lambda^{(j)}$, with $j=1,2,...N-1$, are strictly positive for a connected graph. The lowest non-zero eigenvalue $\lambda^{(1)}$, henceforth denoted by $\lambda$, is known as the “spectral gap” [^2]. Note that the eigenvalues and eigenvectors of the Laplacian matrix reflect only the topology of the graph of the molecule and do not relate to the geometrical space in which the molecule is embedded.
In terms of the Laplacian matrix, $H_0$ can be written as an unrestricted summation over all particles in the form $\beta H_0 =\left(\frac{d}{2a^2}\right){\sum\limits_{i,j}}L_{i,j}{\mathbf{r}_i}\cdot{{\mathbf{r}}_j}$. This can be usefully expressed in the form $\beta H_0 =\frac{d}{ 2}\sum\limits_{j=1}^{N-1}\lambda^{(j)}|{\bf{A}}^{(j)}|^2$ where the $\lambda^{(j)}$ are the (rank-ordered) eigenvalues of the Laplacian matrix and where the ${\bf{A}}^{(j)}=\sum\limits_{i=1}^{N}{\bf{r}}_{i}\xi_i^{(j)}/a$ are the normal mode amplitudes. The latter are vectors in the $d$-dimensional embedding space but expressed in terms of the orthonormal $N$-component eigenvectors $\xi^{(j)}$ of the Laplacian matrix [@Nitta1]. The mode amplitudes can be viewed as analogs of the Fourier amplitudes describing the displacements of the nodes of a graph embedded in a $d$-dimensional space [^3].
The mean square radius of gyration $R_0^{2}$ of the ideal molecule can be expressed in terms of the eigenvalues as $\frac{a^2}{N}\sum_{j=1}^{N-1}{\frac{1}{\lambda^{(j)}}}$ [@Nitta1], which can be viewed as a generalization of the Kramers Theorem [@Rubinstein]. If the spectral gap $\lambda$ is small compared to the higher eigenvalues then this reduces to $\lambda\simeq a^2/(NR_0^2)$ [^4]. An important unphysical feature of $H_0$ is that it has $d$ zero modes (associated with translation symmetry) whereas the correct number of zero modes of a physical molecule – including translation and rotation symmetry – is $d(d+1)/2$ ($3$ in $d = 2$ and $6$ in $d = 3$).
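For concreteness, the Laplacian spectrum and the resulting $R_0^2$ can be computed directly. The sketch below (our illustration) uses an $N$-monomer ring, for which the standard result $R_0^2\propto N$ holds, as noted in the footnote:

```python
import numpy as np

def ring_laplacian(N):
    # Graph Laplacian of an N-monomer ring: vertex degree 2 on the
    # diagonal, -1 for each linked pair (with periodic wrap-around).
    I = np.eye(N)
    return 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)

a, N = 1.0, 200
lam = np.linalg.eigvalsh(ring_laplacian(N))   # ascending; lam[0] ~ 0
gap = lam[1]                                  # spectral gap
R0sq = a**2 / N * np.sum(1.0 / lam[1:])       # Kramers-type formula

print(gap, (2 * np.pi / N)**2)   # gap of a large ring ~ (2 pi / N)^2
print(R0sq / N)                  # -> a^2/12, i.e. R0^2 grows linearly in N
```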
As a first example of the use of the variational method to include monomer-monomer interaction, assume that $H_T$ equals $H_0$ except that the lowest non-zero eigenvalue, the spectral gap $\lambda$, is replaced by a variational parameter $\gamma$. So: $$\beta H_T =\frac{d}{ 2}\gamma|{\bf{A}}^{(1)}|^2+\frac{d}{ 2}\sum_{i=2}^{N-1}\lambda^{(i)}|{\bf{A}}^{(i)}|^2
\label{H0}$$ This leads to a variational free energy: $$\beta F_V(\gamma) \simeq \frac{d}{2}\left(\ln\gamma+\frac{\lambda}{\gamma}\right) + C(N)\frac{v}{a^d} \left( \frac{\gamma d}{2\pi} \right)^{d/2}
\label{FloryV}$$ where $C(N)=\sum_{m<n=1}^N \left(({\xi}_m-{\xi}_n)^2+\gamma\sigma^2 \right)^{-d/2}$ with $\xi_m$ the components of the eigenvector associated with the spectral gap $\lambda$ and $\sigma$ the range of the excluded volume interaction in units of $a$ (the derivation is given in [@SupMat Section III]). Using the normalization $\sum_{m=1}^N{\xi}_m^2=1$ and assuming a (large) random structure gives $C(N)\propto N^{2+d/2}$. The resulting variational expression reduces to Flory mean-field theory if one replaces $\gamma$ by $a^2/(NR^2)$.
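The one-parameter minimization is easy to carry out numerically. The sketch below (our illustration; the chain length, $v$, and $\sigma$ are arbitrary, with $a=1$) evaluates $F_V(\gamma)$ for a linear chain and confirms that repulsion shifts the optimal $\gamma$ below the bare spectral gap, i.e. the molecule swells:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def path_laplacian(N):
    # Laplacian of a linear chain: end monomers have vertex degree 1.
    L = np.diag(np.r_[1.0, 2.0 * np.ones(N - 2), 1.0])
    L -= np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    return L

N, d, sigma, v = 50, 3, 1.0, 0.5
lam_all, vecs = np.linalg.eigh(path_laplacian(N))
lam, xi = lam_all[1], vecs[:, 1]      # spectral gap and its eigenvector

iu = np.triu_indices(N, k=1)
dif2 = ((xi[:, None] - xi[None, :])**2)[iu]   # (xi_m - xi_n)^2, m < n

def F_V(gamma):
    # Variational free energy with C(N) evaluated from the gap eigenvector.
    C = np.sum((dif2 + gamma * sigma**2)**(-d / 2))
    return d / 2 * (np.log(gamma) + lam / gamma) \
        + C * v * (gamma * d / (2 * np.pi))**(d / 2)

gamma_star = minimize_scalar(F_V, bounds=(1e-6, 10 * lam),
                             method='bounded').x
print(gamma_star, lam)   # gamma* < lambda: the chain is swollen
```

Since the repulsive term is monotonically nondecreasing in $\gamma$ while the entropic term alone is minimized at $\gamma=\lambda$, the minimizer always sits below $\lambda$ for $v>0$.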
Next, allow for the possibility of an anisotropic density by including in $H_T$ non-zero expectation values ${\bf{A}}_{0}^{(i)}$ for the $M$ mode amplitudes with the lowest $M$ eigenvalues: $$\beta H_T = \frac{d}{ 2} \left(\sum_{i=1}^M\gamma^{(i)}\left( {\bf{A}}^{(i)}-{\bf{A}}_{0}^{(i)} \right)^2+\sum_{i=M+1}^{N}\lambda^{(i)}|{\bf{A}}^{(i)}|^2\right)
\label{ch3:H4}$$ The special case $\gamma^{(i)}=\lambda^{(i)}$ and $M=N$ is interesting. The set of order-parameters ${\bf{A}}_{0}^i$ then defines a set of $N$ particle vectors ${{\bf{r}}_0}_j/a=\sum_{i=1}^{N-1} {\bf{A}}_{0}^i\xi_j^{(i)}$ (up to an overall translation). Expressing the trial Hamiltonian in real space leads to: $$\beta H_T = \frac{d}{2a^2}{\sum_{i<j}}^{\prime}({\bf{r}}_i - {\bf{r}}_j-\Delta {{\bf{r}}_0}_{ij})^2
\label{GNM}$$ where $\Delta {{{\bf{r}}}_0}_{i,j}=({{\bf{r}}_0}_i-{{\bf{r}}_0}_j)$. This is similar to the Hamiltonian of the ideal molecule except that Gaussian bonds ${\bf{r}}_i - {\bf{r}}_j$ linking monomers $i$ and $j$ have been placed under internal tension so the expectation value of the bond separation $\Delta {{\bf{r}}^0}_{i,j}$ has a certain direction in space. Formally, Eq.\[GNM\] is identical to the Gaussian Network Model that is frequently used to obtain the normal modes of folded proteins [@gaussian]. It also has the appropriate number of zero modes.
Next, include both types of variation. The simplest case is again $M=1$: $$\begin{split}
\beta F_V(\gamma,{\bf{A}}_0) & \simeq \frac{d}{2}\ln\gamma+\frac{d}{2}\lambda\left(|{\bf{A}}_0|^2+\frac{1}{\gamma}\right) \\& + C(N)\frac{v}{a^d} \left( \frac{\gamma d}{2\pi} \right)^{d/2} e^{ -\gamma d{\bf{A}}_0^2/2}
\label{M=1}
\end{split}$$ The function $F_V(\gamma,|{\bf{A}}_0|)$ always has a stable minimum at $|{\bf{A}}_0|=0$, which corresponds to Flory theory, but the surface $F_V(\gamma,{\bf{A}}_0)$ has a second minimum for a large $\gamma$ and a non-zero value of $|{\bf{A}}_0|$. The density distribution described by this minimum has a stretched, linear shape with particle locations determined by the eigenvector $\xi_m$ of the Laplacian matrix associated with the spectral gap. As a function of increasing $v/a^d$, the absolute minimum usually shifts discontinuously from the Flory minimum to the new minimum. An exception is the case of a linear chain, for which the Flory minimum is the absolute minimum for any $v$.
While the $M=1$ theory is analytically tractable, it can be shown that it only allows for linearly stretched shapes and that it is necessary to include multiple modes to completely lift the overlap between particles [@SupMat Section I]. The variational free energy $F_V(\{\gamma^{(i)},{\bf{A}}_0^{(i)}\})$ for $M$ coupled vectorial order parameters is a natural extension of Eq. \[M=1\] (derived in [@SupMat Section III]) but minimization of $F_V(\{\gamma^i,{\bf{A}}_0^{(i)}\})$ requires numerical methods. Numerical minimization of $F_V(\{\gamma^i,{\bf{A}}_0^{(i)}\})$ for a second-generation dendrimer in $d=2$ showed that for increasing $v/a^d$, there is now a whole series of transitions where modes with increasing eigenvalues “freeze out”. Importantly, the interacting system has the correct number of zero modes in $d=2$. The numerical minimization of $F_V(\{\gamma^i,{\bf{A}}_0^{(i)}\})$ for a 36-node branched graph with a maximum of $M=18$ mode expectation values is shown in Fig. \[MCS\].
{width="3.4in"}
\[MCS\]
For $v/a^2$ less than about $0.75$, the isotropic Flory minimum was the lowest free energy state, as illustrated by the case $v/a^2 = 0.24$. However, for $v/a^2=1.20$ the density profile is quite anisotropic with three diffuse maxima. The power spectrum of mode amplitudes in this state is dominated by the lowest few eigenvalues. For $v/a^2=2.05$, all of the $M=18$ modes have acquired non-zero expectation values and the power spectrum is more complex, with a second peak at larger eigenvalues. Note that the density profile is quite detailed. The system appears to be frozen. However, for larger values of $v/a^2$ the numerical minimization of the variational free energy was significantly complicated by the fact that the variational free energy clearly had numerous minima with comparable energies. The last density profile should be viewed only as representative.
For comparison, we also performed $d=2$ Monte-Carlo (MC) simulations on the same system (Fig.\[ch3:d20\]).
![Density profiles obtained by Monte-Carlo simulation for the same molecule and interaction strengths as Fig. \[MCS\]. White scale bar: $5a$.[]{data-label="ch3:d20"}](MC){width="3.0in"}
One of the nodes was pinned to suppress rigid-body Brownian motion of the center of mass. The Kabsch algorithm [@kabsch] was used to compensate for rigid-body rotational Brownian motion. The top left image in Fig.\[ch3:d20\], with $v/a^2 = 0.24$, has a radius of gyration comparable to the theoretical prediction and a weak but noticeable rotational asymmetry. For $v/a^2 = 1.20$, the predicted and computed densities have comparable sizes and both have three maxima. The onset of rotational asymmetry thus appears to be less sharp than predicted by the theory while for $v/a^2 = 2.05$ the theoretical density profile is significantly more detailed than the computed profile. The MC simulations were, in this last case, complicated by long relaxation times.
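For reference, the Kabsch step used above admits a compact SVD-based implementation. The following is a generic sketch (not the simulation code), shown in $d=2$:

```python
import numpy as np

def kabsch(P, Q):
    # Optimal rotation aligning point set P to Q (both (N, d) arrays)
    # after removing the centroids; translation is handled by centering.
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    # Correct a possible reflection so that det(R) = +1.
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ D @ U.T

rng = np.random.default_rng(0)
P = rng.normal(size=(30, 2))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Q = P @ R_true.T + np.array([1.0, -2.0])   # rotated and shifted copy
R_est = kabsch(P, Q)                       # recovers R_true
```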
As an alternative route for a quantitative test of the theory, we compared the moduli $|\Delta {{{\bf{r}}}_0}_{i,j}|$ of the bond extensions predicted by the GBF variational principle with those obtained from the MC simulation. We found that the variational method correctly obtains the bond extensions of the outer monomers while it somewhat overestimates the bond extensions of the inner monomers ([@SupMat Section IV]). Note that the development of significant internal tension provides an *a posteriori* justification of the use of a self-consistent mean-field theory. The agreement in terms of the bond tensions but not in terms of detailed density profiles indicates that the competing free energy minima may have similar patterns of bond tension. An important extension of the theory would be to allow for the fact that this is a finite system with multiple minima contributing to the free energy. This would lead to “smearing" of the rotational symmetry breaking transition.
A natural area where this theory can be applied is that of biopolymers with non-trivial topology that are dominated by repulsive interactions. An increasing number of functional but disordered proteins has been identified. The interactions between the different parts of these proteins are predominantly repulsive (“good solvent”) yet they have distinct, reproducible shapes [@ding; @zipper], as confirmed by Molecular Dynamics simulations [@gapsys]. Though proteins have a linear polymer primary structure, they still can effectively adopt a complex topology due to attractive interactions between specific residues, for instance between cysteine residues that can form disulfide bridges. Another possible area of application involves the shape of large, single-stranded RNA molecules. A graph of the secondary structure of an RNA molecule has a branched topology without circuits. The tertiary structure of an RNA molecule is generated by pairing between non-adjacent nucleotides that were not paired as part of the secondary structure, and these tertiary contacts could be included as bonds in the graph of the molecule, which would produce cycles. Cryo-EM studies of large, swollen single-stranded RNA molecules in good solvents reveal that they are disordered but their density profile has a distinct anisotropy [@gopal2011]. Current methods of DNA origami allow for the construction of molecular structures with prescribed topologies that are reasonably represented by the Edwards Hamiltonian, which allows for direct experimental tests of the proposed theory. A rotational symmetry breaking transition could be engineered by changing the solvent quality.
We close by noting that there is a related problem where the method discussed in this paper could be applied, namely the computation of the most likely structure of a biopolymer for which it already has been experimentally determined that certain elements of the primary structure are adjacent to each other, for example in the form of a contact map obtained by NMR [@gutin1994]. Though the distance constraints are here *knowledge-based* rather than physical bonds or links, the Laplacian matrix method still could be used to encode the NMR contact map, after which likely density profiles could be computed using the method we outlined here.
RB thanks Alex Levine for useful discussion and acknowledges support from NSF-DMR under Grant 1006128 and from the Simons Foundation. Both A.Y.G. and R.B. wish to acknowledge the Aspen Center for Physics supported by the National Science Foundation (USA) under Grant No. PHY-1066293 where this work was started.
[^1]: We focus here on macromolecules with specific, prescribed structures, which is the case of interest for biomolecules. For a discussion of quenched or annealed averages over a class of structures, see refs. [@Grosberg1; @kantor; @grosberg1995]
[^2]: Analytical expressions for the eigenvalues are available for linear chains, cubic lattices, dendrimers, and a variety of fractal structures [@doi:10.1063/1.4794921; @dolgushev2016extended; @julaiti]. Efficient algorithms are available for the numerical computation of the eigenvalues and eigenvectors.
[^3]: Examples of eigenvalues and eigenvectors of the Laplacian matrix and of the normal modes are given in [@SupMat Section II]
[^4]: For the case of the ring polymer in [@SupMat Section II], the computation of the mean square radius of gyration reproduces the standard result that $R_0^{2}\propto N$. The ${\bf{A}}^{(1)}$ spectral gap mode corresponds to an expansion of the $N$ particles from a point to a ring with radius $\left| {\bf{A}}^{(1)} \right|$.
---
abstract: |
Many inverse problems involve two or more sets of variables that represent different physical quantities but are tightly coupled with each other. For example, image super-resolution requires joint estimation of the image and motion parameters from noisy measurements. Exploiting this structure is key for efficiently solving these large-scale optimization problems, which are often ill-conditioned.
In this paper, we present a new method called Linearize And Project (LAP) that offers a flexible framework for solving inverse problems with coupled variables. LAP is most promising for cases when the subproblem corresponding to one of the variables is considerably easier to solve than the other. LAP is based on a Gauss–Newton method, and thus after linearizing the residual, it eliminates one block of variables through projection. Due to the linearization, this block can be chosen freely. Further, LAP supports direct, iterative, and hybrid regularization as well as constraints. Therefore LAP is attractive, e.g., for ill-posed imaging problems. These traits differentiate LAP from common alternatives for this type of problem such as variable projection (VarPro) and block coordinate descent (BCD). Our numerical experiments compare the performance of LAP to BCD and VarPro using three coupled problems whose forward operators are linear with respect to one block and nonlinear for the other set of variables.
author:
- 'James L. Herring[^1]'
- 'James G. Nagy'
- Lars Ruthotto
bibliography:
- '2016-CoupledSuperRes.bib'
title: 'LAP: a Linearize and Project Method for Solving Inverse Problems with Coupled Variables'
---
Nonlinear Least-Squares, Gauss–Newton Method, Inverse Problems, Regularization, Image Processing, Variable Projection
65F10, 65F22, 65M32
Introduction {#sec:intro}
============
We present an efficient Gauss–Newton method called Linearize And Project (LAP) for solving large-scale optimization problems whose variables consist of two or more blocks representing, e.g., different physics. Problems with these characteristics arise, e.g., when jointly reconstructing image and motion parameters from a series of noisy, indirect, and transformed measurements. LAP is motivated by problems in which the blocks of variables are nontrivially coupled with each other, but some of the blocks lead to well-conditioned and easy-to-solve subproblems. As two examples of such problems arising in imaging, we consider super resolution [@ParkEtAl2003; @FarsiuEtAl2004; @ChungEtAl2006] and motion corrected Magnetic Resonance Imaging (MRI) [@BatchelorEtAl2005; @CorderoEtAl2016].
A general approach to solving coupled optimization problems is to use alternating minimization strategies such as Block Coordinate Descent (BCD) [@HardieEtAl1997; @NocedalWright1999]. These straightforward approaches can be applied to most objective functions and constraints and also provide flexibility for using various regularization strategies. However, alternating schemes have been shown to converge slowly for problems with tightly coupled blocks [@NocedalWright1999; @ChungEtAl2006].
One specific class of coupled optimization problems, which has received much attention, is separable nonlinear least-squares problems; see, e.g., [@GolubPereyra1973; @GolubPereyra2003; @OLearyRust2013]. Here, the variables can be partitioned such that the residual function is linear in one block and nonlinear in the other. For brevity, we will refer to these sets of variables as the *linear* and *nonlinear block of variables*, respectively. One common method for solving such problems is Variable Projection (VarPro) [@GolubPereyra1973; @GolubPereyra2003; @OLearyRust2013]. The idea is to derive a nonlinear least-squares problem of reduced size by eliminating the linear block of variables through projections. VarPro is most effective when the projection, which entails solving a linear least-squares problem, can be computed cheaply and accurately. When the number of linear variables in the problem is large, iterative methods can be used to compute the projection [@FohringEtAl2014], but iterative methods can become inefficient when the least-squares problem is ill-posed. Further, standard iterative methods do not provide bounds on the optimality, which are needed to ensure well-defined gradients; see our discussion in Sec. \[sub:varpro\]. We note that recently proposed methods such as in [@EstrinEtAl2017] can provide that information and are, thus, attractive options in these cases. Another limitation of VarPro is that it is not straightforward to incorporate inequality constraints on the linear variables. Some progress has been made for box-constraints using a pseudo-derivative approach [@SimaVanHuffel2007] leading to approximate gradients for the nonlinear problem. Finally, VarPro limits the options for adaptive regularization parameter selection strategies for the least-squares problem.
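On a toy separable problem, the VarPro reduction is only a few lines. The sketch below (our illustration, unrelated to the imaging application) fits a two-term exponential model; the linear coefficients are eliminated by a least-squares solve inside the nonlinear residual, and the data are noise-free for clarity:

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 3, 40)
w_true, x_true = np.array([0.5, 2.0]), np.array([1.0, 3.0])

def A(w):
    # Model matrix with columns exp(-w_j t): linear in x, nonlinear in w.
    return np.exp(-np.outer(t, w))

d = A(w_true) @ x_true

def reduced_residual(w):
    # Projection step: eliminate the linear block for this w ...
    x = np.linalg.lstsq(A(w), d, rcond=None)[0]
    # ... and return the residual of the reduced nonlinear problem.
    return A(w) @ x - d

res = least_squares(reduced_residual, x0=np.array([0.4, 1.5]))
w_est = res.x
x_est = np.linalg.lstsq(A(w_est), d, rcond=None)[0]
print(w_est, x_est)
```

The outer solver only ever sees the small nonlinear block; each residual evaluation pays for one dense least-squares solve, which is exactly the cost trade-off discussed above.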
As an alternative to the methods above, we propose the LAP method for solving coupled optimization problems. Our contributions can be summarized as follows:
- We propose an efficient iterative method called LAP that computes the Gauss–Newton step at each iteration by eliminating one block of variables through projection and solving the reduced problem iteratively. Since projection is performed after linearization, any block can be eliminated. Hence the LAP framework offers superior flexibility compared to existing projection-based approaches, e.g., by supporting various types of regularization strategies and the option to impose constraints for all variables.
- We demonstrate LAP’s flexibility for different regularization strategies including Tikhonov regularization using the discrete gradient operator with a fixed regularization parameter and a hybrid regularization approach [@ChungEtAl2008], which simultaneously computes the search direction and selects an appropriate regularization parameter at each iteration.
- We use projected Gauss–Newton to implement element-wise lower and upper bound constraints with LAP on the optimization variables. This is a distinct advantage over VarPro, where previously proposed methods for incorporating inequality constraints require using approximate gradients.
- We present numerical experiments for several separable nonlinear least-squares problems. The problems are characterized by linear imaging models and nonlinear motion models with applications including 2D and 3D super-resolution and motion correction for magnetic resonance imaging. We compare the performance of LAP with block coordinate descent (BCD) and variable projection (VarPro) by analyzing convergence, CPU timings, number of matrix-vector multiplications, and the solution images and motion parameters.
- We provide our MATLAB implementation of LAP including the examples used in this paper freely at
<https://github.com/herrinj/LAP>
Our paper is organized as follows: Sec. \[sec:discrete\] introduces a general formulation of the motion-corrected imaging problem, which we use to motivate and illustrate LAP, and briefly reviews BCD and VarPro; Sec. \[sec:LAP\] explains our proposed scheme, LAP, for the coupled Gauss–Newton iteration along with a discussion of regularization options and implementation of bound constraints using projected Gauss–Newton; and Sec. \[sec:experiments\] provides experimental results for several examples using LAP and compares it with BCD and VarPro. We end with a brief summary and some concluding remarks.
Motion-Corrected Imaging Problem {#sec:discrete}
================================
In this section, we give a general description of coupled optimization problems arising in image reconstruction from motion affected measurements and briefly review Block Coordinate Descent (BCD) and Variable Projection (VarPro).
We follow the guidelines in [@Modersitzki2009], and consider images as continuously differentiable and compactly supported functions on a domain of interest, $\Omega \subset \mathbb{R}^d$ (typically, $d = 2$ or $3$). We assume that the image attains values in a field $\mathbb{F}$ where $\mathbb{F}=\mathbb{R}$ corresponds to real-valued and $\mathbb{F}=\mathbb{C}$ to complex-valued images. We denote by $\bfx \in \mathbb{F}^n$ a discrete image obtained by evaluating a continuous image at the cell-centers of a rectangular grid with $n$ cells.
The discrete transformation $\bfy\in\R^{d\cdot n}$ is obtained by evaluating a function $y: \Omega \to \R^d$ at the cell-centers and can be visualized as a transformed grid. For the rigid, affine transformations of primary interest in this paper, transformations are comprised of shifts and rotations and can be defined by a small set of parameters denoted by the variable $\bfw$, but in general, the number of parameters defining a transformation may be large. We observe that under a general transformation $\bfy(\bfw)$, the cell-centers of a transformed grid do not align to the cell-centers of the original grid, so to evaluate a discretized image under a transformation $\bfy$, we must interpolate using the known image coefficients $\bfx$. This interpolation can be represented via a sparse matrix $T(\bfy(\bfw)) \in \mathbb{R}^{n \times n}$ determined by the transformation. For the examples in this paper, we use bilinear or trilinear interpolation corresponding to the dimension of the problem, but other alternatives are possible; see, e.g., [@Modersitzki2009]. The transformed version of the discrete image $\bfx$ is then expressed as a matrix-vector product $T(\bfy(\bfw)) \bfx$.
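A minimal stand-in for the action of $T(\bfy(\bfw))$ on a 2D image, using bilinear interpolation via `scipy.ndimage.map_coordinates` (the parameterization of $\bfw$ and the centering convention below are our own illustrative choices, not the paper's implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_rigid_2d(x_img, w):
    # w = (theta, s0, s1): rotation about the image center plus a shift.
    # Each output cell center is pulled back through the inverse map and
    # the input image is sampled bilinearly (order=1), i.e. T(y(w)) x.
    theta, s0, s1 = w
    n0, n1 = x_img.shape
    rows, cols = np.meshgrid(np.arange(n0), np.arange(n1), indexing='ij')
    c0, c1 = (n0 - 1) / 2.0, (n1 - 1) / 2.0
    r = np.cos(theta) * (rows - c0) - np.sin(theta) * (cols - c1) + c0 - s0
    c = np.sin(theta) * (rows - c0) + np.cos(theta) * (cols - c1) + c1 - s1
    return map_coordinates(x_img, [r, c], order=1, mode='constant')

x_img = np.zeros((32, 32))
x_img[12:20, 12:20] = 1.0
y_img = apply_rigid_2d(x_img, (0.0, 3.0, -2.0))   # pure shift
```

For a pure integer shift the pull-back coordinates are integers, so the interpolation is exact; a rotation would produce genuinely interpolated values, reflecting the sparse-matrix structure of $T$.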
Using the above definitions of discrete images and their transformations, we then consider the discrete, forward problem for $N$ distinct data observations, $$\label{eq:fwd}
\bfd_k = K_k T(\bfy(\bfw_k))\ \bfx + \epsilon_k, \quad \text{ for all } \quad k=1,2,\ldots,N,$$ where $\bfd_k \in \mathbb{F}^{m_k}$ is the measured data, $K_k \in \mathbb{F}^{m_k \times n}$ is a matrix corresponding to the problem-specific image operator, and $\epsilon_k$ is image noise, which we assume to be independently and identically distributed Gaussian noise.
In this paper, we focus on the case in which the motion can be modeled by a small number of parameters. In our case, $\bfw_k \in \R^3$ or $\mathbb{R}^6$ models rigid transformations in 2D and 3D, respectively. The total dimension of the motion parameters across all data measurements is then given by $p = 3N$ or $p=6N$ for $d=2$ and $d=3$, respectively. In the application at hand, we note that $p \ll n$. To simplify our notation, we use the column vectors $\bfd \in \mathbb{F}^{m}$, where $m = \sum_{k=1}^N m_k$, and $\bfw \in \R^p$ to denote the data and motion parameters for all $N$ measurements, respectively.
Given a set of measurements $\{\bfd_1, \bfd_2, \ldots, \bfd_N\}$, the motion-corrected imaging problem consists of jointly estimating the underlying image parameters and the motion parameters in \[eq:fwd\]. We formulate this as the coupled optimization problem $$\label{eq:optProb}
\begin{split}
\min_{\bfx \in \mathcal{C}_x, \bfw \in \mathcal{C}_w} & \Phi(\bfx, \bfw) = \hf \| \bfK \bfT(\bfw) \bfx - \bfd \|^2 + \frac{\alpha}{2} \| \bfL \bfx \|_2^2 \text{,}
\end{split}$$
where the matrices $\bfK$ and $\bfT$ have the following block structure $$\bfK = \begin{bmatrix}
K_1 & & & \\
& K_2 & & \\
& & \ddots & \\
& & & K_N \\
\end{bmatrix} \text{\quad and \quad}
\bfT(\bfw) = \begin{bmatrix}
T(\bfy(\bfw_1))\\
T(\bfy(\bfw_2))\\
\vdots \\
T(\bfy(\bfw_N))
\end{bmatrix}.$$
We denote by $\mathcal{C}_x \subset \mathbb{F}^n$ and $\mathcal{C}_w \subset \mathbb{R}^p$ rectangular sets used to impose bound constraints on the image and motion parameters. Lastly, we regularize the problem by adding a chosen regularization operator, $\bfL$ (e.g., a discrete image gradient or the identity matrix) and a regularization parameter, $\alpha>0$, that balances minimizing the data misfit and the regularity of the reconstructed image. We note that finding a good regularization parameter $\alpha$ is a separate, challenging problem which has been widely researched [@ChungEtAl2008; @deSturlerKilmer2011; @HaberOldenburg2000; @Vogel2002]. One strength of LAP is that, for $\bfL = \bfI$, it allows for regularization methods that automatically select the $\alpha$ parameter [@ChungEtAl2008; @GazzolaNagy2014; @GazzolaNovati2014; @GazzolaEtAl2014]. In our numerical experiments, we investigate one such hybrid method for automatic regularization parameter selection [@ChungEtAl2008] as well as direct regularization using a fixed $\alpha$ value.
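For $\bfL$ a discrete gradient and fixed $\alpha$, the regularized least-squares subproblem in $\bfx$ (with $\bfw$ held fixed) can be solved by stacking the misfit and regularization blocks and applying LSQR. The sketch below is our illustration, using a random sparse matrix as a stand-in for $\bfK\bfT(\bfw)$ and a 1D forward-difference operator for $\bfL$:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
n = 64
A = sp.random(2 * n, n, density=0.1, random_state=2, format='csr')
x_true = rng.normal(size=n)
d = A @ x_true + 1e-2 * rng.normal(size=2 * n)

# Discrete gradient (forward differences) as regularizer, fixed alpha.
L = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
alpha = 1e-2

# min_x ||A x - d||^2 + alpha ||L x||^2 as one stacked least-squares
# problem solved iteratively with LSQR.
A_aug = sp.vstack([A, np.sqrt(alpha) * L])
d_aug = np.concatenate([d, np.zeros(n - 1)])
x_reg = lsqr(A_aug, d_aug, atol=1e-10, btol=1e-10, iter_lim=2000)[0]
```

The solution satisfies the regularized normal equations $(\bfA^\top\bfA + \alpha\bfL^\top\bfL)\bfx = \bfA^\top\bfd$ up to the LSQR tolerance; hybrid methods as in [@ChungEtAl2008] would additionally adapt $\alpha$ within the iteration.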
Problems of the form \[eq:optProb\] are often referred to as separable nonlinear least-squares problems. They are a specific class of coupled optimization problems characterized by being nonlinear in one block of variables, $\bfw$, and linear in the other, $\bfx$. Several optimization approaches exist for solving such problems. One option is a fully coupled Gauss–Newton approach that optimizes over both sets of variables simultaneously by solving a single linear system. However, this approach does not exploit the convexity of the problem in the image variables, resulting in small Gauss–Newton steps due to the nonlinearity of the motion parameters [@ChungEtAl2006]. Two methods, which have been shown to be preferable to the fully coupled optimization for solving separable nonlinear least-squares problems, are Block Coordinate Descent, which represents a fully decoupled optimization approach, and Variable Projection, which represents a partially coupled optimization approach. We now provide a brief review of those two methods before introducing LAP.
Block Coordinate Descent (BCD)
------------------------------
BCD represents a fully decoupled approach to solving coupled optimization problems such as ; see, e.g., [@NocedalWright1999]. In BCD, the optimization variables are partitioned into a number of blocks. The method then sequentially optimizes over one block of variables while holding all the others fixed. After one cycle in which all blocks of variables have been optimized, one iteration is completed. The process is then iterated until convergence. For this paper, we separate the variables into the two blocks suggested by the structure of the problem: one for the image variables and another for the motion variables. At the $k$th iteration, we fix $\bfw_k$ and obtain the updated image $\bfx_{k+1}$ by solving $$\label{eq:BCD1}
\bfx_{k+1} = \underset{\bfx \in \mathcal{C}_x}{\argmin} \text{ } \Phi(\bfx, \bfw_k).$$ Afterwards, we fix our new guess for $\bfx_{k+1}$ and optimize over the motion $\bfw$, $$\label{eq:BCD2}
\bfw_{k+1} = \underset{\bfw \in \mathcal{C}_w}{\argmin} \text{ } \Phi(\bfx_{k+1}, \bfw).$$ These two steps constitute a single iteration of the method. We note that BCD is decoupled in the sense that while optimizing over one set of variables, we neglect optimization over the other. This degrades convergence for tightly coupled problems [@NocedalWright1999]. However, BCD has many advantages. It is applicable to general coupled problems including ones that are nonlinear in all blocks of variables. Also, it allows for straightforward implementation of bound constraints and supports various types of regularization. For our numerical experiments, we solve the BCD imaging problem inexactly using a single step of projected Gauss–Newton with bound constraints and various regularizers, which we introduce in Section \[sec:Reg\]. The optimization problem in the second step is small-dimensional and separable, and we perform a single step of Gauss–Newton with a direct solver to compute the search direction.
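To make the two-step cycle concrete, the following is a minimal BCD sketch on a toy separable problem. The exponential model, data, and all variable names are illustrative stand-ins, not the paper's imaging operators: the linear block is a single amplitude $x$ and the nonlinear block a single decay rate $w$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy separable problem: d ~ x * exp(-w * t), linear in x, nonlinear in w.
t = np.linspace(0.0, 1.0, 50)
w_true, x_true = 2.0, 3.0
rng = np.random.default_rng(0)
d = x_true * np.exp(-w_true * t) + 0.01 * rng.standard_normal(t.size)

def phi(x, w):
    r = x * np.exp(-w * t) - d
    return 0.5 * (r @ r)

x, w = 1.0, 1.0  # initial guess
for _ in range(20):
    # BCD step 1: fix w, solve the linear least-squares problem for x.
    a = np.exp(-w * t)
    x = (a @ d) / (a @ a)
    # BCD step 2: fix x, optimize over the (here scalar) nonlinear variable w.
    w = minimize_scalar(lambda v: phi(x, v), bounds=(0.0, 10.0),
                        method="bounded").x
```

Note how each half-step ignores the other block, which is exactly the decoupling that slows BCD on tightly coupled problems.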
Variable Projection (VarPro) {#sub:varpro}
----------------------------
VarPro is frequently used to solve separable nonlinear least-squares problems such as the one in ; see, e.g., [@GolubPereyra2003; @OLearyRust2013]. The key idea in VarPro is to eliminate the linear variables (here, the image) by projecting the problem onto a reduced subspace associated with the nonlinear variables (here, the motion) and then solving the resulting reduced, nonlinear optimization problem. In our problem , eliminating the image variables requires solving a linear least-squares problem involving the matrix $T(\bfw)$ that depends on the current motion parameters. We express the projection by $$\label{eq:VarPro1}
\begin{split}
\bfx(\bfw) &= \underset{\bfx \in \mathcal{C}_x}{\argmin}\quad \Phi(\bfx, \bfw).
\end{split}$$ Substituting this expression in for $\bfx$, we then obtain a reduced dimensional problem in terms of the nonlinear variable $\bfw$, $$\label{eq:VarPro2}
\begin{split}
& \min_{\bfw \in \mathcal{C}_w} \Phi(\bfx(\bfw), \bfw).
\end{split}$$ The reduced problem is solved to recover the motion parameters, noting that by solving at each iteration, we simultaneously recover iterates for the image. Assuming $\mathcal{C}_w = \mathbb{R}^p$ (unconstrained case), we see that the first-order necessary optimality condition in is $$\label{eq:VarProOpt}
0 = \nabla_\bfw \Phi(\bfx(\bfw),\bfw) + \nabla_\bfw \bfx(\bfw) \nabla_\bfx \Phi(\bfx(\bfw), \bfw).$$ Note that in the absence of constraints on $\bfx$ (i.e., $\mathcal{C}_x = \mathbb{F}^n$) the second term on the right hand side of vanishes due to the first-order optimality condition of . However, this is not necessarily the case when $\mathcal{C}_x \neq \mathbb{F}^n$ or when is solved with low accuracy. In those cases $\nabla_\bfx \Phi(\bfx(\bfw),\bfw)$ does not equal $0$ and computing $\nabla_\bfw \bfx(\bfw)$, which can be as hard as solving the original optimization problem , is inevitable. In such situations, neglecting the second term in may considerably degrade the performance of VarPro.
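On the same kind of toy exponential model (again with illustrative names, not the paper's operators), the elimination step and the reduced problem of VarPro take a particularly simple form when the linear block is a single amplitude:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy separable problem: d = x * exp(-w * t); x is the linear block, w the
# nonlinear one. Noise-free data with x_true = 3, w_true = 2.
t = np.linspace(0.0, 1.0, 50)
d = 3.0 * np.exp(-2.0 * t)

def x_of_w(w):
    # Eliminate the linear variable: closed-form least-squares fit for fixed w.
    a = np.exp(-w * t)
    return (a @ d) / (a @ a)

def reduced(w):
    # VarPro's reduced objective: the misfit evaluated at x(w).
    r = x_of_w(w) * np.exp(-w * t) - d
    return 0.5 * (r @ r)

w_hat = minimize_scalar(reduced, bounds=(0.1, 10.0), method="bounded").x
x_hat = x_of_w(w_hat)
```

The reduced problem is one-dimensional here; in the imaging setting, every evaluation of `reduced` hides a large-scale image reconstruction, which is why its accuracy matters.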
Linearize and Project (LAP) {#sec:LAP}
===========================
We now introduce the LAP method for solving the coupled optimization problem . We begin by linearizing the residual in following a standard Gauss–Newton framework. In each iteration, computing the search direction then requires solving a linear system that couples the image and motion parameters. We do this by projecting the coupled problem onto one block of variables. This offers flexibility in terms of regularization and can handle bound constraints on both the motion and image variables. Furthermore, it also allows the user to freely choose which block of variables is eliminated via projection.
We introduce our approach to solving the linear coupled problem for the Gauss–Newton step by breaking the discussion into several subsections. We start with a subsection introducing our strategy of projection onto the image space for an unconstrained problem where $\mathcal{C}_x = \mathbb{F}^n$ and $\mathcal{C}_w = \mathbb{R}^p$. This is followed by a subsection on the various options for image regularization that our projection approach offers. Lastly, we extend LAP to a projected Gauss–Newton framework to allow for element-wise bound constraints on the solution, i.e., when $\mathcal{C}_x$ and $\mathcal{C}_w$ are proper subsets of $\mathbb{F}^n$ and $\mathbb{R}^p$, respectively.
Linearizing the Problem {#sec:Lin}
-----------------------
We begin by considering a Gauss–Newton framework to solve the coupled problem in the unconstrained case, i.e., $\mathcal{C}_x = \mathbb{F}^n$ and $\mathcal{C}_w = \mathbb{R}^p$. To solve for the Gauss–Newton step at each iteration, we first reformulate the problem by linearizing the residual $\bfr(\bfx,\bfw) = \bfK \bfT(\bfw) \bfx - \bfd$ around the current iterate, $(\bfx_0,\bfw_0)$. Denoting this residual as $\bfr_0 = \bfr(\bfx_0, \bfw_0)$, we can write its first-order Taylor approximation as $$\label{eq:linRes}
\bfr (\bfx_0 + \delta\bfx, \bfw_0 + \delta \bfw) \approx \bfr_0 + \begin{bmatrix}
\bfJ_x & \bfJ_w
\end{bmatrix} \begin{bmatrix}
\delta \bfx \\ \delta \bfw
\end{bmatrix},$$ where $\bfJ_x = \nabla_x \bfr_0^\top$ and $ \bfJ_w = \nabla_w \bfr_0^\top$ are the Jacobian operators with respect to the image and motion parameters, respectively. They can be expressed as $$\begin{aligned}
\bfJ_x &= \bfT^\top \bfK^\top, \quad \text{ and }\quad \bfJ_w = {\rm diag}(\nabla_{\bfw_1}\left(T(\bfy(\bfw_1))\bfx\right), \ldots,\nabla_{\bfw_N}\left(T(\bfy(\bfw_N))\bfx\right) )^\top \bfK^\top,
\end{aligned}$$ where each term $\nabla_{\bfw_k}\left( T(\bfy(\bfw_k)) \bfx \right)$ is the gradient of the transformed image at the transformation $\bfy(\bfw_k)$; see [@ChungEtAl2006] for a detailed derivation. Both $\bfJ_x \in \mathbb{F}^{m \times n} $ and $\bfJ_w \in \mathbb{F}^{m \times p}$ are sparse and we recall that in the application at hand $p \ll n$.
After linearizing the residual around the current iterate, we substitute the approximation for the residual term in to get $$\label{eq:optProb2}
\begin{split}
\min_{\delta \bfx, \delta \bfw} & \hat{\Phi}(\delta \bfx,\delta \bfw) = \frac{1}{2} \left\lVert \bfJ_x \delta \bfx + \bfJ_w \delta \bfw + \bfr_0 \right\rVert^2 + \frac{\alpha}{2} \|\bfL(\bfx_0 + \delta \bfx) \|^2 \\
\end{split}$$ By solving this problem we obtain the updates for the image and motion parameters, denoted by $\delta \bfx$ and $\delta \bfw$, respectively. As is based on a linearization it is common practice to solve it only to a low accuracy; see, e.g., [@NocedalWright1999]. Note that solving directly using an iterative method equates to the fully coupled Gauss–Newton approach mentioned in Sec. \[sec:discrete\], which has been observed to converge slowly during optimization. This motivates LAP’s projection-based strategy to solving the linearized problem for the Gauss–Newton step, which we now present.
Projecting the problem onto the image space {#sec:Proj}
-------------------------------------------
Recall that VarPro is restricted to projecting onto the nonlinear variables (in our case, this would be the motion parameters, which requires us to solve one large-scale image reconstruction problem per iteration). With our framework, there is no such restriction; the coupled linear problem can be projected onto either set of variables. Because there are a small number of nonlinear variables, to solve the coupled linear problem in , we propose projecting the problem onto the image space and solving for $\delta \bfx$. To project, we first note that the first-order optimality condition for the linearized problem with respect to $\delta \bfw$ is $$0 = \nabla_{\delta \bfw} \big( \bfJ_x \delta \bfx + \bfJ_w \delta \bfw + \bfr_0 \big)^\top \big( \bfJ_x \delta \bfx + \bfJ_w \delta \bfw + \bfr_0 \big)$$ or equivalently, $$0 = 2 \big( \bfJ_w^{\top} \bfJ_w \delta \bfw + \bfJ_w^{\top} \bfJ_x \delta \bfx + \bfJ_w^{\top} \bfr_0 \big).$$ Solving this condition for $\delta \bfw$, we get $$\label{eq:deltaw}
\delta \bfw = -\big(\bfJ_w^\top \bfJ_w \big)^{-1} \big( \bfJ_w^\top \bfJ_x \delta \bfx + \bfJ_w^\top \bfr_0 \big).$$ We can then substitute this expression for $\delta \bfw$ into and group terms. This projects the problem onto the image space and gives a new problem in terms of $\delta \bfx$, the Jacobians $\bfJ_x$ and $\bfJ_w$, and the residual $\bfr_0$ given by $$\min_{\delta \bfx} \hf \left\lVert \left( \bfI - \bfJ_w \big(\bfJ_w^\top \bfJ_w \big)^{-1} \bfJ_w^\top \right) \bfJ_x \delta \bfx + \left(\bfI - \bfJ_w \big(\bfJ_w^\top \bfJ_w \big)^{-1} \bfJ_w^\top \right)\bfr_0 \right\rVert^2 + \frac{\alpha}{2} \|\bfL (\bfx_0 + \delta \bfx) \|^2$$ or more succinctly, $$\label{eq:probX}
\min_{\delta \bfx} \hf \left\lVert \bfP_{\bfJ_w}^\perp \big( \bfJ_x \delta \bfx + \bfr_0\big) \right\rVert^2 + \frac{\alpha}{2} \|\bfL (\bfx_0 + \delta \bfx) \|^2$$
where $\bfP_{\bfJ_w}^\perp = \bfI - \bfJ_w (\bfJ_w^\top \bfJ_w)^{-1} \bfJ_w^\top$ is a projection onto the orthogonal complement of the column space of $\bfJ_w$. This least-squares problem can be solved using an iterative method with an appropriate right preconditioner [@Bjorck1996; @HestenesStiefel1952; @Saad2003]. In particular, we observe that $$\bfP_{\bfJ_w}^\perp \bfJ_x = \bfJ_x - \bfJ_w \big(\bfJ_w^\top \bfJ_w \big)^{-1} \bfJ_w^\top \bfJ_x$$ is a low rank perturbation of the operator $\bfJ_x$ since the subtracted term $\bfJ_w \big(\bfJ_w^\top \bfJ_w \big)^{-1} \bfJ_w^\top \bfJ_x$ has rank at most $p \ll n$. Hence, a good problem-specific preconditioner for $\bfJ_x$ should be a suitable preconditioner for the projected operator.
It is important to emphasize that in our approach, the matrix $\bfJ_w^\top \bfJ_w \in \mathbb{R}^{p \times p}$ is moderately sized and symmetric positive-definite if $\bfJ_w$ is full rank. Therefore, it is computationally efficient to compute its Cholesky factors once per outer Gauss–Newton iteration and reuse them to invert the matrix when needed. Furthermore, one can use a thin QR factorization of $\bfJ_w$ to compute the Cholesky factors and avoid forming $\bfJ_w^\top \bfJ_w$ explicitly. We use this strategy to increase the efficiency of iteratively solving when using matrix-free implementations, and in practice have not seen rank-deficiency. After solving the preconditioned projected problem for $\delta \bfx$, one obtains the accompanying motion step $\delta \bfw$ via .
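A minimal sketch of applying $\bfP_{\bfJ_w}^\perp$ matrix-free via a thin QR factorization, as described above. The matrices below are random stand-ins for the paper's operators, used only to illustrate the linear algebra:

```python
import numpy as np

rng = np.random.default_rng(1)
m, p = 200, 5                      # p << m, as for the motion Jacobian
Jw = rng.standard_normal((m, p))   # stand-in for a tall, full-rank J_w

# Thin QR of J_w: since col(Q) = col(J_w), the orthogonal projector acts as
# P_perp v = v - Q (Q^T v); (J_w^T J_w)^{-1} is never formed explicitly.
Q, _ = np.linalg.qr(Jw, mode="reduced")

def apply_P_perp(v):
    return v - Q @ (Q.T @ v)

v = rng.standard_normal(m)
u = apply_P_perp(v)
```

Each application costs only two products with the thin factor $Q$, so wrapping the projector around $\bfJ_x$ inside an iterative solver adds little per-iteration cost.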
Regularization {#sec:Reg}
--------------
After linearizing and projecting onto the space of the linear, image variables, we solve the regularized least-squares problem . For our applications, this problem is high-dimensional, and we use an iterative method to approximately solve it with low accuracy. Also, this problem is ill-posed for our numerical tests and thus requires regularization. Here we discuss the types of direct, iterative, and hybrid methods for regularization that can be used in LAP.
We first note that is a Tikhonov least-squares problem. Problems of this form are well-studied in the literature; see, e.g., [@Hansen1998; @EnglEtAl2000; @Vogel2002; @Hansen2010]. The quality of an iterative solution for this type of problem depends on the selection of an appropriate regularization parameter $\alpha$ and a regularization operator $\bfL$. Common choices for $\bfL$ include the discretized gradient operator $\bfL = \nabla_h$ using forward differences and the identity $\bfL = \bfI$. Additionally, problems including non-quadratic regularizers, e.g., total variation [@RUDIN:1992kn] or $p$-norm based regularizers can be addressed by solving a sequence of least-squares problems that include weighted $\ell_2$ regularizers [@RodrWohl2006; @RodrWohl2008; @RodrWohl2007], or more efficiently via a hybrid Krylov subspace approach [@GazzolaNagy2014]. Efficient solvers for optimization problems with quadratic regularizers are also a key ingredient of splitting-based methods, e.g., the Split Bregman method for $\ell_1$ regularized problems [@GoldsteinOsher2009]. Thus, while we restrict ourselves to quadratic regularization terms for the scope of this paper, LAP is suitable for a broader class of regularization options. For more information on regularization parameter selection, see [@Hansen1998; @Vogel2002].
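For the special case $\bfL = \bfI$, Tikhonov regularization can be applied inside LSQR through its damping parameter ($\mathrm{damp} = \sqrt{\alpha}$). The small synthetic problem below, with an illustrative spectrum and noise level of our choosing, shows the effect on an ill-posed system:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)        # fast-decaying spectrum: ill-posed
A = (U * s) @ U.T                        # symmetric test matrix with singular values s
x_true = np.ones(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

x_naive = np.linalg.solve(A, b)          # naive inversion amplifies the noise
# Tikhonov with L = I: min ||A x - b||^2 + alpha ||x||^2 via damp = sqrt(alpha)
x_tik = lsqr(A, b, damp=1e-2, atol=0.0, btol=0.0, iter_lim=500)[0]

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The damped solve filters the components associated with small singular values, trading a modest bias for a large reduction in noise amplification.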
Another approach is iterative regularization [@EnglEtAl2000; @Hansen1998], which aims to stop an iterative method at the iteration that minimizes the reconstruction error. It exploits the fact that the early iterations of iterative methods like Landweber and Krylov subspace methods contain the most important information about the solution and are less affected by noise in the data. In contrast, later iterations of these methods contain less information about the solution and are more noise-affected. This results in a reduction of the reconstruction error for the computed solution during the early iterations of the iterative method followed by its increase in later iterations, a phenomenon known as semi-convergence. However, in practice iterative regularization can be difficult as the reconstruction error at each iterate is unknown and the quality of the solution is sensitive to an accurate choice of stopping iterate.
Hybrid regularization methods represent further alternatives that seek to combine the advantages of direct and iterative regularization, see e.g., [@ChungEtAl2008; @GazzolaNagy2014; @GazzolaNovati2014; @GazzolaEtAl2014] and the references therein. Hybrid methods use Tikhonov regularization with a new regularization parameter $\alpha_k$ at each step of an iterative Krylov subspace method such as LSQR [@PaigeSaunders1982]. Direct regularization for the early iterates of a Krylov subspace method is possible due to the small dimensionality of the Krylov subspace. This makes SVD-based parameter selection methods for choosing $\alpha_k$ computationally feasible at each iteration. The variability of $\alpha_k$ at each iteration helps to stabilize the semi-convergence behavior of the iterative regularization from the Krylov subspace method, making it less sensitive to the choice of stopping iteration. Thus, hybrid methods combine the advantages of both direct and iterative regularization while avoiding the cost of SVD-based parameter selection for the full-dimensional problem and alleviating the sensitivity to stopping criteria which complicates iterative regularization.
Each of the methods in this paper necessitates solving a regularized system in the image variable given by for LAP, for VarPro, and for BCD. For these problems, we use both direct and hybrid regularization when possible to demonstrate the flexibility of the LAP approach. We begin by running LAP with the discrete gradient regularizer, $\bfL = \nabla_h$ and a fixed $\alpha$. This approach is also feasible within the VarPro framework and is straightforward to implement using BCD. To test hybrid regularization, we use the `HyBR` method [@ChungEtAl2008] (via the interface given in the IR Tools package [@IRToolsv1]) for LAP and BCD to automatically choose $\alpha$ using a weighted GCV method. We note that because such hybrid regularization methods are tailored to linear inverse problems, they cannot be used to directly solve the nonlinear VarPro optimization problem .
Optimization {#sec:Optim}
------------
In Sec. \[sec:Lin\] and \[sec:Proj\], we introduced LAP for the case of an unconstrained problem. In practice for imaging problems such as , bounds on the image and/or motion variables are often known, in which case imposing such prior knowledge into the inverse problem is desirable. This section details how the LAP strategy can be coupled with projected Gauss–Newton to impose element-wise bound constraints on the solutions for $\bfx$ and $\bfw$. We introduce projected Gauss–Newton following the description in [@Haber2014]. The method represents a compromise between full Gauss–Newton, which converges quickly when applied to the full problem, and projected gradient descent, which allows for the straightforward implementation of bound constraints. For projected Gauss–Newton, we separate the step updates $\delta \bfx$ and $\delta \bfw$ into two sets: the set of variables for which the bound constraints are inactive (the inactive set) and the set of variables for which the bound constraints are active (the active set). We denote these subsets for the image by $\delta \bfx_{\mathcal{I}} \subset \delta \bfx$ and $\delta \bfx_{\mathcal{A}} \subset \delta \bfx$ where the subscripts $\mathcal{I}$ and $\mathcal{A}$ denote the inactive and active sets, respectively. Identical notation is used for the inactive and active sets for the motion.
On the inactive set, we take the standard Gauss–Newton step at each iteration using the LAP strategy described in Sec. \[sec:Proj\]. This step is computed by solving restricted to the inactive set. Thus, becomes $$\label{eq:inactProbX}
\begin{split}
&\min_{\delta \bfx} \hf \left\lVert \hat{\bfP}_{\bfJ_w}^\perp \big( \hat{\bfJ}_x \delta \bfx_{\mathcal{I}} + \bfr_0\big) \right\rVert^2 + \frac{\alpha}{2} \|\hat{\bfL} (\bfx_{0,\mathcal{I}} + \delta \bfx_{\mathcal{I}}) \|^2,
\end{split}$$ where $\hat{\bfJ}_x$, $\hat{\bfJ}_w$, $\hat{\bfP}_{\bfJ_w}^\perp$, and $\hat{\bfL}$ are $\bfJ_x$, $\bfJ_w$, $\bfP_{\bfJ_w}^\perp$, and $\bfL$ restricted to the inactive set via projection. We then obtain the corresponding motion step on the inactive set by $$\label{eq:inactDeltaW}
\delta \bfw_{\mathcal{I}} = -\big(\hat{\bfJ}_w^\top \hat{\bfJ}_w \big)^{-1} \big( \hat{\bfJ}_w^\top \hat{\bfJ}_x \delta \bfx_{\mathcal{I}} + \hat{\bfJ}_w^\top \bfr_0 \big).$$ This equation is analogous to for the unconstrained problem. Note that LAP’s projection strategy is only used on the inactive set. Thus, the constraints do not affect the optimality condition for the projection to eliminate the $\delta \bfw_{\mathcal{I}}$ block of variables. The projected least-squares problem is also unaffected. Also, for the special case when the upper and lower bounds on all variables are $-\infty$ and $\infty$ and all variables belong to the inactive set at each iteration, the method reduces to standard Gauss–Newton as presented in Sec. \[sec:Proj\].
For the active set, we perform a scaled, projected gradient descent step given by $$\label{eq:projGradDes}
\begin{bmatrix}
\delta \bfx_{\mathcal{A}} \\
\delta \bfw_{\mathcal{A}}
\end{bmatrix} =
-\begin{bmatrix}
\tilde{\bfJ}_x^\top \bfr_0 \\
\tilde{\bfJ}_w^\top \bfr_0
\end{bmatrix}
- \alpha \begin{bmatrix}
\tilde{\bfL}^\top \tilde{\bfL} (\bfx_{0,\mathcal{A}} + \delta \bfx_{\mathcal{A}}) \\
{\bf 0}
\end{bmatrix},$$ where again $\tilde{\bfJ}_x$, $\tilde{\bfJ}_w$, and $\tilde{\bfL}$ represent the projection of $\bfJ_x$, $\bfJ_w$, and $\bfL$ onto the active set. We note that the regularization parameter $\alpha$ should be consistent for both and . When using direct regularization with a fixed $\alpha$, this is immediate. However, for the hybrid regularization approach discussed in Sec. \[sec:Reg\], a choice is required. We set $\alpha$ on the active set at each iteration to be the same as the $\alpha$ adaptively chosen by the hybrid regularization on the inactive set at the same iterate.
The full step for a projected Gauss–Newton iteration is then given by a scaled combination of steps on the inactive and active sets $$\label{eq:projGN}
\begin{bmatrix}
\delta \bfx \\
\delta \bfw
\end{bmatrix}
= \begin{bmatrix}
\delta \bfx_{\mathcal{I}} \\
\delta \bfw_{\mathcal{I}}
\end{bmatrix}
+ \gamma \begin{bmatrix}
\delta \bfx_{\mathcal{A}} \\
\delta \bfw_{\mathcal{A}}
\end{bmatrix}.$$ Here, the parameter $\gamma > 0$ is a weighting parameter to reconcile the difference in scales between the Gauss–Newton and gradient descent steps. To select this parameter, we follow the recommendation of [@Haber2014] and use $$\label{eq:gamma}
\gamma = \frac{\max \left( \| \delta \bfx_{\mathcal{I}} \|_{\infty}, \| \delta \bfw_{\mathcal{I}} \|_{\infty} \right)}{\max \left( \| \delta \bfx_{\mathcal{A}} \|_{\infty}, \| \delta \bfw_{\mathcal{A}} \|_{\infty} \right)}.$$ This choice of $\gamma$ ensures that the projected gradient descent step taken on the active set is no larger than the Gauss–Newton step taken on the inactive set, and we have used it with no ill effects in practice.
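The scaling $\gamma$ can be computed in a few lines. The step vectors below are illustrative stand-ins; in the method proper, each step is nonzero only on its own index set:

```python
import numpy as np

# Illustrative inactive-set (Gauss-Newton) and active-set (gradient) steps.
dx_I = np.array([0.8, -0.5])   # image step on the inactive set
dw_I = np.array([0.05])        # motion step on the inactive set
dx_A = np.array([12.0, -7.0])  # image gradient step on the active set
dw_A = np.array([0.0])         # motion gradient step on the active set

inf = lambda v: np.linalg.norm(v, np.inf)
gamma = max(inf(dx_I), inf(dw_I)) / max(inf(dx_A), inf(dw_A))

# After rescaling, the active-set step is no larger (in the max-norm)
# than the Gauss-Newton step on the inactive set.
scaled = max(inf(gamma * dx_A), inf(gamma * dw_A))
```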
After combining the steps for both the inactive and active sets using , we use a projected Armijo line search for the combined step to ensure the next iterate obeys the problem’s constraints. A standard Armijo line search chooses a step size $0 < \eta \leq 1$ by backtracking from the full Gauss–Newton step ($\eta = 1$) to ensure a reduction of the objective function [@NocedalWright1999]. The projected Armijo line search satisfies a modified Armijo condition given by
$$\label{eq:projArmijo}
\Phi \big( \bP_{\mathcal{C}_\bfx}(\bfx + \eta \delta \bfx), \bP_{\mathcal{C}_\bfw}(\bfw + \eta \delta \bfw)\big) \leq \Phi(\bfx, \bfw) + c \eta \bfQ \big( \nabla \Phi(\bfx,\bfw)\big)^\top \begin{bmatrix} \delta \bfx \\ \delta \bfw \end{bmatrix} .$$
Here, $\bP_{\mathcal{C}_\bfx}$ and $\bP_{\mathcal{C}_\bfw}$ are projections onto the feasible set for the image and motion variables, respectively, and $\bfQ \big( \nabla \Phi(\bfx, \bfw)\big)$ is the projected gradient. The constant $c \in (0,1)$ determines the necessary reduction for the line search; we set $c = \num{e-4}$ as suggested in [@NocedalWright1999]. Under the projections $\bP_{\mathcal{C}_\bfx}$ and $\bP_{\mathcal{C}_\bfw}$, variables that would leave the feasible region are projected onto the boundary and join the active set for the next iteration. However, the projection does not prevent variables from leaving the active set and joining the inactive set. This necessitates updating the inactive and active sets after the line search at each iteration. Also note that in the special, unconstrained case, the line search reverts to the standard Armijo line search with no need for projection; the method then reduces to the standard Gauss–Newton framework, which in most cases removes the need for a line search altogether.
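A simplified sketch of a projected backtracking line search for box constraints. It uses the plain (rather than projected) gradient in the sufficient-decrease test, and all names, bounds, and the toy objective are illustrative:

```python
import numpy as np

def projected_armijo(phi, grad, z, dz, lo, hi, c=1e-4, max_back=30):
    """Backtrack from eta = 1, projecting each trial point onto [lo, hi]."""
    proj = lambda v: np.clip(v, lo, hi)
    f0, slope = phi(z), grad @ dz
    eta = 1.0
    for _ in range(max_back):
        z_new = proj(z + eta * dz)
        if phi(z_new) <= f0 + c * eta * slope:
            return z_new, eta
        eta *= 0.5
    return z, 0.0  # line search failed; keep the current iterate

# Tiny usage: minimize ||z - 2||^2 over the box [0, 1] starting from z = 0.5.
phi = lambda z: np.sum((z - 2.0) ** 2)
grad_z = 2.0 * (np.array([0.5]) - 2.0)
z_new, eta = projected_armijo(phi, grad_z, np.array([0.5]), np.array([3.0]),
                              0.0, 1.0)
```

In the usage example the full step overshoots the box, so the trial point is clipped to the upper bound before the decrease condition is checked.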
Lastly, we discuss the choice of stopping criteria for the projected Gauss–Newton method, which again depends on the type of regularization used. For a fixed regularization parameter $\alpha$, we monitor the relative change of the objective function and the norm of the projected gradient including the regularizer term. The projected gradient is the first-order optimality condition of the constrained problem in and can be computed via a projection [@Beck2014]. The projection is necessary because the bound constraints prevent gradient entries corresponding to variables with minima outside the feasible region from converging to zero. When using hybrid regularization, we must consider the variability of $\alpha_k$ at each Gauss–Newton iteration. Selecting a different $\alpha_k$ parameter at each iteration changes the weight of the regularization term in the objective function and its projected gradient at each iteration. This makes the full objective function value and projected gradient for these methods unreliable stopping criteria. Instead, we monitor both the norm of the difference between the current iterate and the previous iterate and the difference between the previous and current objective function values for the data misfit term of the objective function. We stop the Gauss–Newton method when either of those values drops below a certain threshold, indicating a stagnation of the method. This allows us to monitor the behavior of the Gauss–Newton iteration without being subject to the fluctuations associated with the varying $\alpha_k$.
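The stagnation test used with hybrid regularization can be sketched as follows; the helper name, tolerance, and normalization are our own illustrative choices, with `f_prev` and `f_curr` standing for data-misfit values at consecutive iterates:

```python
import numpy as np

def stagnated(x_prev, x_curr, f_prev, f_curr, tol=1e-6):
    """Stop when the iterate or the data-misfit value stops changing.

    Used in place of the full objective, which is unreliable when the
    regularization parameter alpha_k changes at every iteration.
    """
    dx = np.linalg.norm(x_curr - x_prev) / max(np.linalg.norm(x_prev), 1.0)
    df = abs(f_curr - f_prev) / max(abs(f_prev), 1.0)
    return dx < tol or df < tol
```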
Summary of the Method
---------------------
We summarize the discussion by presenting the complete projected Gauss–Newton algorithm built on the LAP framework. It efficiently solves the coupled imaging problems of interest while offering the flexible options for regularization and bound constraints that motivated our approach. The full algorithm is given in Algorithm \[LAPalgorithm\].
Given $\bfx_0$ and $\bfw_0$:

1. Compute the active and inactive sets, $\mathcal{A}$ and $\mathcal{I}$.
2. Evaluate $\Phi(\bfx_0,\bfw_0)$, $\bfr_0$, $\bfJ_x^{(0)}$, and $\bfJ_w^{(0)}$.
3. Compute the step on the inactive set with LAP using and .
4. Compute the step on the active set using projected gradient descent .
5. Combine the steps using and .
6. Perform the projected Armijo line search satisfying , and update $\bfx_k$, $\bfw_k$.
7. Update the active and inactive sets, $\mathcal{A}$ and $\mathcal{I}$.
8. Evaluate $\Phi(\bfx_k, \bfw_k)$, $\bfr_k$, $\bfJ_x^{(k)}$, and $\bfJ_w^{(k)}$; return to step 3 until a stopping criterion is met.
Numerical Experiments {#sec:experiments}
=====================
We now test LAP for three coupled imaging problems. These are a two-dimensional super-resolution problem, a three-dimensional super-resolution problem, and a linear MRI motion correction problem. For all examples, we compare the results using LAP to those of VarPro and BCD. We look at the quality of the resultant image, the ability of the methods to correctly determine the motion parameters across all data frames, the number of iterations required during optimization, the cost of optimization in terms of matrix-vector multiplications, and the CPU time to reach a solution. To compare the quality of the resultant image and motion, we use relative errors with respect to the known, true solutions. The matrix-vector multiplications of interest are those by the Jacobian operator associated with the linear, imaging variable $\bfJ_x$. Matrix-vector multiplications by this operator are required in the linear least-squares systems for all three methods and are the most computationally expensive operation within the optimization.
Two-Dimensional Super Resolution {#sub:two_dimensional_example}
--------------------------------
Next, we run numerical experiments using a relatively small two-dimensional super resolution problem. To construct a super resolution problem with known ground truth image and motion parameters, we use the 2D MRI dataset provided in FAIR [@Modersitzki2009] (original resolution $128\times 128$) to generate 32 frames of low-resolution test data (resolution $32\times 32$) after applying 2D rigid body transformations with randomly chosen parameters. Gaussian white noise is added using the formula $$\begin{array}{lll}
\bfd_k = \bar{\bfd}_k + \epsilon_k & \text{ where } & \epsilon_k = \mu \frac{\|\bar{\bfd}_k \|_2}{\| \bfn_k \|_2} \bfn_k
\end{array}$$ where $\bar{\bfd_k}$ denotes a noise free low-resolution data frame, $\mu$ is the percentage of noise and $\bfn_k$ is a vector of normally distributed random values with mean $0$ and standard deviation $1$. Our experiments show results with $\mu=1$%, $2$%, and $3$% noise added to the low-resolution data frames. The resulting super-resolution problem is an optimization problem of the form . Here, the imaging operator $\bfK$ is a block diagonal matrix composed of $32$ identical down-sampling matrices $K = K_k$ along the diagonal, which relate the high-resolution image to the lower resolution one via block averaging. The total number of parameters in the optimization is $16,528$ corresponding to $16,384$ for the image and $96$ for the motion.
As mentioned in [@ChungEtAl2006], the choice of initial guess is crucial in super-resolution problems, so we perform rigid registration of the low-resolution data onto the first volume (resulting in 31 rigid registration problems) to obtain a starting guess for the motion parameters. The resulting relative error of the parameters is around $2$%. Using these parameters, we solve the linear image reconstruction problem to obtain a starting guess for the image with around $5$% relative error.
We then solve the super-resolution problem using LAP, VarPro, and BCD. For all three approaches, we compare two regularization strategies. All three methods are run using the gradient operator, $\bfL = \nabla_h$ with a fixed regularization parameter $\alpha = 0.01$. LAP and BCD are then run with the Golub-Kahan hybrid regularization approach detailed in Section \[sec:Reg\] (denoted as `HyBR` in tables and figures). As previously mentioned, `HyBR` cannot be applied directly to the VarPro optimization problem , so for VarPro we use a fixed $\alpha = 0.01$ and the identity as our second regularizer, $\bfL = \bfI$. To allow for comparison, we use the same regularization parameter as in [@ChungEtAl2006], which may not be optimal for any of the methods presented. For more rigorous selection criteria, we refer the reader to the methods mentioned in Sec. \[sec:Reg\]. For LAP and BCD, we add element-wise bound constraints on the image space in the range $[0,1]$ for both choices of regularizer, with both bounds active in practice. The number of active bounds varies for different noise levels and realizations of the problem but can include as many as $30$ – $35$% of the image variables for both LAP and BCD. VarPro is run without bound constraints. Neither constraints nor regularization are imposed on the motion parameters for the three methods.
All three methods require solving two different types of linear systems, one associated with the image variables and another with the motion variables. LAP solves these systems to determine the Gauss–Newton step. We use LSQR [@PaigeSaunders1982] with a stopping tolerance of to solve for both regularization approaches. The motion step is computed using the Cholesky factors of $\bfJ_w^\top \bfJ_w$. VarPro requires solving the linear system within each function call. For both choices of regularization, we solved this system by running LSQR for a fixed number of $20$ iterations. Recall that this system must be solved to a higher accuracy than the similarly sized systems in LAP and BCD to maintain the required accuracy in the gradient; see Sec. \[sub:varpro\]. Gauss–Newton on the reduced dimensional VarPro function then requires solving the reduced linear system in the motion parameters, which we solve using Cholesky factorization on the normal equations. For BCD, coordinate descent requires alternating solutions for the linear system in the image, , and a nonlinear system in the motion parameters, . For both of these, we take a single projected Gauss–Newton step. For the image, this is solved using LSQR with a stopping tolerance of using `MATLAB`’s `lsqr` function for direct regularization and `HyBR`’s LSQR for hybrid regularization. LSQR with this tolerance is also used for the corresponding problems in the 3D super-resolution and MRI motion correction examples in Sec. \[sub:three\_dimensional\_example\] and \[sub:MRI\_motion\_example\]. For the motion, we use Cholesky factorization on the normal equations. During these solves, we track computational costs for the number of matrix-vector multiplications by the Jacobian operator associated with the image, $\bfJ_x$. 
For LAP and BCD, these multiplications are required when solving for the image step within each Gauss–Newton iteration, while for VarPro, they are necessary for the least-squares solve within each objective function evaluation.
We compare results for the methods using the relative errors of the resultant image and motion parameters. We separate relative errors for the image and motion. Plots of these errors against iteration can be seen in Fig. \[fig:2D\_SuperRes\_RE\] for the problem with $2$% added noise. The corresponding resulting image for LAP for the $2$% error case can be seen in Fig. \[fig:2D\_SuperRes\_Images\]. Lastly, a table of relevant values including the average number of iterations, minimum errors for image and motion, matrix-vector multiplications, and CPU timings for the methods taken over $10$ different realizations of the problem for all three noise levels is in Table \[tab:2D\_SuperRes\_Table\].
For direct regularization using the discrete gradient operator, the solutions for all three methods are comparable in terms of the relative error of the motion, with LAP and BCD slightly outperforming VarPro in the relative error of the recovered images. This is a direct result of the element-wise bound constraints that these methods impose on the resulting images. Furthermore, these solutions are superior to those obtained by all three methods with hybrid regularization or the identity operator, suggesting that the discrete gradient is a more appropriate regularizer for this problem. LAP with the discrete gradient operator recovers the most accurate reconstructed image of the three methods and achieves better or comparable recovery of the motion parameters. This is observable in the relative error plots for the $2$% added noise case in Fig. \[fig:2D\_SuperRes\_RE\], and for all three noise levels in Table \[tab:2D\_SuperRes\_Table\]. The relative error plots also show that LAP tends to recover the correct motion parameters earlier in the Gauss–Newton iterations than either BCD or VarPro. In terms of cost, both LAP and BCD require significantly less time and fewer matrix-vector multiplications per iteration than VarPro, resulting in a cheaper overall optimization. However, while BCD is also relatively cheap in these terms, LAP outperforms it in solution quality. Overall, LAP compares favorably to both VarPro and BCD for this example in terms of solution quality and cost.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![The images from left to right show a montage of data frames, the initial guess $\bfx_{initial}$, and the reconstructed image $\bfx_{LAP}$ for LAP using the discrete gradient regularizer for the 2D super resolution problem with 2% noise.[]{data-label="fig:2D_SuperRes_Images"}](2D_SuperRes_Pics_d.pdf "fig:"){width=".95\textwidth"}
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![These plots show the relative errors for both the reconstructed image and the motion parameters for the 2D super resolution problem with $2$% added noise. We note that methods using the discrete gradient regularizer reach lower minima for the image in this problem, and LAP with the discrete gradient regularizer outperforms both VarPro and BCD in recovering the motion parameters in the early iterations of the method for all noise levels tested.[]{data-label="fig:2D_SuperRes_RE"}](2D_SuperRes_Plots_b.pdf){width="\textwidth"}
-- --------------------- -------------- ------------------ ------------------ -------------- --------------
Iter. Rel. Err. $\bfx$ Rel. Err. $\bfw$ MatVecs. Time(s)
LAP + `HyBR` [**11.1**]{} 5.34e-2 1.82e-2 93.4 [**14.3**]{}
LAP + $\nabla_h$ 13.1 [**3.76e-2**]{} [**1.78e-2**]{} [**62.2**]{} 15.4
VarPro + $\bfI$ 25.7 8.60e-2 1.84e-2 554.0 60.1
VarPro + $\nabla_h$ 21.1 4.12e-2 1.81e-2 462.0 56.0
BCD + `HyBR` 12.0 5.93e-2 2.03e-2 68.1 22.9
BCD + $\nabla_h$ 29.7 3.95e-2 1.79e-2 89.6 55.4
Iter. Rel. Err. $\bfx$ Rel. Err. $\bfw$ MatVecs. Time(s)
LAP + `HyBR` [**12.0**]{} 6.38e-2 1.86e-2 77.0 [**12.5**]{}
LAP + $\nabla_h$ 15.9 [**4.38e-2**]{} [**1.81e-2**]{} 66.4 15.9
VarPro + $\bfI$ 29.6 1.03e-1 1.86e-2 632.0 64.7
VarPro + $\nabla_h$ 21.4 5.05e-2 1.85e-2 468.0 54.9
BCD + `HyBR` 13.4 7.10e-2 2.15e-2 [**53.1**]{} 25.0
BCD + $\nabla_h$ 29.9 4.55e-2 1.82e-2 91.7 57.0
Iter. Rel. Err. $\bfx$ Rel. Err. $\bfw$ MatVecs. Time(s)
LAP + `HyBR` [**12.3**]{} 7.54e-2 2.28e-2 68.6 [**16.3**]{}
LAP + $\nabla_h$ 18.5 [**5.34e-2**]{} 2.22e-2 74.2 22.0
VarPro + $\bfI$ 34.4 1.27e-1 2.32e-2 728.0 86.7
VarPro + $\nabla_h$ 23.9 6.17e-2 [**2.13e-2**]{} 518.0 72.5
BCD + `HyBR` 15.0 7.86e-2 2.32e-2 [**52.9**]{} 31.9
BCD + $\nabla_h$ 29.7 5.42e-2 2.24e-2 83.2 63.9
-- --------------------- -------------- ------------------ ------------------ -------------- --------------
: This table shows data for the 2D super-resolution problem for multiple levels of added Gaussian noise. The columns from left to right give the stopping iteration, the relative error of the solution image, the relative error of the solution motion, the number of matrix-vector multiplications during optimization, and the time in seconds measured using `tic` and `toc` in `MATLAB`. All values are averages taken over $10$ instances with different motion parameters, initial guesses, and noise realizations. The best result in each column is bold-faced.[]{data-label="tab:2D_SuperRes_Table"}
Three-Dimensional Super Resolution {#sub:three_dimensional_example}
----------------------------------
The next problem is a larger three-dimensional super-resolution problem. Again, we use a 3D MRI dataset provided in `FAIR` [@Modersitzki2009] to construct a super-resolution problem with a known ground truth image and motion parameters. The ground truth image (resolution $160 \times 96 \times 144$) is used to generate 128 frames of low-resolution test data (resolution $40 \times 24 \times 32$). Each frame of data is shifted and rotated by a random 3D rigid body transformation, after which it is downsampled using block averaging. Lastly, Gaussian white noise is added to each low-resolution data frame in the same way as for the two-dimensional super-resolution problem, and we run the problem for $\mu = 1$%, $2$%, and $3$% added noise per data frame. The resulting optimization problem has $2,212,608$ unknowns, $2,211,840$ for the image and $768$ for the motion parameters. The data has dimension $5,898,240$. The formulation of the problem is identical to that of the two-dimensional super-resolution problem with appropriate corrections for the change in dimension. The imaging operator $\bfK$ in the three-dimensional example is block diagonal with $128$ identical down-sampling matrices $K$ which relate the high-resolution image to the low-resolution data by block averaging.
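The block-averaging downsampling used to generate each low-resolution frame can be sketched as below. This is a hypothetical helper (the paper's operator $K$ applies an analogous map to each frame after its rigid transformation); the factors are assumed to divide the volume dimensions exactly.

```python
import numpy as np

def block_average(x, factors):
    """Downsample a 3D volume by averaging non-overlapping blocks.

    factors = (fx, fy, fz) must divide the corresponding dimensions of x.
    """
    fx, fy, fz = factors
    nx, ny, nz = x.shape
    return x.reshape(nx // fx, fx, ny // fy, fy, nz // fz, fz).mean(axis=(1, 3, 5))

# Toy volume built from constant 2x2x2 blocks: averaging recovers each
# block's value, so the result is the original 2x2x2 array of values.
vol = np.arange(8.0).reshape(2, 2, 2).repeat(2, 0).repeat(2, 1).repeat(2, 2)
low = block_average(vol, (2, 2, 2))
print(low.shape)  # (2, 2, 2)
```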
The initial guess for the three-dimensional problem is generated using the same strategy as in the two-dimensional case. To generate a guess for the motion parameters, we register all frames onto the first frame (thus solving $127$ rigid registration problems). This gives an initial guess for the motion with approximately $3$% relative error. Using this initial guess, we then solve a linear least-squares problem to obtain an initial guess for the image with a relative error of around $13$%. We note that this initial guess is farther from the true solution than the one obtained for the two-dimensional super-resolution example, which may impact the quality of the solution obtainable for examples with large amounts of noise.
For this example, we test LAP, VarPro, and BCD using only the discrete gradient regularizer due to its better performance in the 2D example. We use identical parameters to the 2D problem for the regularization parameter, bound constraints on the image variables, and accuracy of the iterative LSQR. The lone difference in the problem setup from the 2D case is in solving the linear system for VarPro. Instead of the $20$ fixed LSQR iterations from the 2D case, we run a fixed number of $50$ iterations to achieve the required accuracy for the larger system. For all three methods, the reduced system in the motion is solved using Cholesky factorization on the normal equations.
The resulting solution image for LAP and the relative error plots for all three methods for the images and motion parameters can be found in Figs. \[fig:3D\_SuperRes\_Images\] and \[fig:3D\_SuperRes\_RE\], respectively. As in the 2D super-resolution example, LAP converges to the motion parameters faster in the early iterations than BCD and VarPro and reaches lower relative errors for both the recovered image and the motion parameters. Again, VarPro’s iterations are far more expensive in terms of matrix-vector multiplications and CPU time, as seen in Table \[tab:3D\_SuperRes\_Table\]. BCD performs similarly to LAP in terms of the reconstructed image, but it recovers the motion parameters less accurately and is slightly more expensive in CPU time due to a higher number of function calls.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![This figure shows 3D volume renderings of the reconstructed images using LAP for the 3D super resolution problem with 2% noise. The volume on the left shows the initial guess, $\bfx_{initial}$. On the right is the reconstructed solution using LAP, $\bfx_{LAP}$ with the discrete gradient regularizer.[]{data-label="fig:3D_SuperRes_Images"}](SR3D_results.pdf "fig:"){width=".95\textwidth"}
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![This figure plots the relative errors for both the reconstructed image and the motion parameters for the 3D super resolution for $2$% added noise. LAP succeeds in capturing the correct motion parameters in fewer iterations than VarPro and BCD and in recovering images of comparable quality.[]{data-label="fig:3D_SuperRes_RE"}](3D_SuperRes_Plots_b.pdf){width="\textwidth"}
-- --------------------- -------------- ------------------ ------------------ -------------- ----------------
Iter. Rel. Err. $\bfx$ Rel. Err. $\bfw$ MatVecs. Time(s)
LAP + $\nabla_h$ 14.1 [**6.23e-2**]{} [**8.65e-4**]{} [**22.1**]{} [**2.30e3**]{}
VarPro + $\nabla_h$ 16.4 7.21e-2 9.55e-4 1.79e3 1.23e4
BCD + $\nabla_h$ [**11.3**]{} 6.25e-2 1.40e-3 25.7 4.26e3
Iter. Rel. Err. $\bfx$ Rel. Err. $\bfw$ MatVecs. Time(s)
LAP + $\nabla_h$ 15.3 [**6.35e-2**]{} [**9.27e-4**]{} [**21.0**]{} [**2.15e3**]{}
VarPro + $\nabla_h$ 18.0 7.57e-2 1.00e-3 1.95e3 1.22e4
BCD + $\nabla_h$ [**12.2**]{} 6.37e-2 1.59e-3 26.2 3.75e3
Iter. Rel. Err. $\bfx$ Rel. Err. $\bfw$ MatVecs. Time(s)
LAP + $\nabla_h$ 16.2 [**6.48e-2**]{} [**8.97e-4**]{} [**20.3**]{} [**2.94e3**]{}
VarPro + $\nabla_h$ 17.2 8.08e-2 9.52e-4 1.88e3 1.07e4
BCD + $\nabla_h$ [**11.7**]{} 6.50e-2 1.53e-3 25.2 4.21e3
-- --------------------- -------------- ------------------ ------------------ -------------- ----------------
: This table presents data for the solution of the 3D super resolution for multiple values of added Gaussian noise. The columns from left to right give the stopping iteration, relative error of the solution image, relative error of the solution motion, number of matrix-vector multiplies during optimization, and time in seconds using `tic` and `toc` in `MATLAB`. All values are averages taken from $10$ separate problems with different motion parameters, initial guesses, and noise realizations.[]{data-label="tab:3D_SuperRes_Table"}
MRI Motion Correction {#sub:MRI_motion_example}
---------------------
The final test problem is a two-dimensional MRI motion correction problem. The goal in this MRI application is to reconstruct a complex-valued MRI image from its Fourier coefficients that are acquired block-wise in a sequence of measurements. Since the measurement process typically requires several seconds or minutes, in some cases the object being imaged moves substantially. Motion renders the Fourier samples inconsistent and — without correction — results in artifacts and blurring in the reconstructed MRI image. To correct for this, one can instead view the collected MRI data as a set of distinct, complex-valued Fourier samplings, each measuring some portion of the Fourier domain and subject to some unknown motion parameters. The resulting problem of recovering the unknown motion parameters for each Fourier sampling and combining them to obtain a single motion-corrected MRI image fits into the coupled imaging framework presented in this paper.
The forward model for this problem was presented by Batchelor [*et al.*]{} [@BatchelorEtAl2005]. In their formulation, our imaging operator $\bfK$ is again block diagonal with diagonal blocks $K_k$ for $k = 1,2,\ldots,N$ given by $$K_k = \bfA_k \mathcal{F} \bfS.$$ Here, $\bfS$ is a complex-valued block rectangular matrix containing the given coil sensitivities of the MRI machine, $\mathcal{F}$ is block diagonal with each block a two-dimensional Fourier transform (2D FFT), and $\bfA_k$ is a block diagonal matrix with rectangular blocks containing selected rows of the identity corresponding to the Fourier sampling for the $k$th data observation. As with the other examples, the imaging operator $\bfK$ is multiplied on the right by the block rectangular matrix $\bfT$ with $T(\bfy(\bfw_k))$ blocks modeling the motion parameters of each Fourier sampling. The cost of matrix-vector multiplications by this imaging operator is dominated by the 2D FFTs in the block $\mathcal{F}$: for $32$ receiver coils, a single matrix-vector multiplication requires $32N$ 2D FFTs of size $128 \times 128$, where $N$ is the number of Fourier samplings in the data set. The presence of these FFT matrices prevents us from explicitly storing the matrix and necessitates passing it as a function call for all of the methods; the same applies to the Jacobian with respect to the image, $\bfJ_x$. However, because it is relatively small in size, $\bfJ_w$ can still be computed and stored explicitly.
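The action of a single block $K_k = \bfA_k \mathcal{F} \bfS$ can be sketched as follows. This is an illustrative implementation, not the paper's code; the function name, the row-sampling pattern, and the toy sizes are assumptions, and $\bfA_k$ is modeled as selecting whole k-space rows.

```python
import numpy as np

def apply_K_k(x, S, rows_k):
    """Sketch of one block K_k = A_k * F * S of the MRI forward model.

    x      : complex image, shape (n, n)
    S      : given coil sensitivities, shape (ncoils, n, n)
    rows_k : indices of the k-space rows sampled in shot k
    Returns sampled Fourier data of shape (ncoils, len(rows_k), n).
    """
    coil_images = S * x[None, :, :]                   # apply sensitivities S
    kspace = np.fft.fft2(coil_images, axes=(-2, -1))  # 2D FFT per coil
    return kspace[:, rows_k, :]                       # row selection A_k

n, ncoils = 8, 4
rng = np.random.default_rng(1)
x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = rng.standard_normal((ncoils, n, n)) + 1j * rng.standard_normal((ncoils, n, n))
d_k = apply_K_k(x, S, rows_k=[0, 2])
print(d_k.shape)  # (4, 2, 8)
```

Since each application involves one FFT per coil, stacking $N$ such blocks reproduces the $32N$-FFT cost per matrix-vector product noted above.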
We use the dataset provided in the `alignedSENSE` package [@CorderoEtAl2016] to set up an MRI motion correction with a known true image and known motion parameters. To this end, we generate noisy data by using the forward problem . The ground truth image with resolution $128 \times 128$ is rotated and shifted by a random 2D rigid body transformation. The motion affected image is then observed on $32$ sensors with known sensitivities. Each of these $32$ observations is then sampled in Fourier space. For our problem, each sampling corresponds to $1/16$ of the Fourier domain, meaning that $N = 16$ samplings (each with unknown motion parameters) are needed to have a full sampling of the whole space. We sample using a Cartesian parallel two-dimensional sampling pattern [@CorderoEtAl2016]. Noise is added to the data using the formula $$\begin{array}{lll}
\bfd = \bar{\bfd} + \epsilon & \text{ where } & \epsilon = \mu \frac{\|\bar{\bfd} \|_\infty}{\| \bfn \|_2} \bfn.
\end{array}$$ Here, $\bar{\bfd}$ is the noise free data, $\mu$ is the percentage of noise added, and $\bfn$ is a complex-valued vector where the entries of $\text{Re}(\bfn)$ and $\text{Im}(\bfn)$ are normally-distributed random numbers with mean $0$ and standard deviation $1$. We run the problem for $\mu = 5$%, $10$%, and $15$% added noise. The resulting data has dimension $\textstyle{\frac{ 128 \times 128 \times 32}{16}} \times 16 = 524,288$. The MRI motion correction optimization problem fits within the framework in Eq. \[eq:optProb\] and has $16,432$ unknowns corresponding to $16,384$ for the image and $48$ for the motion.
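The noise model above translates directly into code. A minimal sketch (the helper name `add_noise` is ours) that scales a complex Gaussian draw so the noise 2-norm equals $\mu \|\bar{\bfd}\|_\infty$:

```python
import numpy as np

def add_noise(d_bar, mu, rng):
    """Add complex noise per the model:
    eps = mu * ||d_bar||_inf / ||n||_2 * n, with Re(n), Im(n) ~ N(0, 1)."""
    n = rng.standard_normal(d_bar.shape) + 1j * rng.standard_normal(d_bar.shape)
    eps = mu * np.max(np.abs(d_bar)) / np.linalg.norm(n) * n
    return d_bar + eps

rng = np.random.default_rng(0)
d_bar = rng.standard_normal(100) + 1j * rng.standard_normal(100)  # toy clean data
d = add_noise(d_bar, mu=0.10, rng=rng)  # 10% added noise
```

By construction, $\|\bfd - \bar{\bfd}\|_2 = \mu \|\bar{\bfd}\|_\infty$ regardless of the realization of $\bfn$.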
As with the previous examples, we solve the MRI motion correction problem using LAP, VarPro, and BCD. The setup and parameters are similar to those of the 2D super-resolution problem. We use the discrete gradient and `HyBR` as regularization options for LAP and BCD, and for VarPro we regularize using the discrete gradient operator and the identity operator. For the non-hybrid regularizers, we fix $\alpha = 0.01$. The least-squares problems in the image variables for LAP and BCD are solved using LSQR with a tolerance identical to the super-resolution examples. For VarPro, we use LSQR with a tolerance of $\num{e-8}$ or a maximum of $100$ iterations to maintain accuracy in the gradient. Cholesky factorization on the normal equations is used for the lower-dimensional solves in the motion. No bound constraints are applied for any of the methods in this example because element-wise bound constraints on the real or imaginary parts of the complex-valued image variables would affect the angle (or phase) of the solution, which is undesirable.
For an initial guess for the motion parameters, we start with $\bfw = 0$ (corresponding to a relative error of $100$%). Using this initialization, we solve a linear least-squares problem to get an initial guess for the image. For $10$% added Gaussian noise in the data, the initial guess for the image has a relative error of around $35$%. We show the initial guess in Fig. \[fig:MoCoMRI\_Images\].
LAP, VarPro, and BCD provide fairly accurate reconstructions of both the image and the motion parameters for quite large noise levels using either `HyBR` or the identity as a regularizer; see Figs. \[fig:MoCoMRI\_Images\] and \[fig:MoCoMRI\_Plots\]. This is likely because the problem is not severely ill-posed and is highly over-determined ($32$ sensor readings for each point in Fourier space). For this example, the hybrid regularization approach for LAP and BCD produces the best results, with LAP requiring considerably fewer iterations. We remark that the best regularization for this problem differs from that of the super-resolution problems, which shows the importance of the flexibility that LAP offers for regularizing the image. The comparative speed of LAP is observable in the relative error plots for the problem with $10$% noise and is further evidenced in Table \[MoCoMRI\_Table\] for all noise levels over $10$ separate realizations of the problem. For the gradient-based regularizer, none of the three methods recovers the motion parameters accurately. We also note that the number of iterations and their cost is an important consideration for this problem. Because the initial guess is far from the solution, this problem requires more iterations than the super-resolution examples. Additionally, the high number of 2D FFTs required for a single matrix-vector multiplication makes multiplications by the Jacobians $\bfJ_x$ and $\bfJ_w$ expensive. Table \[MoCoMRI\_Table\] shows that LAP outperforms VarPro and BCD for both choices of regularizer by requiring fewer, cheaper iterations in terms of both time and matrix-vector multiplications. The difference in cost is most dramatic when compared with VarPro, again due to the large number of FFTs required for a single matrix-vector multiplication and the large number of such multiplications required within each VarPro function call.
For BCD and LAP, the number of matrix-vector multiplications is similar, but BCD requires more iterations for convergence. Overall, we see that LAP is a better choice for this problem and that it provides better reconstructions of both the image and motion in fewer, cheaper iterations.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![This figure shows a single inverted data sampling (*left*), the initial guess for the image (*center*), and reconstructed image (*right*) for the MRI motion correction problem with 10% noise. The solution image shown is for LAP using the `HyBR` regularizer. Note that the MRI images are the modulus of the complex-valued images recovered.[]{data-label="fig:MoCoMRI_Images"}](MoCoMRI_Pics_d.pdf "fig:"){width=".95\textwidth"}
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![This figure shows the relative errors for both the reconstructed image and the motion parameters for the MRI motion correction problem for three levels of noise. LAP with hybrid regularization achieves better reconstructions of the image and motion parameters for fewer iterations than either VarPro and BCD.[]{data-label="fig:MoCoMRI_Plots"}](MoCoMRI_Plots_b.pdf){width="\textwidth"}
-- --------------------- -------------- ------------------ ------------------ ---------------- ----------------
Iter. Rel. Err. $\bfx$ Rel. Err. $\bfw$ MatVecs. Time(s)
LAP + `HyBR` 76.6 [**3.55e-3**]{} [**1.59e-4**]{} [**3.18e2**]{} 5.86e2
LAP + $\nabla_h$ [**76.2**]{} 3.74e-3 1.67e-4 3.62e2 [**4.32e2**]{}
VarPro + $\bfI$ 116.0 7.93e-2 4.32e-2 2.19e4 1.20e4
VarPro + $\nabla_h$ 115.9 7.92e-2 4.33e-2 2.19e4 1.52e4
BCD + `HyBR` 128.6 4.35e-2 2.27e-2 4.03e2 1.01e3
BCD + $\nabla_h$ 116.6 7.83e-2 4.20e-2 4.27e2 8.59e2
Iter. Rel. Err. $\bfx$ Rel. Err. $\bfw$ MatVecs. Time(s)
LAP + `HyBR` 76.8 [**6.56e-3**]{} [**3.25e-4**]{} [**3.11e2**]{} 6.52e2
LAP + $\nabla_h$ [**76.1**]{} 6.88e-3 4.15e-4 3.55e2 [**5.19e2**]{}
VarPro + $\bfI$ 116.0 8.08e-2 4.33e-2 2.19e4 1.12e4
VarPro + $\nabla_h$ 116.0 8.08e-2 4.32e-2 2.19e4 1.27e4
BCD + `HyBR` 128.2 4.53e-2 2.21e-2 3.89e2 1.24e3
BCD + $\nabla_h$ 116.0 8.02e-2 4.21e-2 4.12e2 1.05e3
Iter. Rel. Err. $\bfx$ Rel. Err. $\bfw$ MatVecs. Time(s)
LAP + `HyBR` 77.3 [**9.49e-3**]{} [**4.69e-2**]{} [**3.07e2**]{} 5.02e2
LAP + $\nabla_h$ [**75.0**]{} 1.61e-2 2.92e-2 3.40e2 [**3.61e2**]{}
VarPro + $\bfI$ 115.8 8.29e-2 4.36e-2 2.18e4 9.98e3
VarPro + $\nabla_h$ 115.7 8.28e-2 4.35e-2 2.18e4 1.27e4
BCD + `HyBR` 127.0 4.80e-2 2.21e-2 3.47e2 1.06e3
BCD + $\nabla_h$ 129.0 8.20e-2 4.17e-2 3.99e2 8.17e2
-- --------------------- -------------- ------------------ ------------------ ---------------- ----------------
: This table shows the results of LAP, VarPro, and BCD for solving the MRI motion correction example for multiple regularizers and varying levels of added noise. Averaged over $10$ realizations of the problem, the columns are stopping iteration, the relative error of the solution image, the relative error of the solution motion, number of matrix-vector multiplies during optimization, and time in seconds using `tic` and `toc` in `MATLAB`. LAP outperforms the other methods in terms of solution quality, computational cost, and CPU time.[]{data-label="MoCoMRI_Table"}
Summary and Conclusion {#sec:summary}
======================
We introduce a new method, called Linearize And Project (LAP), for solving large-scale inverse problems with coupled blocks of variables in a projected Gauss–Newton framework. Problems with these characteristics arise frequently in applications, and we exemplify and motivate LAP using joint reconstruction problems in imaging that aim at estimating image and motion parameters from a number of noisy, indirect, and motion-affected measurements. By design, LAP is most attractive when the optimization problem with respect to one block of variables is comparably easy to solve. LAP is very flexible in the sense that it supports different regularization strategies, simplifies imposing equality and inequality constraints on both blocks of variables, and does not require (as in the case of VarPro) that the forward problem depend linearly on one set of variables. In our numerical experiments using four separable nonlinear least-squares problems, we showed that LAP is competitive with, and often superior to, VarPro and BCD with respect to accuracy and efficiency.
LAP is as general as alternating minimization methods such as Block Coordinate Descent. However, while BCD ignores the coupling between the variable blocks when computing updates, LAP takes it into consideration. Thus, in our experiments LAP requires considerably fewer iterations and matrix-vector multiplications, and less CPU time, than BCD to achieve similar accuracy.
LAP is not limited to separable nonlinear least-squares problems and is thus more broadly applicable than VarPro. Since LAP projects after linearization, it provides the opportunity to freely choose which block of variables is eliminated. For example, in our numerical examples in Secs. \[sub:two\_dimensional\_example\]–\[sub:MRI\_motion\_example\], LAP eliminates the parameters associated with the motion (which are of comparably small dimension) when computing the search direction in the projected Gauss–Newton scheme. Due to the robustness of Gauss–Newton methods, it suffices to solve the imaging problem iteratively to low accuracy. By contrast, VarPro eliminates the image variables, which enter the residual linearly. While this leads to a small-dimensional nonlinear optimization problem for the motion, each iteration requires solving the imaging problem to relatively high accuracy to obtain reliable gradient information; see also Sec. \[sub:varpro\]. This can be problematic for large and ill-posed imaging problems, and thus LAP can in some cases reduce runtimes by an order of magnitude; see Tables \[tab:3D\_SuperRes\_Table\] and \[MoCoMRI\_Table\].
It is worth noting that the key step of LAP, the projection onto one variable block when solving the approximate Newton system, is a block elimination, and the reduced system corresponds to the Schur complement.
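As a sanity check of this equivalence, a small numpy demo (illustrative sizes; the block system mimics Gauss–Newton normal equations for an image/motion split but is not taken from the paper) eliminates the "image" block of a symmetric positive definite model system and recovers the same step as a direct solve of the full coupled system:

```python
import numpy as np

rng = np.random.default_rng(2)
# SPD block system [[A, B], [B.T, C]] [dx; dw] = [f; g].
nx, nw = 12, 3
M = rng.standard_normal((nx + nw, nx + nw))
H = M @ M.T + (nx + nw) * np.eye(nx + nw)  # make it SPD
A, B, C = H[:nx, :nx], H[:nx, nx:], H[nx:, nx:]
f, g = rng.standard_normal(nx), rng.standard_normal(nw)

# Eliminate the image block: reduced (Schur-complement) system in dw.
Ainv_B = np.linalg.solve(A, B)
Ainv_f = np.linalg.solve(A, f)
S = C - B.T @ Ainv_B                       # Schur complement of A in H
dw = np.linalg.solve(S, g - B.T @ Ainv_f)  # motion step
dx = Ainv_f - Ainv_B @ dw                  # back-substituted image step

# Agrees with solving the full coupled system directly.
full = np.linalg.solve(H, np.concatenate([f, g]))
print(np.allclose(np.concatenate([dx, dw]), full))  # True
```

In LAP, the inner solves with $A$ are carried out inexactly by an iterative method rather than by a direct factorization as in this small demo.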
To allow for comparison with VarPro, we focused on separable nonlinear least-squares problems. In future work, we will study the performance of LAP on general coupled nonlinear optimization problems.
ACKNOWLEDGEMENT {#sec:acknowledgements}
===============
This work is supported by Emory’s University Research Committee and National Science Foundation (NSF) awards DMS 1522760 and DMS 1522599.
[^1]: Department of Mathematics and Computer Science, Emory University, Atlanta, Georgia, USA. ([{jlherri,nagy,lruthotto}@emory.edu]{})
Optical forces are useful in the manipulation of ultra-fine particles and mesoscopic systems, and the field has advanced rapidly over the last three decades. The best-known types of optical force are the radiation pressure and the optical gradient force. There is also an inter-particle optical force, induced by the multiple scattering of light.[@Lin:1; @Burns:1989; @Tatarkova:2002; @and:2002; @Antonoyiannakis:1997; @Singer:2003; @Chaumet:2001] We present here an interesting type of resonant inter-particle force. We will see that tuning the incident light frequency to a Morphology Dependent Resonance (MDR) of a cluster of transparent microspheres induces a strong resonant optical force (MDR-force) between the spheres. The MDR of a pair of spheres has been observed in fluorescence[@Mukaiyama:1999; @Rakovich:2004] and lasing[@Hara:2003] experiments. Here we study theoretically the force induced by such resonances. We will see that the MDR-induced force, which derives from the coherent coupling of the whispering gallery modes (WGM’s), is a strong, short-ranged force that can be attractive or repulsive depending on whether the bonding mode (BM) or the anti-bonding mode (ABM) is excited. The strength of the optical forces can be enhanced by orders of magnitude when an MDR is excited. As microsphere cavities are emerging as an alternative to photonic crystals for controlling light,[@Mukaiyama:1999; @Rakovich:2004; @Hara:2003] the MDR-force may be deployed for the manipulation of a microsphere cluster.
In this paper, we calculate the electromagnetic (EM) forces acting on microspheres when WGM’s or MDR’s are excited. The optical force acting on a microsphere can be computed via a surface integral of the Maxwell stress tensor, $\overleftrightarrow{T}$, over the sphere’s surface. The microspheres cannot respond to the high-frequency component of the time-varying optical force, so we calculate the time-averaged force $\langle \vec{F} \rangle = \oint \langle \overleftrightarrow{T} \rangle \cdot d\vec{S}$. The EM field required to evaluate $\overleftrightarrow{T}$ is computed by the multiple scattering theory,[@Lin:1; @Appl:1995] which expands the fields in vector spherical harmonics. This formalism is arguably the most accurate method available: it is in principle exact, with the numerical convergence controlled by the maximum angular momentum $(L_{max})$ used in the expansion. The calculation for the resonance of dielectric microspheres near contact requires a high $L_{max}$,[@Miyazaki:2000] which is chosen so that a further increase in $L_{max}$ does not change the value of the calculated force. In most of the calculations, the size parameter ($kR$) is between 28 and 29, and $L_{max}=63$ was used. We adopt the Generalized Minimal Residual (GMRES) iterative solver for the linear system of equations.[@Fraysse:2003] In the following, the WGM’s will be labeled as “($l$)TE($n$)” or “($l$)TM($n$)”, where $l$ and $n$ are the mode and order numbers, and TE (TM) means transverse electric (magnetic), respectively. Unless otherwise noted, a linearly polarized incident plane wave with a modest intensity of $10^{4}$ W/cm$^{2}$ is assumed throughout this paper. The spheres have radius $R = 2.5$ $\mu$m, with a dielectric constant $\varepsilon = 2.5281 + 10^{-4}i$. A loss level of $\mathrm{Im}\{\varepsilon\} = 10^{-4}$ or smaller can be easily achieved with insulators, glass, or possibly good-quality polystyrene spheres.
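The time-averaged stress tensor entering the surface integral can be sketched numerically. This is a minimal illustration, assuming SI units and the standard phasor form $\langle T_{ij}\rangle = \tfrac{1}{2}\mathrm{Re}\,[\varepsilon E_i E_j^* + \mu H_i H_j^* - \tfrac{1}{2}\delta_{ij}(\varepsilon|E|^2 + \mu|H|^2)]$; the paper does not specify its unit conventions or implementation, so the function and the plane-wave example below are our assumptions, not the authors' code.

```python
import numpy as np

EPS0, MU0 = 8.8541878128e-12, 4e-7 * np.pi  # vacuum permittivity/permeability

def stress_time_avg(E, H, eps=EPS0, mu=MU0):
    """Time-averaged Maxwell stress tensor for complex phasors E, H (SI):
    <T_ij> = 1/2 Re[eps E_i E_j* + mu H_i H_j*
                    - 1/2 delta_ij (eps |E|^2 + mu |H|^2)]."""
    T = 0.5 * np.real(eps * np.outer(E, np.conj(E)) + mu * np.outer(H, np.conj(H)))
    T -= 0.25 * (eps * np.vdot(E, E).real + mu * np.vdot(H, H).real) * np.eye(3)
    return T

# Example: x-polarized plane wave traveling along z in vacuum.
E = np.array([1.0, 0.0, 0.0], dtype=complex)                  # V/m
H = np.array([0.0, np.sqrt(EPS0 / MU0), 0.0], dtype=complex)  # matched H field
T = stress_time_avg(E, H)
print(T[2, 2])  # equals -(time-averaged energy density) of the wave
```

In a full calculation, $\langle \vec{F} \rangle$ follows by contracting $\langle \overleftrightarrow{T} \rangle$ with the outward normal and integrating over a surface enclosing the sphere, with the fields from the multiple scattering expansion.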
![(a): The radiation pressure for a sphere with $\varepsilon = 2.5281$. (b)-(c): Optical forces acting on two contiguous microspheres ($\varepsilon = 2.5281 + 10^{-4}i$), with the configuration depicted in inset (d); Panel (b) is for the upper sphere and Panel (c) for the lower sphere. (d): A pair of contiguous spheres illuminated by a linearly polarized plane wave propagating along the bisphere ($z$) axis.[]{data-label="fig1"}](MDR1.eps){width="3.26in" height="2.35in"}
The well-known WGM’s of a transparent microsphere have many interesting properties and applications, mostly because of their high quality factors and the enhanced EM fields near the surface. While the fields can be enhanced by orders of magnitude when a WGM is excited, the radiation pressure is only increased by about 30% or less, as shown in Fig. \[fig1\](a). This is because the intensity distribution of a WGM is symmetrical, so that the gradient force acting on the sphere at any point is cancelled by its counterpart on the other side of the sphere. However, a much stronger enhancement of the optical force can be induced by resonances involving two spheres. When two spheres are near each other, their EM modes are coherently coupled and split into BM’s and ABM’s through quasi-normal mode splitting.[@Antonoyiannakis:1997; @Miyazaki:2000] The BM’s (ABM’s) have resonant frequencies that are lower (higher) than that of the single sphere, and have an even (odd) parity in the EM field distribution.[@Miyazaki:2000] Unlike the single-sphere resonance, where the force is not enhanced much, the MDR’s correspond to strong attractions (BM’s) or repulsions (ABM’s) between the spheres. The overall intensity distribution of the two-sphere resonance is still symmetrical, but the field pattern on each sphere is not. The strong internal fields then induce strong optical forces on the spheres. We note that BM and ABM forces have also been observed between the layers of a two-dimensional photonic structure.[@Antonoyiannakis:1997]
In Fig. \[fig1\](b)-(c) we plot the optical forces acting on a pair of spheres with the geometry shown in Fig. \[fig1\](d). The wavelengths of the incident light fall inside the range of 542 nm to 561 nm, chosen to match those of previous works on MDR.[@Miyazaki:2000; @Fuller:1991] The BM and ABM of 39TE1 and 34TM2 are marked on Fig. \[fig1\](b). When a resonance is excited, the force is tremendously enhanced compared to off-resonance. The BM’s (ABM’s) have the maximum (minimum) field intensity at the contact point of the spheres, giving rise to attractions (repulsions). The resonant linewidths of the MDR are also several orders of magnitude wider than those of a single sphere,[@Miyazaki:2000; @Fuller:1991] and they are further broadened by absorption. We remark that the small peak at *kR*=28.03 in Fig. \[fig1\](b)-(c) is the ABM of 34TE2; in addition, the interaction between 39TM1 and 35TE2 complicates the splitting, and their coupling gives rise to the MDR-force peaks at *kR*=28.527, 28.605 and 28.620.
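For readers who wish to reproduce the quoted size parameters, the conversion is simply $kR = 2\pi R/\lambda$. A minimal numeric sketch (assuming $R=2.5\ \mu$m, the sphere radius used in Fig. \[fig2\]) confirms that the 542–561 nm window corresponds to $kR\approx 28.0$–$29.0$:

```python
import numpy as np

R = 2.5e-6                                          # sphere radius in meters (assumed)
wavelengths = np.array([561e-9, 558.6e-9, 542e-9])  # vacuum wavelengths, meters

# size parameter x = kR = 2*pi*R/lambda
kR = 2 * np.pi * R / wavelengths
print(kR)  # roughly [28.00, 28.12, 28.98]
```

The middle entry, 558.6 nm, is the single-sphere 39TE1 line quoted in the caption of Fig. \[fig4\].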
![The horizontal axis is the size parameter of the bottom sphere. Solid lines: both spheres have a radius of 2.5 $\mu$m. Dotted lines: the bottom sphere has a radius of 2.5 $\mu$m and the top sphere a radius of 2.45 $\mu$m.[]{data-label="fig2"}](MDR2.eps){width="2.5in"}
One of the major challenges in studying MDR of spheres experimentally is that the resonant frequency is very sensitive to the size of the sphere, and thus extremely accurate particle sizing is required.[@Fuller:1991] This difficulty has been overcome by utilizing the narrow linewidth of the single-sphere resonance to determine the particle size.[@Mukaiyama:1999; @Rakovich:2004; @Hara:2003] Nevertheless, the MDR force is actually quite robust against size dispersion. The solid line in Fig. \[fig2\] shows the MDR force at *kR*=28.527 when the two spheres have the same diameter, to be compared with the force when the two spheres differ by 2[%]{} in diameter (dotted line). We see that the MDR force remains significant even when the two spheres do not have the same radius.
![$\varepsilon$=2.5281+10$^{-4}i$. Only the force acting on the top sphere is plotted.[]{data-label="fig3"}](MDR3.eps){width="2.5in"}
Figure \[fig3\] shows the forces acting on a pair of spheres over a wide range of size parameters. From this figure, we see that the attractive resonant force is generally stronger than the repulsive resonant force. The resonant force is most significant for spheres with size parameters between 20 and 30. The force for those with size parameters greater than 30 is damped by absorption.
![Optical force acting on a pair of spheres plotted as a function of $D$, the separation between the closest points on the spheres. The forces acting on the spheres are equal and opposite by symmetry, with a positive force representing repulsion and a negative force attraction. The positions of the spheres are $(0, 0, -D/2-R)$ and $(0, 0, D/2+R)$. The incident wave has the form $\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{E}} _{in} = \hat {x}E_o \sin (kz)$. The 39TE1 resonance of a single sphere is at $\lambda$=558.6 nm. $\vert F_{vdw}\vert$ is an upper bound on the magnitude of the van der Waals force. (a): Ideal case with no absorption, i.e. $\varepsilon$=2.5281. (b): $\varepsilon$=2.5281+10$^{-4}i$. The stable equilibrium separations (optical force equal to zero and stable against perturbation) for different incident wavelengths are marked by arrows.[]{data-label="fig4"}](MDR4.eps){width="3.18in" height="2.76in"}
The MDR frequencies actually depend on the distance between the spheres, and this property can be utilized to bind the spheres into a stable structure. As an illustrative example, we consider a pair of spheres (aligned along the $z$-axis) illuminated by an incident field of the form $\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}}\over
{E}} _{in} = \hat {x}E_o \sin (kz)$, which is composed of a pair of counter-propagating waves traveling along the bisphere axis. At a particular frequency of the incident wave, slightly higher than the resonant frequency of a WGM, the ABM is excited at a particular separation between the spheres, leading to strong repulsion. At separations larger than this, however, both the radiation pressure and the van der Waals force push the spheres together. This competition between the ABM resonant repulsion and the other, attractive forces leads to a stable equilibrium. Figure \[fig4\] shows the force as a function of $D$, the separation between the closest points on the spheres. The dielectric constant is taken to be 2.5281+10$^{-4}i$ in Fig. \[fig4\](b), and the ideal-case results with no absorption ($\varepsilon$=2.5281) are shown in Fig. \[fig4\](a) for comparison. Stable equilibrium separations, where the optical force is zero, are marked by arrows in Fig. \[fig4\](b). The spheres experience an attractive (repulsive) force if their separation is increased (decreased) from the equilibrium distance. Binding can also be achieved by using two lasers, one tuned to a BM and the other tuned to an ABM, such that there is an equilibrium separation “sandwiched” between the resonant force peaks. The interaction between the two laser beams can be neglected because of the lack of coherence.
We also compare the MDR-force with other relevant interactions. The energy associated with the repulsive barriers created by the ABM’s is on the order of tens of $k_{b}T$ (the thermal energy at room temperature) at an incident intensity of 10$^{4}$ W/cm$^{2}$. For example, it takes about 80 $k_{b}T$ to push the spheres across the middle peak of Fig. \[fig4\](b) (corresponding to $\lambda$=558.2 nm). Another relevant comparison is with the strength of the van der Waals force. An upper bound on the magnitude of the van der Waals force between two dielectric spheres, $\vert F_{vdw}\vert$, can be calculated in the non-retarded approximation: $\vert F_{vdw} (D)\vert \le AR / (12D^2)$, where $A$=6.6$\times$10$^{-20}$ Joule is the Hamaker constant.[@See:1991] The magnitude of the van der Waals force is plotted in Fig. \[fig4\]. One sees that the resonant force can dominate over the van der Waals force if $D$ is more than a few tens of nanometers. Finally, the weight of a glass sphere of radius 2.5 $\mu$m (mass density 2400 kg/m$^{3}$) is about 1.5 pN.
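The magnitudes quoted above follow from elementary arithmetic; a minimal numeric sketch checking the van der Waals bound and the sphere weight (the radius $R=2.5\ \mu$m is an assumption, consistent with the spheres used throughout):

```python
import numpy as np

A = 6.6e-20    # Hamaker constant, Joule
R = 2.5e-6     # sphere radius, meters (assumed, as elsewhere in the text)
rho = 2400.0   # glass mass density, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2

def f_vdw_bound(D):
    """Non-retarded upper bound on the van der Waals force: |F| <= A*R/(12*D^2)."""
    return A * R / (12.0 * D**2)

weight = rho * (4.0 / 3.0) * np.pi * R**3 * g
print(f_vdw_bound(50e-9))  # ~5.5e-12 N at D = 50 nm
print(weight)              # ~1.5e-12 N, the quoted 1.5 pN
```

At separations of a few tens of nanometers the van der Waals bound is already comparable to the sphere's weight, while it falls off as $1/D^2$ beyond that.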
We note from Fig. \[fig4\] that the resonant separation (where the force is maximum) increases as the incident frequency is tuned closer to the resonant frequency of the WGM. This can be understood from the fact that a larger separation corresponds to a smaller splitting of the WGM. In the ideal case with no absorption (see Fig. \[fig4\](a)), the strength of the MDR-force is an increasing function of the resonant separation. This is because the quality factor, and thus the internal field of the MDR, approaches the very large values of the single-sphere WGM as the separation increases.[@Miyazaki:2000] We note that the resonant force for the ideal case approaches a nano-Newton. However, in reality the resonances are inevitably subject to absorptive losses.
We emphasize that the properties of the resonant mode are determined by the morphology. As long as the incident frequency matches the resonant frequency, the resonance will be excited irrespective of the external light profile. However, it is the projection (coupling) of the incident light onto the resonating mode that determines the strength of the resonant force. A plane wave is in fact not the most efficient way to excite the MDR, as most of the light is coupled to the non-resonating, dissipative modes. Our calculations aim to illustrate the resonant behavior and the corresponding strong optical forces. In an actual implementation, other forms of incident light (e.g. evanescent waves) can be used to realize a stronger force and thereby utilize the full potential of the resonant effect. We also note that while absorption degrades the strength of the resonance, microspheres containing gain materials can in principle enhance the resonant force, and the effect should be most interesting when the WGM starts lasing.[@Hara:2003] These would be interesting topics for further study.
Support by Hong Kong RGC through CA02/03.SC05 and HKUST6138/00P is gratefully acknowledged. Zhifang Lin is also supported by CNKBRSF and NNSF of China. C.T. Chan’s e-mail address is phchan@ust.hk.
[99]{}

J. Ng, Z.F. Lin, C.T. Chan and P. Sheng, “Photonic Clusters”, available at http://arXiv.org/abs/cond-mat/0501733.

M.M. Burns, J.-M. Fournier and J.A. Golovchenko, *Phys. Rev. Lett.* **63**, 1233 (1989).

S.A. Tatarkova, A.E. Carruthers and K. Dholakia, *Phys. Rev. Lett.* **89**, 283901 (2002).

H. Xu and M. Kall, *Phys. Rev. Lett.* **89**, 246802 (2002).

M.I. Antonoyiannakis and J.B. Pendry, *Europhys. Lett.* **40**, 613 (1997).

W. Singer, M. Frick, S. Bernet and M. Ritsch-Marte, *J. Opt. Soc. Am. B* **20**, 1568 (2003).

P.C. Chaumet and M. Nieto-Vesperinas, *Phys. Rev. B* **64**, 035422 (2001).

T. Mukaiyama, K. Takeda, H. Miyazaki, Y. Jimba and M. Kuwata-Gonokami, *Phys. Rev. Lett.* **82**, 4623 (1999).

Y.P. Rakovich, J.F. Donegan, M. Gerlach, A.L. Bradley, T.M. Connolly, J.J. Boland, N. Gaponik and A. Rogach, *Phys. Rev. A* **70**, 051801 (2004).

Y. Hara, T. Mukaiyama, K. Takeda and M. Kuwata-Gonokami, *Opt. Lett.* **28**, 2437 (2003).

Y.L. Xu, *Appl. Opt.* **34**, 4573 (1995).

H. Miyazaki and Y. Jimba, *Phys. Rev. B* **62**, 7976 (2000).

V. Fraysse, L. Giraud, S. Gratton and J. Langou, CERFACS Technical Report TR/PA/03/3 (2003).

K.A. Fuller, *Appl. Opt.* **30**, 4716 (1991).

See e.g., J. Israelachvili, *Intermolecular and Surface Forces*, 2$^{nd}$ ed. (Academic Press, London, 1991).
---
abstract: 'The primary aim of this paper is to provide a simple and concrete interpretation of Cartan geometry by pointing out that it is nothing but the mathematics of idealized waywisers. Waywisers, also called hodometers, are instruments traditionally used to measure distances. The mathematical representation of an idealized waywiser consists of a choice of symmetric space, called a [*model space*]{}, which represents the ‘wheel’ of the idealized waywiser. The geometry of a manifold is then completely characterized by a pair of variables $\{V^A(x),A^{AB}(x)\}$, each of which admits a simple interpretation: $V^A$ is the point of contact between the waywiser’s idealized wheel and the manifold whose geometry one wishes to characterize, and $A^{AB}=A_\mu^{\ AB}dx^\mu$ is a connection one-form dictating how much the idealized wheel of the waywiser has rotated when rolled along the manifold. The familiar objects from differential geometry (e.g. metric $g_{\mu\nu}$, affine connection $\Gamma^\rho_{\mu\nu}$, co-tetrad $e^I$, torsion $T^I$, spin-connection $\omega^{IJ}$, Riemannian curvature $R^{IJ}$) can be seen as merely different characterizations of the change of contact point. We then generalize this waywiser approach to relativistic spacetimes and exhibit action principles for General Relativity in terms of the waywiser variables for two choices of model [*spacetimes*]{}: De Sitter and anti-De Sitter spacetimes. In one approach we treat the contact vector $V^A$ as a non-dynamical, [*a priori*]{} postulated object, and in another $V^A$ is treated as a dynamical field subject to field equations of its own. We do so without the use of Lagrange multipliers, and the resulting equations are shown to reproduce Einstein’s General Relativity in vacuum.'
author:
- 'H.F. Westman[^1] and T.G. Zlosnik[^2]'
bibliography:
- 'references.bib'
title: 'Gravity, Cartan geometry, and idealized waywisers'
---
Introduction {#intro}
============
Riemannian geometry forms the mathematical basis of Einstein’s General Relativity. The metric representation of Riemannian geometry consists of the pair of variables $\{g_{\mu\nu},\Gamma^\rho_{\mu\nu}\}$. While the symmetric metric tensor $g_{\mu\nu}$ encodes all information about distances between points on a manifold, the affine connection $\Gamma^\rho_{\mu\nu}$ encodes the parallel transport of tangent vectors $u^\mu$ as well as defining a covariant derivative $\nabla_\mu$ acting on tensors. Within Riemannian geometry not all pairs $\{g_{\mu\nu},\Gamma^\rho_{\mu\nu}\}$ are allowed; two conditions are imposed:
- [**Metric compatibility:**]{} $\nabla_\rho g_{\mu\nu}=\partial_\rho g_{\mu\nu}-\Gamma^\sigma_{\rho\mu}g_{\sigma\nu}-\Gamma^\sigma_{\rho\nu}g_{\mu\sigma}=0$
- [**Zero torsion:**]{} $T^\rho_{\mu\nu}\equiv\Gamma^\rho_{\mu\nu}-\Gamma^\rho_{\nu\mu}=0$.
The affine connection can then be uniquely determined from the metric $$\begin{aligned}
\Gamma^\rho_{\mu\nu}=\frac{1}{2}g^{\rho\sigma}(\partial_\mu g_{\sigma\nu}+\partial_\nu g_{\mu\sigma}-\partial_\sigma g_{\mu\nu})\end{aligned}$$ and it becomes natural to view the metric as the primary variable and the affine connection as a secondary, derived quantity. Metric compatibility admits a crisp geometric interpretation: an affine straight line $X^\mu(\lambda)$ (i.e. an affine geodesic) between two points $X^\mu(\lambda_1)=x_1^\mu$ and $X^\mu(\lambda_2)=x_2^\mu$ is also the metrically shortest path (or longest in the case of timelike paths) between those points (and [*vice versa*]{}), $$\begin{aligned}
\frac{\delta}{\delta X^\rho}\int\sqrt{g_{\mu\nu}(X(\lambda))\dot{X}^\mu\dot{X}^\nu}d\lambda=0\qquad\Leftrightarrow \qquad\frac{d^2X^\rho}{d\lambda^2}+\Gamma^\rho_{\mu\nu}\dot{X}^\mu\dot{X}^\nu\propto \dot{X}^\rho\end{aligned}$$ where $\dot{X}^\mu\equiv \frac{dX^\mu}{d\lambda}$ and $\frac{\delta}{\delta X}$ denotes a variational derivative with $\delta X(\lambda_1)=\delta X(\lambda_2)=0$. However, this condition does not fix the non-symmetric part of the connection, i.e. the torsion. This is related to the fact that affine geodesics, and consequently also celestial motion, are unaffected by the presence of torsion.
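The equivalence displayed above can be verified numerically in a simple case. A minimal sketch, assuming the unit round 2-sphere with $g=\mathrm{diag}(1,\sin^2\theta)$: integrating the affine geodesic equation $\ddot X^\rho+\Gamma^\rho_{\mu\nu}\dot X^\mu\dot X^\nu=0$ from a point on the equator with purely azimuthal velocity keeps the path on the equator, i.e. on a great circle, which is also the metrically extremal path:

```python
import numpy as np

def geodesic_rhs(state):
    # state = (theta, phi, dtheta, dphi) on the unit round sphere
    th, ph, dth, dph = state
    # Christoffel symbols of g = diag(1, sin^2 theta):
    #   Gamma^theta_{phi phi} = -sin(th)cos(th),  Gamma^phi_{theta phi} = cot(th)
    ddth = np.sin(th) * np.cos(th) * dph**2
    ddph = -2.0 * (np.cos(th) / np.sin(th)) * dth * dph
    return np.array([dth, dph, ddth, ddph])

def integrate(state, h=1e-3, steps=1000):
    for _ in range(steps):  # classic fixed-step RK4
        k1 = geodesic_rhs(state)
        k2 = geodesic_rhs(state + 0.5 * h * k1)
        k3 = geodesic_rhs(state + 0.5 * h * k2)
        k4 = geodesic_rhs(state + h * k3)
        state = state + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return state

# start on the equator, moving purely in the phi direction
final = integrate(np.array([np.pi / 2, 0.0, 0.0, 1.0]))
print(final)  # theta stays pi/2, phi advances to ~1.0: a great circle
```

Any deviation of $\theta$ from $\pi/2$ would signal a departure from the great circle, i.e. from the extremal path.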
As is well-known, the existence of fermionic matter in nature has immediate implications for the mathematical representation of the gravitational field. When it comes to coupling a spinor field to the gravitational field it is known that the metric representation (as defined above) is unsuitable. The fundamental reason for this is that a spinor constitutes a finite-dimensional spin-half representation of the Lorentz group while the affine connection is $GL(4)$-valued, a group which admits no finite-dimensional spinorial representation [@cartans]. Instead, whenever fermionic matter is present the metric representation is shunned and the gravitational field is instead mathematically represented in terms of a pair of $\mathfrak{so}(1,3)$-valued one-forms: the co-tetrad $e^I=e^I_\mu dx^\mu$ and the spin connection $\omega^I_{\ J}=\omega^{\ I}_{\mu\ J}dx^\mu$ (see for example [@Trautman:2006fp]). Given the notion of the spin connection one-form, one may define a linear covariant exterior derivative of a spinor $\psi$: $$\begin{aligned}
D\psi=d\psi-\frac{i}{2}\omega_{IJ}S^{IJ}\psi.\end{aligned}$$ where $S^{IJ}=-\frac{i}{4}[\gamma^I,\gamma^J]$. On the other hand, a linear covariant derivative cannot be defined for a spinor within the metric representation. Absent a satisfactory solution of the mathematical problem that the existence of fermions poses, we must therefore discard the metric representation as a viable mathematical representation of the gravitational field.
The Einstein-Palatini-Dirac action for a minimally coupled massive Dirac field coupled to gravity, which is the starting point for a quantum theory of spin-1/2 particles in curved spacetimes, can be written $$\begin{aligned}
{\cal S}_{E-D}[e^{I},\omega^{IJ},\psi]&=&\int\mathcal{L}_P+\mathcal{L}_D=\int \kappa \epsilon_{IJKL}e^I\wedge e^J\wedge (R^{KL}-\frac{\Lambda}{6}e^K\wedge e^L)\nonumber \\&+&\epsilon_{IJKL}(e^I\wedge e^J\wedge e^K\wedge \bar{\psi}\gamma^L D\psi-me^I\wedge e^J\wedge e^K\wedge e^L\bar{\psi}\psi).\label{EinsteinDirac}\end{aligned}$$ where $R^{IJ}\equiv d\omega^{IJ}+\omega^I_{\ K}\wedge \omega^{KJ}$ is the Riemannian curvature two-form and $\Lambda$ is the cosmological constant. The remainder of the paper rests heavily on the calculus of forms. In order to increase readability among tensor-minded physicists we have included several appendices with the necessary techniques and tools of exterior calculus. For example, in Appendix \[tensorformtrans\] we recall how to translate the Palatini action ${\cal S}_P=\int \epsilon_{IJKL}e^I\wedge e^J\wedge R^{KL}$, written in terms of forms, into the usual Einstein-Hilbert action ${\cal S}_{EH}=\int d^{4}x\sqrt{-g}R$.
The simple action (\[EinsteinDirac\]) (which is [*polynomial*]{} in the basic variables) leads in general to non-Riemannian spacetime geometries. Specifically, a non-vanishing spin-density ${\cal J}_{IJ}$, defined by $\delta_\omega\int {\cal L}_D\equiv\int \omega^{IJ}\wedge {\cal J}_{IJ}$, implies non-zero torsion $T^I\equiv de^I+\omega^I_{\ J}\wedge e^J\neq0$, thus violating the zero-torsion condition of Riemannian geometry. However, we note that the Dirac spinor, contrary to the Maxwell field for example, does not represent any classical field observed in nature; rather, only its quantized version corresponds to fermionic matter. Nevertheless, the effects of torsion as predicted by a suitable phenomenological theory including spin density are expected to be too small to be measured currently [@Mao:2006bb], and so we may regard the gravitational part of (\[EinsteinDirac\]) as a theory having the same experimental support as General Relativity and thus treat it as a legitimate theory of gravity.
The method of using the pair of one-forms $\{e^I,\omega^{IJ}\}$ to represent the gravitational field is old and due to Elie Cartan [@cartane; @SharpeCartan; @Wise:2006sm]. This method has its roots in Cartan’s original conception of differential geometry based on symmetric spaces called [*model spaces*]{} and [*rolling connections*]{} [@SharpeCartan]. The first aim of this paper is to present Cartan geometry as the mathematics of [*idealized waywisers*]{}. Waywisers were traditionally used to measure distances between various places, see Fig. \[waywiser\]. The traditional waywiser device is simply a rotating wheel and a ‘clock’ recording how much the wheel has turned. In this way an approximation of the distance covered is obtained. In a more abstract sense, this device is something that can roll along a path on some surface and in doing so yield information about the geometry of the surface (in this case the distance). We will show that a generalization of this device, here denoted an *idealized waywiser*, is capable of probing not just distances but also the nature of the curvature of a surface via its rolling along paths. It shall be seen that the mathematics of this is indeed that of Cartan geometry. In order to put emphasis on the implicit underlying geometric picture in terms of idealized waywisers we shall refer to it as *Cartan waywiser geometry*.
The article is organized as follows: In Section \[waywisermath\] we develop the mathematical theory of idealized waywisers. In order to facilitate visualization and build intuition, we first restrict attention to the case of two-dimensional manifolds embedded in a three-dimensional space. It is shown that all the basic mathematical objects of Riemannian geometry are recoverable from the mathematical objects that describe the idealized waywiser, the so-called [*waywiser variables*]{}. Furthermore, it is shown that torsion and ‘metric compatibility’ admit a simple interpretation in terms of the behaviour of the idealized waywiser. The notion of waywisers and the manner in which they probe geometry is immediately generalizable to manifolds of higher dimension. In Section \[higherd\] we discuss the generalization of Cartan waywiser geometry to the physically important case of four-dimensional spacetime manifolds.[^3] In Section \[standnotation\] we clarify the relationship between the waywiser variables and the aforementioned variables $e^{I}$ and $\omega^{IJ}$. In Section \[actions\] we implement these ideas by formulating action principles for gravitation in which the ‘gravitational field’ is characterized entirely by waywiser variables. It is found that vacuum General Relativity may, by different mechanisms, be recovered for both constrained and unconstrained variation of the waywiser variables. Finally, in Section \[conclusions\] we present our conclusions and suggest areas for further exploration.
\[waywiser\]
Introducing Cartan waywiser geometry {#waywisermath}
====================================
In this section we shall develop the mathematics of idealized waywisers. In this conception of differential geometry both metric and affine connection are [*derived*]{} concepts and constructed from the more basic waywiser variables whose geometric interpretation is rather straightforward. Let us see how this works.
The mathematics of idealized waywisers {#whatarewaywisers}
--------------------------------------
Just as in the case of Riemannian geometry it is helpful, for the sake of intuition, to first invoke an embedding space. Consider then a two-dimensional surface embedded in a three-dimensional Euclidean space and some choice of coordinates $x^a$, $a=1,2$. One may imagine ‘paths’ $x^{a}(\lambda)$ on this surface. We define a waywiser as a device which one may attempt to ‘roll’ along a path $x^{a}(\lambda)$ and in doing so obtain information about the geometry of the surface. The amount of information that may be obtained will depend on the particular nature of the waywiser. The traditional waywiser depicted in Fig. \[waywiser\] is suitable for measuring physical distances along paths $x^{a}(\lambda)$ on certain surfaces but is otherwise limited by the requirement that it can only roll along a path in the direction tangent to its wheel. A more general notion of a rolling object is a sphere of radius $\ell$. For example, one may imagine a process of rolling such a sphere around a closed path ${\cal C}$. Upon returning, the sphere may differ from its original starting state by an arbitrary rotation, i.e. an $SO(3)$ transformation, which is of course a more general transformation than a traditional waywiser is capable of whilst staying in contact with the surface.
We shall be concerned with what we call [*idealized waywisers*]{} with symmetric spaces as representing the ‘wheels’. These are ‘Platonic’ creations of the mind where all irrelevant features, inherent in their material incarnations, have been stripped and abstracted away. For example, no features in the embedded surface may obstruct or hinder the rolling of the idealized waywiser, see Figure \[ghostwaywiser\].
The first feature of an idealized waywiser is that it has a contact point between itself and the two-dimensional surface being probed. Such a point of contact is itself a point on the sphere. It is then convenient to represent the contact point by a [*contact vector*]{} $V^i$ satisfying $V^iV^j\delta_{ij}=\ell^2$ where $\delta_{ij}=diag(1,1,1)$. The Latin index $i=1,2,3$ of the contact vector $V^i$ refers to the three-dimensional Euclidean space.
Picture now a sphere on top of each point of the two-dimensional surface. For each sphere we have a contact point which is represented by a vector $V^i$. We note that the contact vector depends only on how the surface is embedded in the three-dimensional Euclidean space and is therefore the same regardless of how the waywiser got there. In fact, the contact vector is always normal to the embedded surface. Thus it is appropriate to introduce a [*field*]{} of contact vectors $V^i(x)$ for all the points on the surface. The contact vector $V^i(x)$ at some point $x^a$ we visualize as having its origin at the center of the sphere at the same point $x^a$.
The second feature of the idealized waywiser is a prescription for how the sphere is rotated when rolled from one point to another along some path. Since the wheel is a sphere, the transformation group is $SO(3)$. Thus, the rolling of the waywiser corresponds to a succession of infinitesimal $SO(3)$ transformations. Mathematically these infinitesimal transformations can be specified by a connection $A_{a\ j}^{\ i}$ with values in the Lie algebra of $SO(3)$.[^4] By feeding this connection an infinitesimal displacement $dx^a$ we obtain an infinitesimal rotation $\delta \Omega^i_j=\delta^i_j-dx^aA_{a\ j}^{\ i}$ [^5]. This infinitesimal rotation characterizes mathematically the infinitesimal ‘response’ of the idealized waywiser and how the point of contact is consequently altered. As we shall now see, all of differential geometry is contained in that change of contact point!
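The ‘succession of infinitesimal rotations’ can be made concrete numerically: composing $N$ increments $\delta\Omega\approx 1+(\theta/N)J$ converges to the finite rotation $\exp(\theta J)$ as $N\to\infty$, which is exactly how the connection builds up a finite rotation of the wheel along a path. A minimal sketch (illustrative only; the sign convention for the generator is immaterial):

```python
import numpy as np

theta = 1.0  # total rotation angle, radians
Jz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 0.0]])  # so(3) generator of rotations about z

V = np.array([1.0, 0.0, 0.0])      # vector to be 'rolled'
N = 100_000
step = np.eye(3) + (theta / N) * Jz  # one infinitesimal rotation, delta Omega
for _ in range(N):
    V = step @ V

exact = np.array([np.cos(theta), np.sin(theta), 0.0])
print(np.abs(V - exact).max())  # small (~1e-5), shrinking like 1/N
```

The small residual is the second-order error of truncating $\exp$ at first order in each increment; it vanishes in the limit of infinitely many infinitesimal steps.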
We will show in the following section that the notion of ideal waywiser, realized via the fields $\{V^{i},A^{ij}\}$, is sufficient to recover the familiar tensors of Riemannian geometry. More specifically, all objects in differential geometry can be understood as ways of characterizing the [*change in the contact point*]{} when the idealized waywiser is rolled.
Constructing the metric tensor and affine connection {#contructmetricaffine}
----------------------------------------------------
Let us now determine the distance between two neighboring points $x_1^a$ and $x_2^a$ on the surface. In our mind’s eye we picture an idealized waywiser at $x_1$. Before the ball is rolled we imagine a stick of length $\ell$ attached to it, with one end at the center of the ball and the other at the contact point $V^i(x_1)$. We denote this ‘stick-vector’ $V^i_|$, which by definition coincides with the contact vector at $x_1$, i.e. $V^i_|(x_1)=V^i(x_1)$. Next we roll the ball in the direction $dx^a=x_2^a-x_1^a$ and put it to rest at $x_2^a$. Rolling the ‘stick-vector’ is mathematically understood as a succession of infinitesimal $SO(3)$ transformations $\delta \Omega^i_j=\delta^i_j-dx^aA_{a\ j}^{\ i}$ acting on $V^i_|$. Thus we have
$$\begin{aligned}
V^i_|(x_2)=\delta \Omega^i_jV^j_|(x_1)=(\delta^i_j-dx^aA_{a\ j}^{\ i})V^j_|(x_1)=V^i(x_1)-dx^aA_{a\ j}^{\ i}V^j(x_1)\end{aligned}$$
where $A_{a\ j}^{\ i}$ is the $\mathfrak{so}(3)$-valued connection one-form dictating how much the ball has rotated, which was introduced in the previous section. Next, we can compare the rolled ‘stick-vector’ $V^i_|(x_2)$ with the contact vector $V^i(x_2)$ at $x_2$ and compute the difference $\delta V^i\equiv V^i(x_2)-V^i_|(x_2)$: $$\begin{aligned}
\delta V^i\equiv V^i(x_2)-V^i_|(x_2)=V^i(x_2)-(V^i(x_1)-dx^a A_{a\ j}^{\ i}V^j(x_1))=dx^{a}\partial_aV^i+dx^a A_{a\ j}^{\ i}V^j(x_1)\equiv dx^aD_a V^i.\end{aligned}$$ where we have introduced the gauge covariant derivative $D_a V^i\equiv\partial_aV^i+A_{a\ j}^{\ i}V^j$. The difference $\delta V^i$ represents the change in contact point. We note that because the contact vector satisfies $V^2=\ell^2$, we have $\delta_{ij}V^iDV^j=0$ and the object $dx^aD_aV^i$ therefore has no normal component and belongs to the tangent space of the surface at $x_1$. We now identify the distance $ds$ between the two points $x_1$ and $x_2$ as the Euclidean norm of the difference $\delta V^i$, or equivalently $$\begin{aligned}
ds^2=\delta_{ij}\delta V^i \delta V^j=dx^adx^b\delta_{ij}D_aV^i D_bV^j\end{aligned}$$ The metric tensor $g_{ab}$, encoding all information about distances of the surface, can then be defined as $$\begin{aligned}
\label{metricdef}
g_{ab}=\delta_{ij}D_aV^i D_bV^j.\end{aligned}$$ We have now understood how distances, and in particular the metric tensor, can be recovered from the waywiser variables $\{V^i,A^{ij}\}$. In particular, we see that the metric directly corresponds to the change of contact point when the waywiser is rolled. However, the metric tensor cannot tell us how to parallel transport tangent vectors, $u^a$ say, along the surface, something which is encoded in the affine connection $\Gamma^c_{ab}$. Nevertheless, this mathematical object too can easily be constructed from the waywiser variables and corresponds loosely to the [*rate*]{} of change of the contact vector. More specifically, the object $D_aD_bV^i=\partial_a D_bV^i+A_{a\ j}^{\ i}D_bV^j$ contains components which are both normal and tangential to the embedded surface. It is easily checked that the normal component is given by the metric. It is in the tangential part that we can identify an affine connection $\Gamma^a_{bc}$. Thus we define $$\begin{aligned}
\label{affinedef}
P^i_{\ j}D_aD_b V^j\equiv\Gamma^c_{ab}D_cV^i.\end{aligned}$$ where $P^i_{\ j}\equiv\delta^i_j-\frac{1}{\ell^2}V^iV_j$ is a projector. We note that, as should be the case, both the left- and right-hand sides do not transform as tensors. We see that the affine connection can be recovered from the waywiser variables $\{V^i,A^{ij}\}$, and consequently all the information of how to parallel transport tangent vectors. In addition, we recover the covariant derivative $\nabla_a$ acting on tensors, from which we can construct the Riemann curvature tensor $R^a_{\ bcd}$. We see that all the objects of Riemannian geometry can be extracted, if needed, from the waywiser variables.
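Definition (\[metricdef\]) can be checked symbolically in the simplest possible case: if the probed surface is itself a sphere of radius $\ell$, the waywiser wheel fits the surface exactly and the trivial connection $A^{ij}=0$ suffices, so $D_a=\partial_a$. A minimal sketch under this assumption, confirming that $g_{ab}=\delta_{ij}D_aV^iD_bV^j$ returns the round metric:

```python
import sympy as sp

theta, phi, ell = sp.symbols('theta phi ell', positive=True)

# Contact vector for a surface that is itself a sphere of radius ell;
# with A = 0 the gauge covariant derivative reduces to a partial derivative.
V = sp.Matrix([ell * sp.sin(theta) * sp.cos(phi),
               ell * sp.sin(theta) * sp.sin(phi),
               ell * sp.cos(theta)])

coords = [theta, phi]
DV = [sp.diff(V, x) for x in coords]  # D_a V^i = partial_a V^i here

# g_ab = delta_ij D_a V^i D_b V^j
g = sp.Matrix(2, 2, lambda a, b: sp.simplify(DV[a].dot(DV[b])))
print(g)  # diag(ell**2, ell**2*sin(theta)**2): the round-sphere metric
```

One can also confirm the tangency property $\delta_{ij}V^iD_aV^j=0$ noted above, since $V^2=\ell^2$ is constant.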
It is quite pleasing to see that both metric and affine connection, which play two distinct mathematical roles in Riemannian geometry, can be constructed from the more primary variables $\{V^i,A^{ij}\}$ which themselves admit a crisp geometric interpretation in terms of idealized waywisers. In a sense we can say that going from Riemannian geometry to Cartan waywiser geometry is an instance of unification since the metric tensor and affine connection, whose roles are conceptually and mathematically distinct, are seen merely as two aspects of the response of idealized mathematical waywiser when rolled. Indeed, all of differential geometry is now understood merely as different ways of characterizing the change of contact point that the waywiser undergoes when rolled. Therefore, it does not seem too preposterous to say that Cartan waywiser geometry, simply being the mathematics of easily visualized waywisers, is both conceptually and mathematically simpler than Riemannian geometry.
Abstract Cartan waywiser geometries
-----------------------------------
We can now forget about the embedding space, which only served to facilitate visualization and help intuition along. The situation is no different from Riemannian geometry, where embedding spaces are invoked to facilitate visualization. The mathematical representation of an abstract Cartan waywiser geometry is simply the pair $\{V^i,A_{\ j}^{i}\}$ and no reference to an embedding space is required. From a mathematical point of view we see that we are dealing with a fiber bundle structure where the base space is the manifold, the fiber is the sphere, and the structure group is $SO(3)$. However, it is easier to work with the three-dimensional vector space $\mathbb{R}^3$ as the fiber instead of the two-dimensional sphere $S^2$. The contact point is then represented by a contact vector $V^i\in\mathbb{R}^3$ subject to the constraint $V^2=\ell^2$, and the variable $A_{\ j}^{i}$ is a gauge connection on that vector bundle. It should be clear that, although it is helpful to imagine embedding spaces, we can understand Cartan geometry abstractly in terms of this fiber bundle structure.
Metric compatibility and torsion {#metricitorsion}
--------------------------------
The space of all possible pairs $\{g_{ab},\Gamma^c_{ab}\}$ can be ‘coordinatized’ by the non-metricity tensor $Q_{cab}\equiv\nabla_cg_{ab}$ and the torsion tensor $T^c_{ab}\equiv\Gamma^c_{ab}-\Gamma^c_{ba}$ [@SchoutenRic]. Let us now consider the space of pairs $\{g_{ab},\Gamma^c_{ab}\}$ that can be generated by the waywiser variables $\{V^i,A^{ij}\}$. We turn first to metricity. Given the expressions (\[metricdef\]–\[affinedef\]) for the metric tensor and affine connection we can compute $$\begin{aligned}
\nabla_{c}g_{ab}&\equiv&\partial_c g_{ab}-\Gamma^d_{ca}g_{db}-\Gamma^d_{cb}g_{ad}=\partial_c g_{ab}-(\Gamma^d_{ca}D_dV^iD_bV^j+\Gamma^d_{cb}D_aV^iD_dV^j)\delta_{ij}\nonumber\\
&=&\partial_{c}g_{ab}-(P^i_{\ k}D_cD_aV^kD_bV_i+P^j_{\ k}D_aV_iD_cD_bV^k)\delta_{ij}=\partial_{c}g_{ab}-D_c(D_aV^iD_bV^j)\delta_{ij}\nonumber\\
&=&\partial_{c}g_{ab}-D_c(D_aV^iD_bV^j\delta_{ij})=\partial_{c}g_{ab}-\partial_{c}g_{ab}\equiv0\nonumber\end{aligned}$$ where we made use of the fact that $P^i_{\ j}D_aV^j=D_aV^i$ and that the gauge group is the orthogonal group $SO(3)$ so that $D_a\delta_{ij}=0$. Thus we see that metric compatibility $\nabla_cg_{ab}=0$ is deduced and not postulated. It is a consequence of the fact that we are dealing with rolling a sphere with symmetry group $SO(3)$ whose gauge connection satisfies $A^{ij}=-A^{ji}$.
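As a concrete illustration of how the metric arises from the waywiser variables, consider the simplest case of a two-sphere of radius $\ell$ in the trivial gauge $A^{ij}=0$, so that $D_a=\partial_a$. A short symbolic computation (sympy; the parametrization is the standard spherical one and the gauge choice is ours, purely for illustration) recovers the round metric from $g_{ab}=\delta_{ij}\partial_aV^i\partial_bV^j$:

```python
import sympy as sp

theta, phi, ell = sp.symbols('theta phi ell', positive=True)
coords = [theta, phi]

# Contact vector field in the trivial gauge A = 0, tracing the sphere
V = sp.Matrix([ell * sp.sin(theta) * sp.cos(phi),
               ell * sp.sin(theta) * sp.sin(phi),
               ell * sp.cos(theta)])

# Induced metric g_ab = delta_ij d_a V^i d_b V^j
g = sp.Matrix(2, 2, lambda a, b: (V.diff(coords[a]).T * V.diff(coords[b]))[0])
g = sp.simplify(g)
print(g)   # the round metric: diag(ell**2, ell**2*sin(theta)**2)
```

The off-diagonal components vanish identically, and the diagonal reproduces $\ell^2(d\theta^2+\sin^2\theta\, d\phi^2)$, as expected from the rolling picture.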
Let us now turn to torsion and its geometrical interpretation within our approach. Note that there is no guarantee that the affine connection as defined by is symmetric. Indeed, its antisymmetric part is given by $$\begin{aligned}
\label{torsiondef}
F^{\ \ i}_{ab\ j}V^j=P^i_{\ k}F^{\ \ k}_{ab\ j}V^j\equiv P^i_{\ j}[D_a,D_b]V^j=(\Gamma^c_{ab}-\Gamma^c_{ba})D_cV^i\equiv T^c_{ab}D_cV^i\end{aligned}$$ where $T^c_{ab}$ is the torsion tensor. We note however that all submanifolds of $\mathbb{R}^N$ have zero torsion. In fact, non-vanishing torsion signals that we are not dealing with Riemannian geometries. Nevertheless, in the Cartan waywiser geometry, torsion has a very simple geometric interpretation. The left-hand side of (\[torsiondef\]) represents mathematically how much the contact vector has changed when parallel transported around an [*infinitesimal*]{} closed loop. We see that torsion is merely a particular aspect of the $SO(3)$ curvature $F^i_{\ j}$. In fact, Riemannian curvature and torsion are aspects of the same thing: the non-integrability of the $SO(3)$ connection. This unification of torsion and Riemannian curvature is very pleasing and will suggest a natural modification of the gravitational field equations, as we shall see in Section \[holst\].
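The identity $[D_a,D_b]V^i=F^{\ \ i}_{ab\ j}V^j$ underlying this interpretation is purely algebraic and holds for any contact vector field and any $SO(3)$ connection. A symbolic check (sympy, with a deliberately arbitrary connection and vector field of our own choosing) confirms it on a two-dimensional base:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def hat(w):
    # so(3) matrix built from a 3-vector w
    return sp.Matrix([[0, -w[2], w[1]],
                      [w[2], 0, -w[0]],
                      [-w[1], w[0], 0]])

# Arbitrary (made-up) SO(3) gauge connection components A_x, A_y and field V
A = [hat([x, sp.sin(y), x*y]), hat([y**2, x + y, sp.cos(x)])]
V = sp.Matrix([sp.sin(x), sp.cos(y), x*y])

def D(a, W):
    # Gauge covariant derivative D_a W = d_a W + A_a W
    return W.diff(coords[a]) + A[a] * W

# Curvature F_xy = d_x A_y - d_y A_x + [A_x, A_y]
F = A[1].diff(x) - A[0].diff(y) + A[0]*A[1] - A[1]*A[0]

comm = D(0, D(1, V)) - D(1, D(0, V))     # [D_x, D_y] V
assert sp.simplify(comm - F * V) == sp.zeros(3, 1)
print("[D_a, D_b]V = F_ab V verified")
```

Note that the check uses no constraint on $V$; projecting the identity with $P^i_{\ j}$, as in (\[torsiondef\]), is what extracts the torsion.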
Waywisers for General Relativity {#higherd}
================================
Now that we have gained some intuition about Cartan geometry and its geometric interpretation in terms of idealized waywisers, we turn to General Relativity. To accommodate spacetime geometries and relativistic theories we must adapt the above waywiser formalism accordingly. From a mathematical point of view the obvious change is to use symmetric spacetimes, rather than spaces, as idealized waywiser ‘wheels’. In the literature the symmetric spacetimes representing idealized relativistic waywiser wheels go by the name [*model spaces*]{} or [*model spacetimes*]{}. We shall from now on use those terms interchangeably.
In this article we will focus on two choices of model spacetimes: the De Sitter and anti-De Sitter spacetimes. We could also use a flat Minkowski spacetime as model spacetime. But this choice of model spacetime requires a slightly different mathematical representation of the contact point and we will not discuss that option in this paper [@Gronwald:1995em; @Wise:2006sm].
De Sitter spacetime as model spacetime
--------------------------------------
As a first mathematical realization of the idealized ‘relativistic wheel’, i.e. model spacetime, we consider the De Sitter spacetime defined by $$\begin{aligned}
-t^2+x^2+y^2+z^2+w^2=\ell^2\end{aligned}$$ which has the symmetry group $SO(1,4)$, and a spacelike contact vector $V^A$ also satisfying $V^AV^B\eta_{AB}=\ell^2$, $\eta_{AB}=diag(-1,1,1,1,1)$, where $A=0,\dots 4$. The spacetime waywiser is then represented by the pair $\{V^A(x),A^{AB}(x)\}$, where $A^A_{\ B}=A_{\mu\ B}^{\ A}dx^\mu$ is a $\mathfrak{so}(1,4)$-valued one-form, where $\mu=0,\dots 3$. The subgroup of transformations that leave the components of the spacelike contact vector $V^A$ invariant is just the Lorentz group $SO(1,3)$.
Anti-De Sitter spacetime as model spacetime {#antiwheel}
-------------------------------------------
Our second choice for model spacetime is the anti-De Sitter spacetime defined by $$\begin{aligned}
-t^2+x^2+y^2+z^2-w^2=-\ell^2\end{aligned}$$ which has the symmetry group $SO(2,3)$. The contact vector $V^A$ is timelike, rather than spacelike, and satisfies $V^AV^B\eta_{AB}=-\ell^2$ with $\eta_{AB}=diag(-1,1,1,1,-1)$. The pair $\{V^A(x),A^{AB}(x)\}$ denotes the spacetime waywiser variables where $A^A_{\ B}$ is a a $\mathfrak{so}(2,3)$-valued one-form. The subgroup which leaves the components of this timelike contact vector $V^A$ invariant is again the Lorentz group $SO(1,3)$.
-t^2+x^2+y^2+z^2-w^2=-\ell^2\end{aligned}$$ which has the symmetry group $SO(2,3)$. The contact vector $V^A$ is timelike, rather than spacelike, and satisfies $V^AV^B\eta_{AB}=-\ell^2$ with $\eta_{AB}=diag(-1,1,1,1,-1)$. The pair $\{V^A(x),A^{AB}(x)\}$ denotes the spacetime waywiser variables where $A^A_{\ B}$ is a $\mathfrak{so}(2,3)$-valued one-form. The subgroup which leaves the components of this timelike contact vector $V^A$ invariant is again the Lorentz group $SO(1,3)$.
Relation to standard notation {#standnotation}
=============================
In the following we will consider both model spacetimes simultaneously and so shall not make a notational distinction between the two $\eta_{AB}$’s, corresponding to De Sitter and anti-De Sitter model spacetimes. As for the signs, e.g. $V^2=\mp\ell^2$, that will appear from now on, we understand the upper sign as referring to the anti-De Sitter model spacetime and the lower to the De Sitter one.
\label{standardrel1}
The formalism and choice of mathematical variables in this article serves to highlight the idea that Cartan geometry is simply the mathematics of idealized waywisers. The Cartan waywiser formalism has inbuilt $SO(p,q)$ symmetry (with $(p,q)=(1,4)$ or $(2,3)$) and we can make use of that gauge redundancy to fix the contact vector to be everywhere equal to $V^A(x)\overset{*}{=}\ell \delta^A_4$. For such a gauge choice we make contact with the more standard variables used in Cartan geometry. We can identify the co-tetrad $e^I$ and spin connection $\omega^{IJ}$ in the following way: $$\begin{aligned}
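The algebraic properties of $h^A_{\ B}$ that make this identification work, idempotency and annihilation of the normal direction, can be verified numerically in the gauge $V^A=\ell\delta^A_4$ (the value of $\ell$ below is an arbitrary choice):

```python
import numpy as np

eta = np.diag([-1.0, 1, 1, 1, 1])        # De Sitter case: V is spacelike
ell = 1.5
V = np.array([0.0, 0, 0, 0, ell])        # gauge-fixed contact vector
V_low = eta @ V                          # V_A = eta_AB V^B
V2 = V @ V_low                           # = +ell**2 here

# Projector h^A_B = delta^A_B - V^A V_B / V^2
h = np.eye(5) - np.outer(V, V_low) / V2

assert np.allclose(h @ h, h)             # idempotent: a genuine projector
assert np.allclose(h @ V, 0)             # annihilates the normal direction
print(np.diag(h))                        # -> [1. 1. 1. 1. 0.]
```

The four surviving directions are exactly the ‘Lorentz block’ onto which $\omega^{AB}$ is projected.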
The $SO(p,q)$ curvature two-form $F^{AB}$ can be split into a projected part $h^A_{\ C}h^B_{\ D}F^{CD}$ and a normal part $F^{AB}V_B$. The projected curvature two-form is the Riemannian curvature $R^{IJ}$ two-form but ‘corrected’ by the curvature of the model spacetime, and the normal part is simply the torsion, i.e. we have $$\begin{aligned}
e^A\equiv DV^A=dV^A+A^A_{\ B}V^B\overset{*}{=}\ell A^{A4}=(e^I,0) \qquad \omega^{AB}\equiv h^A_{\ C}h^B_{\ D}A^{CD}\overset{*}{=}\left(\begin{array}{cc}\omega^{IJ}&0\\0&0\end{array}\right).\end{aligned}$$ where $h^A_{\ B}\equiv \delta^A_B-\frac{V^AV_B}{V^2}$ is a projector. We note that while the definition of the co-tetrad $e^A$ includes a gauge covariant exterior derivative, this is not the case for the spin connection $\omega^{AB}$. This signals a significant mathematical difference between the two objects. In particular, while the spin connection $\omega^{AB}$ transforms inhomogeneously under a $SO(p,q)$ gauge transformation, the same is not true for the co-tetrad $e^A$. For this reason the co-tetrad $e^I$ cannot be thought of as a gauge connection in this context.[^6] Specifically, the co-tetrad should not be thought of as a gauge connection related to the ‘translational’ symmetry of the De Sitter or anti-De Sitter model spacetimes[^7]. Rather, the co-tetrad is best understood as quantifying the change of contact point when the idealized waywiser wheel is rolled; something which is not a gauge quantity.
h^A_{\ C}h^B_{\ D}F^{CD}&\overset{*}{=}&\left(\begin{array}{cc}dA^{IJ}+A^I_{\ C}\wedge A^{CJ}&0\\0&0\end{array}\right)=\left(\begin{array}{cc}R^{IJ}\pm\frac{1}{\ell^2}e^I\wedge e^J&0\\0&0\end{array}\right)\\ T^A&\equiv& F^A_{\ B}V^B\overset{*}{=}(\ell F^I_{\ 4},0)=(T^I,0)\end{aligned}$$ where the sign is as prescribed in Section \[antiwheel\].
From the perspective developed in Section \[waywisermath\] we see that the gauge fixing, although useful, obscures the underlying geometric picture in terms of idealized waywisers, which are mathematically represented by both contact point $V^A$ and rolling connection $A^{AB}$. If we resist the temptation to immediately gauge $V^A$ ‘out of existence’, mathematical and conceptual clarity is increased. We now proceed to see under what circumstances gravitation may be understood as a theory of Cartan waywiser geometry.
Action principles for gravity {#actions}
=============================
The Einstein-Hilbert action ${\cal S}_{EH}=\int \sqrt{-g} g^{\mu\nu}R_{\mu\nu}d^4x$ is a rather complicated action. It is manifestly non-polynomial in its basic dynamical variable $g_{\mu\nu}$ (since it involves the square root $\sqrt{-g}$ of the metric determinant $g=\det g_{\mu\nu}$) as well as the inverse metric $g^{\mu\nu}$. The action is further complicated by the fact that it contains second-order partial derivatives of the metric tensor. This makes it necessary to add, in the case of non-compact spaces, a compensating non-local boundary term in order to ensure that the Einstein-Hilbert action is indeed extremized whenever the field equations are satisfied [@Wald:1984rg].
On the other hand, the natural actions for General Relativity using the waywiser variables $\{V^A,A^{AB}\}$, are [*polynomial*]{} in the basic waywiser variables, and are, from a mathematical point of view, the simplest actions possible. This is due to the fact that the waywiser variables are all forms; $V^A$ is a zero-form and $A^{AB}$ a one-form. Since an action is by definition an integral of a four-form, the construction of the simplest actions possible in Cartan waywiser geometry is just an exercise in ‘wedging’ together the various forms we can construct from the waywiser variables.[^8] Building an action is very much like playing with Lego [@Lego]: You only have but a few basic pieces (the forms) and the only task is to find out how to fit the pieces together to create four-forms.
Before we start playing with waywiser forms, we note that there are two distinct approaches to obtain viable actions for gravity which are equivalent, at least in the vacuum case. These are:
- [**Non-dynamical:**]{} $V^A$ is regarded as a non-dynamical [*à priori*]{} postulated variable, also called an absolute object [@AndersonRelativity; @WestmanSonego2007b]. We simply pick some contact field $V^A(x)$ subject to the only constraint $\eta_{AB}V^{A}V^{B}=\pm \ell^2$. No equations of motion are given for the contact vector $V^A$, nor are any necessary. Diffeomorphism invariance is broken except in the special $SO(p,q)$ gauge in which $V^A=\ell\delta^A_4$.
- [**Dynamical:**]{} $V^A$ is regarded as a dynamical variable on par with $A^{AB}$, which has its own equations of motion and with respect to which the action should be varied. This requires a non-standard choice of action in order to ensure consistency with the standard Einstein vacuum field equations. This formulation is manifestly diffeomorphism invariant.
In the following we shall pursue both views. The following sections will make heavy use of the variational calculus of forms. For an exposition of all necessary ideas and techniques of the variational calculus of forms we point to Appendix \[variationalforms\].
A class of polynomial actions for gravity {#secgenaction}
-----------------------------------------
Let us then contemplate what kind of Lagrangian polynomial four-forms $\mathcal{L}$ may be constructed. To do that we should first list the basic building blocks we have at our disposal. These are [^9]
- the waywiser variables $\{V^A,A^{AB}\}$ from which the gauge covariant objects $F^{AB}$ and the one-form $DV^{A}$ can be constructed
- the ‘internal’ Minkowski metric $\eta_{AB}$ and Levi-Civita symbol $\epsilon_{ABCDE}$ associated with the orthogonal groups $SO(1,4)$ or $SO(2,3)$.
The most general polynomial gravitational action that can be constructed is $$\begin{aligned}
\label{genaction}
{\cal S}_{g}=\int a_{ABCD} F^{AB}\wedge F^{CD} + b_{ABCD} DV^A\wedge DV^B\wedge F^{CD}+c_{ABCD} DV^A\wedge DV^B\wedge DV^C\wedge DV^D
\label{actione}\end{aligned}$$ where $$\begin{aligned}
a_{ABCD} &=& a_{1}\epsilon_{ABCDE}V^{E}+a_{2} V_{A}V_{C}\eta_{BD} +a_{3} \eta_{AC}\eta_{BD} \\
b_{ABCD} &=& b_{1} \epsilon_{ABCDE}V^{E}+b_{2} V_{A}V_{C}\eta_{BD}+b_{3}\eta_{AC}\eta_{BD}\\
c_{ABCD} &=& c_{1}\epsilon_{ABCDE}V^{E}\end{aligned}$$ In general the quantities $a_{i},b_{i},c_{i}$ may depend on the scalar $V^{2}=V_{E}V^{E}$. We shall however restrict ourselves to the case where they are just constants. Given this assumption, we note from (\[top1\]) that the $a_{3}$ term is topological and that the $a_{2}$ and $b_{3}$ terms are topologically equivalent; therefore in this case only five of the $a_{i},b_{i},c_{i}$ independently contribute to the equations of motion, namely $a_1,a_2,b_1,b_2$, and $c_1$.
The contact vector as non-dynamical absolute object {#nondynapproach}
---------------------------------------------------
In the non-dynamical view we regard the contact vector as postulated and not subject to equations of motion. In a generic $SO(p,q)$ gauge choice the field $V^A(x)$ breaks diffeomorphism invariance since $V^A(x)$ depends explicitly on the coordinate $x^\mu$. The situation is similar to a Klein-Gordon field in flat spacetime. The action ${\cal S}_{KG}$ contains a non-dynamical and [*à priori*]{} postulated symmetric tensor $\eta_{\mu\nu}$, subject to the requirement of being flat and having signature $+2$. We do not require any equations of motion for $\eta_{\mu\nu}$ and the action ${\cal S}_{KG}$ should not be varied with respect to $\eta_{\mu\nu}$ since that would only yield nonsensical equations.
However, while the Klein-Gordon theory is not diffeomorphism invariant, the diffeomorphism invariance of the waywiser action is restored in the particular $SO(p,q)$ gauge where $V^A(x)\overset{*}{=}\ell \delta^A_4$. There is thus a curious interplay between diffeomorphism and $SO(p,q)$ gauge invariance.
### The MacDowell-Mansouri action
Let us now consider actions appropriate within the non-dynamical view. The simplest action we can write down is known as the MacDowell-Mansouri action [@MacDowell:1977jt] $$\begin{aligned}
\label{MMactioneq}
{\cal S}_{MM}=\int\mathcal{L}_{MM}=\int\kappa\epsilon_{ABCDE}V^E F^{AB}\wedge F^{CD} \label{mmaction}\end{aligned}$$ and corresponds to only having $a_1$ non-zero in the general action . The equations of motion are obtained by varying [*only*]{} with respect to the connection $A^{AB}$ and not with respect to the contact vector $V^A$ which is here treated as a non-dynamical absolute object. In Appendix \[MMaction\] the variation is done in pedagogical detail and yields: $$\begin{aligned}
\label{MMeqs}
\epsilon_{ABCDE} DV^E\wedge F^{CD}=0.\end{aligned}$$ These polynomial equations, which are written in a rather succinct form, are equivalent to Einstein’s field equations. To see this we impose the gauge choice $V^A=\ell \delta^A_4$, make use of the relations (\[standardrel1\]) and (\[standardrel2\]), and put $A=4,B=I$ and $A=I,B=J$ in equation (\[MMeqs\]), which yields respectively the two equations $$\begin{aligned}
\epsilon_{IJKL} e^L\wedge (R^{JK}\pm\frac{1}{\ell^2}e^J\wedge e^K)&=&0\label{riemMM}\\
\frac{1}{\ell}\epsilon_{IJKL} e^L\wedge T^K&=&0\label{torsionMM}\end{aligned}$$ These equations may look unfamiliar but are nothing but the Einstein field equations with cosmological constant and the torsion-free condition. In Appendix \[standardform\] we show in pedagogical detail how the equations of motion can be rewritten in tensor notation as $$\begin{aligned}
R_\mu^{\phantom{\mu}\nu}-\frac{1}{2}\delta_\mu^{\phantom{\mu}\nu} R+\frac{6}{\ell^2}\delta_\mu^{\phantom{\mu}\nu}=0 \qquad T^\rho_{\mu\nu}=0.\end{aligned}$$ Although the MacDowell-Mansouri action is the simplest possible action we can write down, it does not appear natural from a Cartan waywiser geometry point of view. As we noted in section \[metricitorsion\], torsion $T^A=F^A_{\ B}V^B$ in Cartan waywiser geometry is merely a particular aspect of the $SO(p,q)$ curvature and as such we would expect it to appear in a symmetric fashion in a gravitational action. However, the MacDowell-Mansouri action (\[MMactioneq\]) does not contain torsion since any normal component $F^A_{\ B}V^B$ of the curvature two-form is projected out by the factor $\epsilon_{ABCDE}V^E$ in the action. From a waywiser geometry point of view, there is therefore a strange asymmetry in the MacDowell-Mansouri action.
We further highlight this by fixing the $SO(p,q)$ gauge so that $V^A=\ell\delta^A_4$ in which case the MacDowell-Mansouri action can be rewritten as follows $$\begin{aligned}
\int\mathcal{L}_{MM}&=&\int\kappa\epsilon_{IJKL}\ell (R^{IJ}\pm\frac{1}{\ell^2}e^I\wedge e^J)\wedge (R^{KL}\pm\frac{1}{\ell^2}e^K\wedge e^L)\\
&=&\pm\int\kappa\epsilon_{IJKL}\ell \left(\frac{2}{\ell^2}e^I\wedge e^J\wedge R^{KL}\pm\frac{1}{\ell^4}e^I\wedge e^J\wedge e^K\wedge e^L\right).\end{aligned}$$ where the topological term $\epsilon_{IJKL} R^{IJ}\wedge R^{KL}$, known as the Pontryagin four-form, was discarded (see Appendix \[topologicalterms\]). This action is the standard Palatini action with positive or negative cosmological constant depending on the choice of model spacetime. Again we see that the MacDowell-Mansouri action contains only the $SO(1,3)$ Riemannian curvature and not torsion.
### The Holst action {#holst}
From the point of view of waywiser geometry a more natural-looking action can be obtained by adding an extra term $DV_A\wedge DV_B\wedge F^{AB}$, topologically equivalent to the $a_2$-term defined in Section \[secgenaction\], known in the literature as the [*Holst term*]{} [@Holst:1995pc]. The resulting action is the starting point of loop quantum gravity and is related to the Ashtekar formulation of gravity [@Ashtekar:1986yd; @Thiemann:2007zz].
The Holst action is $$\begin{aligned}
\label{Holstaction}
{\cal S}_{Holst}=\int\mathcal{L}_{Holst}=\int(\epsilon_{ABCDE}V^E + \beta V_AV_C \eta_{BD}) F^{AB}\wedge F^{CD}.\end{aligned}$$ In order for the units in the action to work out, the dimension of $\beta$ is inverse length. The Holst term is topologically equivalent to the squared torsion term $T^A\wedge T_A$, since their difference is the exterior derivative of a three-form known as the Nieh-Yan three-form, see Appendix \[topologicalterms\]. The Holst action therefore contains both Riemannian curvature and torsion, and we see that the Holst term has removed the asymmetry between Riemannian curvature and torsion present in the MacDowell-Mansouri action. From a Cartan waywiser geometry perspective this is more natural, since torsion and Riemannian curvature are merely two aspects of the $SO(p,q)$ curvature.
Since neither the Holst term nor the square torsion term $T^A\wedge T_A$ is topological, we cannot simply add them without also changing the equations of motion. However, even though the Holst term changes the equations of motion, the predictions are equivalent to General Relativity when the spin density three-form ${\cal J}_{IJ}$ vanishes. Let us see how that comes about. The equations of motion are, as in the MacDowell-Mansouri case, obtained by varying only with respect to the connection $A^{AB}$ and not $V^A$. This yields $$\begin{aligned}
2\epsilon_{ABCDE} DV^E\wedge F^{CD}+\beta(DV_A\eta_{BD}V_C-DV_B\eta_{AD}V_C+V_A\eta_{BD}DV_C-V_B\eta_{AD}DV_C)\wedge F^{CD}=0\end{aligned}$$ Let us now look at this set of equations in the particular gauge $V^A=\ell\delta^A_4$. If we set $A=4,B=I$ and $A=I,B=J$ we respectively obtain the two equations $$\begin{aligned}
2\epsilon_{IJKL} e^L\wedge (R^{JK}\pm\frac{1}{\ell^2}e^J\wedge e^K)\pm\beta\ell D^{(\omega)}T_I&=&0\label{riemholst}\\
\pm4\epsilon_{IJKL} e^K\wedge T^L+\beta\ell(e_I\wedge T_J-e_J\wedge T_I)&=&0\label{torsionholst}\end{aligned}$$ where $D^{(\omega)}T^I\equiv dT^I+\omega^I_{\ J}\wedge T^J=F^I_{\ J}\wedge e^J$. By taking the ‘internal dual’ of the second equation (\[torsionholst\]), using the ‘internal’ Levi-Civita symbol $\epsilon_{MN}^{\phantom{MN} IJ}$, we obtain $$\begin{aligned}
\frac{1}{2}\epsilon_{MN}^{\phantom{MN} IJ}\left(\pm4\epsilon_{IJKL} e^K\wedge T^L+\beta\ell(e_I\wedge T_J-e_J\wedge T_I)\right)&=&\pm 4(e_M\wedge T_N-e_N\wedge T_M)+\beta\ell(\epsilon_{MNKL} e^K\wedge T^L) \nonumber \\
&=& 0 \label{dualeq}\end{aligned}$$ which looks almost like the original equation (\[torsionholst\]) but with the numerical factor $\beta\ell$ appearing on the other term. This comes about because the two terms are essentially duals of each other. Solving (\[dualeq\]) yields $$\begin{aligned}
\pm4( e_I\wedge T_J-e_J\wedge T_I)=-\beta\ell\epsilon_{IJKL} e^K\wedge T^L\end{aligned}$$ which we insert in (\[torsionholst\]), which in turn yields $$\begin{aligned}
(16-\beta^2\ell^2)\epsilon_{IJKL} e^K\wedge T^L=0.\end{aligned}$$ Thus, if $\beta\ell\neq \pm4$ we obtain the equation $\epsilon_{IJKL} e^K\wedge T^L=0$, which is the same zero-torsion equation obtained from the MacDowell-Mansouri action, and we conclude that in the absence of fermionic matter torsion is again zero. After torsion has been removed from (\[riemholst\]), what remains is simply Einstein’s vacuum equations. Thus, the Holst action with $\beta\ell\neq\pm4$ reproduces Einstein’s General Relativity. The degenerate case in which $\beta=\pm \frac{4}{\ell}$ will not be considered here.
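The factor $16-\beta^2\ell^2$ traces back to the Lorentzian contraction identity $\epsilon_{MN}^{\phantom{MN} IJ}\epsilon_{IJKL}=-2(\eta_{MK}\eta_{NL}-\eta_{ML}\eta_{NK})$ used in taking the ‘internal dual’. This identity can be verified by brute force; the following numpy sketch (our own check, with the convention $\epsilon_{0123}=+1$) does exactly that:

```python
import numpy as np
from itertools import permutations

eta = np.diag([-1.0, 1, 1, 1])

# Totally antisymmetric symbol with eps[0,1,2,3] = +1
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = np.linalg.det(np.eye(4)[list(p)])   # sign of the permutation

# eps_{MN}^{  IJ} = eps_{MNPQ} eta^{PI} eta^{QJ}  (eta is its own inverse)
eps_up = np.einsum('mnpq,pi,qj->mnij', eps, eta, eta)

# Contraction used for the 'internal dual': eps_{MN}^{IJ} eps_{IJKL}
C = np.einsum('mnij,ijkl->mnkl', eps_up, eps)

# Lorentzian identity: C_{MNKL} = -2 (eta_{MK} eta_{NL} - eta_{ML} eta_{NK})
target = -2 * (np.einsum('mk,nl->mnkl', eta, eta)
               - np.einsum('ml,nk->mnkl', eta, eta))
assert np.allclose(C, target)
print("contraction identity verified")
```

Applying this contraction to (\[torsionholst\]) interchanges the roles of the two terms, which is why eliminating one of them leaves the overall factor $16-\beta^2\ell^2$.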
It should be stressed that the Holst term [*does*]{} change the way fermionic matter couples to gravity and by changing the value of $\beta$ we get different behavior of the gravitational field inside spacetime regions with non-zero spin-density. The value of $\beta$ is therefore ultimately an experimental question.
As previously stated, the Holst action is more pleasing than the MacDowell-Mansouri action from a Cartan waywiser point of view. It does not appear natural that only the projected part of the $SO(p,q)$ curvature should appear in the action. After all, torsion is merely a special part of the curvature $F^{AB}$.
The contact vector $V^A$ as dynamical field {#dynapproach}
-------------------------------------------
We now explore the second approach, wherein the contact vector $V^A$ is treated as just another dynamical field, i.e. we require that the gravitational action is also stationary with respect to small variations of $V^{A}$. By turning the contact vector into a dynamical field we increase the number of field equations by five. It is therefore a possibility that the new field equations impose unreasonable constraints, narrowing the space of solutions accordingly. For example, if we consider the MacDowell-Mansouri action and regard $V^A$ as a dynamical field we obtain, by varying the action with respect to $V^A$, the five additional field equations $\epsilon_{ABCDE}F^{AB}\wedge F^{CD}=0$. It may be checked that this implies the restriction that the Pontryagin four-form $\epsilon_{IJKL}R^{IJ}\wedge R^{KL}$ vanishes. Therefore the $V^{E}$ equations of motion merely restrict the solution space to be smaller than that of General Relativity rather than producing equations for $V^{E}$ itself.
Furthermore, in order for $V^A$ to be interpreted as representing a contact point (see Section \[waywisermath\]), and to reproduce Einstein’s gravitational theory, it must satisfy $V^2=\mp\ell^2$. Since no restrictions are imposed [*à priori*]{} on the dynamical field $V^A$, the condition $V^2=\mp\ell^2$ must somehow be a consequence of the equations of motion. Of course, this can be achieved by simply adding a Lagrange multiplier to the MacDowell-Mansouri action or Holst action [@Stelle:1979va; @Pagels:1983pq; @Pagels:1982tc; @Randono:2010ym]: $$\begin{aligned}
{\cal S}_{\lambda}[\lambda,V^{A}] = \int \lambda \left(V^{2} \pm \ell^2\right)\end{aligned}$$ where the sign is determined by the choice of model spacetime as prescribed in section \[antiwheel\]. Requiring that the action is stationary with respect to small variations of the Lagrange multiplier four-form $\lambda$ then produces the required fixed norm constraint. But this procedure is artificial since, rather than constituting genuine equations of motion for a dynamical variable, the equations of motion for $V^{E}$ simply amount to a definition of $\lambda$.
The problem of coming up with a natural action in which $V^A$ is itself a dynamical field was labeled an open problem [@Randono:2010cq] and has inspired attempts at providing such an action, see e.g. [@Randono:2010ym]. We now show that such an action can be found among the general class of polynomial actions (\[actione\]): those for which only $b_{1}$ and $c_{1}$ are nonzero. No Lagrange multiplier is necessary and the constancy and sign of $V^2$ are consequences of the dynamical equations. The result applies to the vacuum case; how to include matter fields we leave as an open problem. In addition, it would be interesting to see whether this result can be generalized to include the Holst term.
### Equations of motion
Consider then the action $$\begin{aligned}
\label{bcaction}
{\cal S}_g[V^{A},A^{AB}]=\int b_1 \epsilon_{ABCDE}V^E DV^A\wedge DV^B\wedge F^{CD}+c_1\epsilon_{ABCDE}V^E DV^A\wedge DV^B\wedge DV^C\wedge DV^D\end{aligned}$$ The equations of motion for (\[bcaction\]) follow from requiring stationarity of the action under small variations of the fields $A^{AB}$ and $V^{A}$: $$\begin{aligned}
\delta {\cal S}_{g}[V^{A},A^{AB}]&=& \int \left( \delta A^{AB} \wedge {\cal E}_{AB} + \delta V^{A} {\cal E}_{A}\right)=0\end{aligned}$$ where it has been assumed that both $\delta A^{AB}$ and $\delta V^{A}$ vanish on the boundary of integration and where we have defined $$\begin{aligned}
{\cal E}_{AF}&\equiv& 2\epsilon_{[A|BCDE}V_{|F]}V^{E}e^B\wedge\left(b_{1} F^{CD}+2c_{1}e^C\wedge e^D\right) -b_{1} \epsilon_{ABCDF}\left(e^B\wedge e^C+2V^{B}T^{C} \right)\wedge e^{D}\\
{\cal E}_E&\equiv& b_{1} \epsilon_{ABCDE}\left(3 e^A\wedge e^B+2V^{A}T^{B}\right)\wedge F^{CD} +c_{1} \epsilon_{ABCDE}\left(5e^A\wedge e^B+ 12V^{A}T^{B}\right)\wedge e^C\wedge e^D\end{aligned}$$ where we recall that $T^{B}\equiv F^{B}_{\phantom{B}C}V^{C}$ and $e^A\equiv DV^A$. The first equation ${\cal E}_{AF}=0$, obtained by varying the action with respect to $A^{AB}$, is a system of ten three-form equations, whilst the second equation ${\cal E}_E=0$, obtained by varying the action with respect to $V^{A}$, is a system of five four-form equations.
### Constancy and sign of $V^2$ deduced from equations of motion
No restriction has been placed so far on the norm of $V^{E}$, so solutions where it is constant and non-vanishing must arise from the equations of motion themselves. We will now show that this is the case. To do this we consider the equations $V^E{\cal E}_E=0$ and $e^A\wedge{\cal E}_{AF}V^F=0$ which after simplification take the form $$\begin{aligned}
V^E{\cal E}_E&=& \epsilon_{ABCDE}V^{E} e^A\wedge e^B\wedge\left(3b_{1} F^{CD}+5c_{1} e^C\wedge e^D\right)=0\label{eq1}\\
e^A\wedge{\cal E}_{AF}V^F&=&\epsilon_{ABCDE}V^Ee^A\wedge e^B\wedge\left(b_1 e^C\wedge e^D-V^2(2c_1 e^C\wedge e^D+b_1 F^{CD})\right)=0\label{eq2}\end{aligned}$$ The first equation (\[eq1\]) can now be used to eliminate the curvature two-form $F^{CD}$ from the second equation (\[eq2\]). This yields the equation $$\begin{aligned}
\left(b_1-\frac{c_1}{3}V^2\right)\epsilon_{ABCDE}V^{E}e^A\wedge e^B\wedge e^C\wedge e^D=0 \label{vev}\end{aligned}$$ and for non-degenerate co-tetrads $e^A$ we deduce that this equation is solved only if $$\begin{aligned}
V^2=\frac{3b_1}{c_1}=\mp\ell^2.\end{aligned}$$ Since $b_1$ and $c_1$ are constants we see that $V^2$ is constant. We also note that the sign of $V^2$ is determined by the relative sign of $b_1$ and $c_1$. This means that the dynamical equations also determine the choice of model spacetime. In the case where $b_1$ and $c_1$ have opposite signs we must use the anti-De Sitter model spacetime, and the De Sitter model spacetime when they have the same sign.
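The elimination leading to $V^2=3b_1/c_1$ can be mirrored at the level of scalar coefficients. In the sketch below (sympy; the symbols $F$ and $E$ are schematic stand-ins of ours for the $\epsilon$-contracted curvature and co-tetrad four-forms), eliminating the curvature from the analogue of (\[eq1\]) and substituting into the analogue of (\[eq2\]) reproduces the fixed-norm result:

```python
import sympy as sp

b1, c1, V2, F, E = sp.symbols('b1 c1 V2 F E', nonzero=True)

# Schematic scalar versions of (eq1) and (eq2): F stands for the contracted
# curvature four-form, E for the epsilon-contracted e^e^e^e four-form
eq1 = sp.Eq(3*b1*F + 5*c1*E, 0)
eq2 = sp.Eq(b1*E - V2*(2*c1*E + b1*F), 0)

Fsol = sp.solve(eq1, F)[0]               # eliminate the curvature term
residual = eq2.lhs.subs(F, Fsol)
sol = sp.solve(sp.Eq(residual, 0), V2)[0]
print(sp.simplify(sol))                  # -> 3*b1/c1
assert sp.simplify(sol - 3*b1/c1) == 0
```

The four-form $E$ cancels out of the final relation, which is why non-degeneracy of the co-tetrad is the only requirement for the argument.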
### Consistency with Einstein’s vacuum field equations
Within the dynamical approach there are five additional field equations associated with the variable $V^A$. It is therefore not clear whether this theory contains all the solutions of Einstein’s General Relativity. We shall demonstrate consistency with General Relativity in the case of vacuum and leave the inclusion of matter as an open problem.
If we impose the special gauge in which $V^{A}= \ell \delta^{A}_{4}$ and use the notation of Section \[standnotation\], the equations ${\cal E}_{IJ}=0$ become $$\begin{aligned}
0=b_{1}\ell\epsilon_{IJKL} T^{K} \wedge e^{L} \label{tors}\end{aligned}$$ which implies that the torsion tensor is zero. Let us now study the remaining equations ${\cal E}_{I}$ and see if they in any way restrict the solution space of General Relativity. After simplification we get $$\begin{aligned}
2b_1\epsilon_{IJKL}T^J\wedge\left(-R^{KL}\pm\frac{14}{\ell} e^K\wedge e^L\right)=0.\end{aligned}$$ However, since torsion must be zero by (\[tors\]), this equation does not impose any further restriction on the solution space. This demonstrates the equivalence of our action and Einstein’s General Relativity, and we conclude that in the vacuum case we can find actions in which the contact vector $V^A$ is one of the dynamical variables. The extent to which the inclusion of $a_{1}$, $a_{2}$, and $b_{2}$ may complicate the correspondence with vacuum General Relativity is an open question, as is the effect that a $V^{2}$ dependence of the coefficients in the action (\[actione\]) may have.
### Non-trivial relation between model spacetime and sign of cosmological constant
Next we consider the equation ${\cal E}_{4I}$ which takes the form $$\begin{aligned}
0=b_{1}\ell^2\epsilon_{IJKL}\left(e^{J}\wedge R^{KL} \mp\frac{4}{l^{2}} e^{J}\wedge e^{K}\wedge e^{L}\right) \label{vacein}\end{aligned}$$ These are the Einstein field equations with cosmological constant. However, while in the case of the MacDowell-Mansouri and Holst actions the anti-De Sitter/De Sitter model spacetime is associated with a negative/positive cosmological constant (see Appendix \[standardform\] for details on how the cosmological constant is related to the waywiser radius $\ell$), the relationship in our case is the [*opposite*]{}, where we have $$\begin{aligned}
\Lambda_{SO(1,4)} &=& -\frac{24}{l^{2}} \\
\Lambda_{SO(2,3)} &=& +\frac{24}{l^{2}}\end{aligned}$$ With some hindsight it is perhaps not too surprising that there is no relationship in general between the choice of model spacetime and the sign of the cosmological constant. This should already be clear from the fact that we can add a $c_1$-term, defined above, to the action.
Conclusions and outlook {#conclusions}
=======================
In this article we have sought to develop a formulation of Cartan geometry in terms of the notion of idealized waywisers, described on an $n$-dimensional manifold completely in terms of an $SO(p,q)$ connection $A^{AB}$ (where $p+q=n+1$) and a contact point represented by a ‘contact vector’ $V^{A}$. We have called these variables waywiser variables as they encode the response (i.e. the change of contact point) of an idealized waywiser when rolled along paths on the manifold. It was shown that a host of objects familiar from differential geometry, e.g. the affine connection $\Gamma^\rho_{\mu\nu}$, tetrad $e^I_{\mu}$, spin connection $\omega^{IJ}_{\mu}$, Riemannian curvature $R_{\mu\nu\rho}^{\phantom{\mu\nu\rho}\sigma}$, and torsion $T^\rho_{\mu\nu}$, may be recovered from the waywiser variables.
We stressed that General Relativity can be formulated in two distinct ways: one in which the contact vector $V^A$ is treated as a non-dynamical and [à priori]{} postulated object, and a second one in which the contact vector is viewed on a similar footing as the connection $A^{AB}$. To our knowledge the proposed dynamical method in Section \[dynapproach\] of recovering vacuum General Relativity from an $SO(2,3)$ or $SO(1,4)$ gauge theory is a new one, featuring variations of $V^{E}$ that are unconstrained by Lagrange multipliers. This should be contrasted to previous treatments of actions resembling (\[actione\]) [@Stelle:1979va; @Pagels:1983pq]. The non-vanishing value of $V^{2}$ is ensured by the equation , rather than more familiar methods such as $V^{2}\neq 0$ corresponding to a local minimum of a potential. This latter possibility was explored in [@Wilczek:1998ea] though as part of a framework which breaks diffeomorphism invariance; additionally this approach likely involves a dependence of $V^{2}$ on spacetime coordinates even in the absence of matter fields.
Diffeomorphism invariance is often taken to be the key symmetry group associated with Einstein’s General Relativity. However, as noted in Section \[nondynapproach\], within the non-dynamical approach diffeomorphism invariance is optional and is broken in generic $SO(p,q)$ gauges. This follows immediately from the fact that $V^A(x)$ is an [à priori]{} fixed function on the manifold and therefore explicitly depends on spacetime coordinates $x^\mu$. Only in the particular gauge in which $V^A=\ell\delta^A_4$ is diffeomorphism invariance restored, since $V^A$ becomes independent of the spacetime coordinates. In this regard there are two views of the non-dynamical approach that should be considered. One view is that we continue to insist that diffeomorphism invariance should be a fundamental symmetry of nature and in particular of gravitational theories. This would lead to the rejection of the non-dynamical approach in favor of the dynamical one, in which diffeomorphism invariance is manifest. Another view would be to reject the idea that diffeomorphism invariance should be regarded as a fundamental symmetry group of gravitational theories. Instead we may adopt the idea that the fundamental symmetry group of gravity is that of ‘rolling’ prescribed by a gauge connection $A^{AB}$ with values in the Lie algebra $\mathfrak{so}(p,q)$.
Of course, this would require us to understand how matter fields are altered by such a ‘rolling’ and this brings us to the task of including matter fields within Cartan waywiser geometry. In the spirit of our approach, matter actions must be constructed as integrals of spacetime four-forms constructed from the matter fields and the waywiser variables. Perhaps surprisingly, this appears to be possible at least insofar as recovery of the equations of motion of scalar, spinor, and Yang-Mills fields goes [@Pagels:1983pq; @Ikeda:2009xb; @Westman:2012nn]. In this context, the appropriate interpretation of a field $Y^{A}$ is as a *spacetime scalar field* [@Pagels:1983pq; @Westman:2012nn] (e.g. a Klein-Gordon field). Note that no concept of ‘inverse-metric’ is fundamental at the level of the action here, nor does it seem appropriate to require non-degeneracy of the metric since the field equations are valid also in the degenerate cases. Whether these actions may be combined with the action (\[actione\]) to give a realistic picture of classical gravitation remains an open question.
The geometric interpretation of gravity as Cartan waywiser geometry has hinged on the constancy of the norm $V^{2}$. However, in the presence of general matter content we may imagine that the equations of motion of the variables $V^{A}$ and $A^{AB}$ are sourced such that $V^{2}$ maintains the desired sign but experiences a variation over spacetime: $ V^{2} = \mp e^{2\phi(x^{\mu})} l^{2}$. Consequently, the metric tensor $g_{\mu\nu}$ takes the following form: $$\begin{aligned}
g_{\mu\nu} &=& \eta_{AB}D_{\mu}V^{A}D_{\nu}V^{B}\\
&=& e^{2\phi}\left(\eta_{IJ} e^{I}_{\mu} e^{J}_{\nu} \mp l^{2}\partial_{\mu}\phi \partial_{\nu}\phi\right)\end{aligned}$$\
This amounts to a *disformal* relation between the metric tensor $g_{\mu\nu}$ and the tensor $\eta_{IJ}e^{I}_{\mu}e^{J}_{\nu}$. In the present framework, matter is expected to couple to $DV^{A}$ [@Pagels:1983pq; @Ikeda:2009xb; @Westman:2012nn], and so will couple disformally to the co-tetrad $e^{I}$. The idea of disformal couplings has been an area of recent activity in cosmology [@Magueijo:2008sx; @Magueijo:2010zc; @Zumalacarregui:2010wj; @Koivisto:2008ak; @Kaloper:2003yf; @Bekenstein:2004ne; @Skordis:2005xk]; it would be interesting to see whether variation of $V^{2}$ over spacetime may have a phenomenological role.
We end by noting that the idea of a waywiser can be generalized to include larger groups. The key feature of a waywiser is that it has a point of contact and a connection that dictates how that point of contact has changed when rolled along some path on the manifold. In this respect it would be interesting to generalize Cartan waywiser geometry to the conformal group $C(1,3)$.
**Acknowledgements:** We would like to thank E. Anderson and A. Randono for discussions.
Exterior calculus
=================
Exterior calculus constitutes a powerful tool in differential geometry and this paper makes ample use of it. In order to make this paper more accessible and self-contained we provide in the following appendices a crash course in exterior calculus. The various operations, i.e. the wedge product, exterior derivative, and integration, are defined in such a way that they can be easily understood in terms of tensor operations seen in elementary textbooks on General Relativity.
Definition of forms
-------------------
In a nutshell, forms are completely antisymmetric covariant tensors. For example, a scalar $\Phi$ is a zero-form, a connection $A_\mu$ is a one-form, a curvature tensor $F_{\mu\nu}=-F_{\nu\mu}$ is a two-form. In general, we say that a completely antisymmetric covariant tensor of rank $(0,p)$ is a $p$-form. The number $p$ is called the degree of the form. If the manifold dimension is $N$ then no completely antisymmetric covariant tensor exists with more indices than $N$ and consequently no $p$-forms exist if $p>N$. In contradistinction to tensors we see that the number of types of forms is limited by the manifold dimension. Since the index structure of forms is simple and completely specified by the degree $p$ it is convenient to leave out the tensor indices. For example a $p$-form $\Omega_{\mu_1\mu_2\dots\mu_p}$ is written simply as $\Omega$.
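As a quick check of this counting, the number of independent components of a $p$-form in $N$ dimensions is the binomial coefficient $\binom{N}{p}$, which vanishes for $p>N$. A minimal Python sketch (the function name is ours, chosen for illustration):

```python
from math import comb

def num_components(N, p):
    """Independent components of a completely antisymmetric
    rank-(0, p) covariant tensor (a p-form) in N dimensions."""
    return comb(N, p)  # math.comb returns 0 when p > N

# In four dimensions the possible degrees run from 0 to 4;
# degree 5 already gives zero, i.e. no 5-forms exist:
counts = [num_components(4, p) for p in range(6)]
print(counts)  # [1, 4, 6, 4, 1, 0]
```

Note that the total $\sum_p \binom{N}{p}=2^N$ counts all forms of every degree, which is the dimension of the full exterior algebra introduced next.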
Exterior algebra
----------------
Next we define a way of multiplying forms together that preserves the antisymmetry. This product is called the [*wedge product*]{} and is denoted $\wedge$. Let $\Omega_1$ and $\Omega_2$ be two forms of degree $p$ and $q$ respectively. Then the wedge product $\Omega_1\wedge\Omega_2$ is a new form of degree $p+q$. The basic idea of the wedge product is very simple and can be understood in terms of tensor methods as follows:
1. Write the forms as covariant tensors: $\Omega_{1\mu_1\mu_2\dots\mu_p}$ and $\Omega_{2\mu_1\mu_2\dots\mu_q}$
2. Multiply them as tensors: $\Omega_{1\nu_1\dots\nu_p}\Omega_{2\nu_{p+1}\dots\nu_{p+q}}$
3. Antisymmetrize: $\frac{(p+q)!}{p!q!}\Omega_{[1\nu_1\dots\nu_p}\Omega_{2\nu_{p+1}\dots\nu_{p+q}]}$.
The last object defines the $(p+q)$-form $\Omega_1\wedge\Omega_2$ with tensor indices explicit. The following formal properties of the wedge product can easily be deduced. Let $\Omega_1$, $\Omega_2$, and $\Omega_3$ be a $p$-form, $q$-form, and $r$-form respectively, and $\alpha$ and $\beta$ real or complex numbers.
- Linearity: $(\alpha\Omega_1+\beta\Omega_2)\wedge \Omega_3=\alpha\Omega_1\wedge\Omega_3+\beta\Omega_2\wedge\Omega_3$
- Commutation law: $\Omega_1\wedge\Omega_2=(-1)^{pq}\Omega_2\wedge\Omega_1$ where $\Omega_1$ is a $p$-form and $\Omega_2$ is a $q$-form.
- Associativity: $\Omega_1\wedge(\Omega_2\wedge\Omega_3)=(\Omega_1\wedge\Omega_2)\wedge\Omega_3$
The wedge product of two forms $\Omega_1$ and $\Omega_2$, of degree $p$ and $q$ say, produces a new form $\Omega_3=\Omega_1\wedge\Omega_2$ of degree $p+q$. Thus, if $p+q>N$ then $\Omega_1\wedge\Omega_2\equiv0$. The above rules define the exterior algebra of forms.
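The three-step recipe above can be implemented directly. The sketch below (all names are ours, not from any library) represents a $p$-form by a function from an index tuple to a number and builds the wedge product by tensor multiplication followed by antisymmetrization; it also confirms the commutation law $\Omega_1\wedge\Omega_2=(-1)^{pq}\Omega_2\wedge\Omega_1$ for two one-forms:

```python
from itertools import permutations
from math import factorial

def perm_sign(perm):
    """Sign of a permutation, computed by counting inversions."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def wedge(a, p, b, q):
    """Wedge product of a p-form `a` and a q-form `b`, each given as a
    function from an index tuple to a number.  Implements the recipe:
    multiply as tensors, then antisymmetrize with weight (p+q)!/(p!q!)."""
    def comps(idx):
        total = 0.0
        for perm in permutations(range(p + q)):
            rearranged = tuple(idx[k] for k in perm)
            total += perm_sign(perm) * a(rearranged[:p]) * b(rearranged[p:])
        # summing over all (p+q)! signed permutations and dividing by p!q!
        # reproduces the weighted antisymmetrization bracket of the text
        return total / (factorial(p) * factorial(q))
    return comps

# Two one-forms in two dimensions, with made-up components:
a = lambda idx: [2.0, 5.0][idx[0]]
b = lambda idx: [3.0, 7.0][idx[0]]
ab = wedge(a, 1, b, 1)
ba = wedge(b, 1, a, 1)
print(ab((0, 1)))                 # 2*7 - 5*3 = -1.0
print(ab((0, 1)) == -ba((0, 1)))  # True: (-1)^{pq} with p = q = 1
```

The diagonal component $(\Omega_1\wedge\Omega_2)_{00}$ comes out zero automatically, as antisymmetry requires.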
Exterior differentiation
------------------------
Next we define a coordinate-independent derivative operator for forms, called the [*exterior derivative*]{}, which preserves the complete antisymmetry and generates from a $p$-form $\Omega$ a new form $d\Omega$ of degree $p+1$. The partial derivative $\partial_\mu$ will not do since: 1) it is coordinate dependent when acting on a $p$-form with $p>0$ and 2) it takes us out of the space of forms, i.e. completely antisymmetric tensors. The basic idea of the exterior derivative is simple and amounts to carrying out the following steps.
1. Write the form as a covariant tensor: $\Omega_{\mu_1\mu_2\dots\mu_p}$
2. Take the partial derivative: $\partial_{\mu_{p+1}}\Omega_{\mu_1\mu_2\dots\mu_p}$
3. Antisymmetrize: $(p+1)\partial_{[\mu_{p+1}}\Omega_{\mu_1\mu_2\dots\mu_p]}$.
The last completely antisymmetric covariant tensor defines the exterior derivative, denoted $d\Omega$. This object is coordinate independent. The following formal properties of the exterior derivative can easily be checked:
- The exterior derivative of a zero-form is the ordinary partial derivative $(d\Phi)_\mu=\partial_\mu \Phi$ where we have highlighted the tensor index of the form $d\Phi$.
- Linearity: $d(\alpha\Omega_1+\beta\Omega_2)=\alpha d\Omega_1+\beta d\Omega_2$
- Leibniz rule: $d(\Omega_1\wedge\Omega_2)=d\Omega_1\wedge\Omega_2+(-1)^{p}\Omega_1\wedge d\Omega_2$ where $\Omega_1$ is a $p$-form.
- $d^2\Omega=d(d\Omega)\equiv0$ for all $p$-forms $\Omega$ and all $p$.
The factor of $(-1)^p$ in the Leibniz rule is there to compensate for the commutation rule for forms. The last property is nothing but a restatement of the commutativity of partial derivatives. The exterior derivative of an $N$-form is automatically zero since there are no forms with degree $N+1$.
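The recipe for $d$ can likewise be checked numerically. The sketch below (helper names are ours; a crude central-difference stand-in for $\partial_\mu$, with made-up polynomial components) reproduces $d^2\Omega\equiv0$ to machine precision, precisely because mixed central differences commute just as mixed partial derivatives do:

```python
from itertools import permutations
from math import factorial

def perm_sign(perm):
    """Sign of a permutation, computed by counting inversions."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def partial(omega, mu, x, idx, h):
    """Central-difference approximation of the partial derivative of the
    component omega(x, idx) in the coordinate direction mu."""
    xp, xm = list(x), list(x)
    xp[mu] += h
    xm[mu] -= h
    return (omega(tuple(xp), idx) - omega(tuple(xm), idx)) / (2 * h)

def ext_d(omega, p, h=1e-4):
    """Exterior derivative of a p-form: differentiate, then
    antisymmetrize over all p+1 indices with weight (p+1)."""
    def comps(x, idx):
        total = 0.0
        for perm in permutations(range(p + 1)):
            r = tuple(idx[k] for k in perm)
            total += perm_sign(perm) * partial(omega, r[0], x, r[1:], h)
        return total / factorial(p)
    return comps

# A one-form in three dimensions with made-up polynomial components:
A = lambda x, idx: [x[0] * x[1] * x[2], x[0] ** 2, x[1] * x[2] ** 2][idx[0]]
dA = ext_d(A, 1)
ddA = ext_d(dA, 2)
# d^2 = 0: the antisymmetrization kills the symmetric second derivatives.
print(abs(ddA((0.3, 0.7, 1.1), (0, 1, 2))) < 1e-6)  # True
```

For a zero-form the loop reduces to the single term $\partial_{\mu}\Phi$, recovering the first property in the list above.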
Integration of forms
--------------------
Forms are precisely those elementary mathematical objects which appear under integral signs. A one-form $A$ can be integrated along a one-dimensional curve on the manifold, a two-form $F$ over a two-dimensional surface, and a $p$-form over a $p$-dimensional sub-manifold.
If a coordinate system $x^\mu$ is given, we can consider the normals to the equipotential surfaces $x^\mu=const$ for $\mu=1,2,\dots,N$. This provides us with $N$ normals for each point on the manifold. These normals are exterior derivatives (i.e. gradients/partial derivatives) of the coordinate zero-forms $x^1,x^2,\dots$ and are therefore examples of one-forms; we write them $dx^1,dx^2,\dots,dx^N$. These one-forms, collectively written as $dx^\mu$, are a set of co-vectors that span the space of one-forms. Thus we can expand a one-form in terms of its coordinate coefficients $A_\mu$ as $A=A_\mu dx^\mu$. Similarly, the objects $dx^\mu\wedge dx^\nu$ are two-forms and they span the space of two-forms. A two-form can then be expanded in terms of its coordinate coefficients $F_{\mu\nu}$ as $F=\frac{1}{2}F_{\mu\nu}dx^\mu\wedge dx^\nu$. More generally, any $p$-form $\Omega$ can be expanded in the coordinate one-form basis as follows $$\begin{aligned}
\Omega=\frac{1}{p!}\Omega_{\mu_1\dots\mu_p}dx^{\mu_1}\wedge dx^{\mu_2}\wedge\dots dx^{\mu_p}.\end{aligned}$$ If $p=N$ the object $dx^{\mu_1}\wedge dx^{\mu_2}\wedge\dots dx^{\mu_p}$ is proportional to the Levi-Civita symbol, i.e. $$\begin{aligned}
dx^{\mu_1}\wedge dx^{\mu_2}\wedge\dots dx^{\mu_p}=\epsilon^{\mu_1\dots\mu_N}dx^1\wedge dx^2\dots \wedge dx^N.\end{aligned}$$ We can now make connection to the standard way of performing a multi-variable integration $\int \phi d^Nx$ by making the identification $d^Nx=\pm dx^1\wedge dx^2\dots \wedge dx^N$ where the sign is determined by the choice of orientation of the manifold. Thus, the integration over an $N$-form $\Omega$ over some region $V$ on the manifold is understood as $$\begin{aligned}
\int_V \Omega&=&\int_V \frac{1}{N!}\Omega_{\mu_1\dots\mu_N}dx^{\mu_1}\wedge dx^{\mu_2}\wedge\dots dx^{\mu_N}=\int_V \frac{1}{N!}\Omega_{\mu_1\dots\mu_N}\epsilon^{\mu_1\dots\mu_N}dx^1\wedge dx^2\dots \wedge dx^N\nonumber\\
&=&\pm\int_V \frac{1}{N!}\Omega_{\mu_1\dots\mu_N}\epsilon^{\mu_1\dots\mu_N}d^Nx\nonumber\end{aligned}$$ where the scalar density $\frac{1}{N!}\Omega_{\mu_1\dots\mu_N}\epsilon^{\mu_1\dots\mu_N}$ is now integrated using the methods of standard textbook multivariable calculus.
Next consider the integration of a form $\Omega$ of degree $p<N$ over some submanifold $S$. First we need to provide a parametrization of the surface, i.e. $x^\mu(\lambda^1,\lambda^2,\dots,\lambda^p)$ where $(\lambda^1,\dots,\lambda^p)\in \mathbb{R}^p$. This defines the forms $dx^\mu=\frac{\partial x^\mu}{\partial \lambda^i}d\lambda^i$, $i=1,\dots,p$, on $\mathbb{R}^p$. The integration of the form $\Omega$ over the surface $S$ is now understood as $$\begin{aligned}
\int_S \Omega&=&\int_S \frac{1}{p!}\Omega_{\mu_1\dots\mu_p}dx^{\mu_1}\wedge dx^{\mu_2}\wedge\dots dx^{\mu_p}=\int_S\frac{1}{p!}\Omega_{\mu_1\dots\mu_p} \frac{\partial x^{\mu_1}}{\partial \lambda^{i_1}}\frac{\partial x^{\mu_2}}{\partial \lambda^{i_2}}\dots\frac{\partial x^{\mu_p}}{\partial \lambda^{i_p}}d\lambda^{i_1}\wedge d\lambda^{i_2}\wedge\dots d\lambda^{i_p}\nonumber\\
&=&\int_S\frac{1}{p!}\Omega_{\mu_1\dots\mu_p} \frac{\partial x^{\mu_1}}{\partial \lambda^{i_1}}\frac{\partial x^{\mu_2}}{\partial \lambda^{i_2}}\dots\frac{\partial x^{\mu_p}}{\partial \lambda^{i_p}}\epsilon^{i_1i_2\dots i_p}d\lambda^1\wedge d\lambda^2\wedge\dots d\lambda^p \nonumber\\
&=&\pm\int_S\frac{1}{p!}\Omega_{\mu_1\dots\mu_p} \frac{\partial x^{\mu_1}}{\partial \lambda^{i_1}}\frac{\partial x^{\mu_2}}{\partial \lambda^{i_2}}\dots\frac{\partial x^{\mu_p}}{\partial \lambda^{i_p}}\epsilon^{i_1i_2\dots i_p}d^p\lambda\nonumber\end{aligned}$$ where the last sign again is determined by the choice of orientation of the surface $S$. The scalar density $\frac{1}{p!}\Omega_{\mu_1\dots\mu_p} \frac{\partial x^{\mu_1}}{\partial \lambda^{i_1}}\frac{\partial x^{\mu_2}}{\partial \lambda^{i_2}}\dots\frac{\partial x^{\mu_p}}{\partial \lambda^{i_p}}\epsilon^{i_1i_2\dots i_p}$ is defined on some open set of $\mathbb{R}^p$ and is integrated using the methods of standard textbook multivariable calculus.
Lastly we consider the theorems that go by the names Green’s, Stokes’, and Gauss’ theorem. They are all special cases of one beautiful theorem in exterior calculus stating that $$\begin{aligned}
\int_S d\Omega=\int_{\partial S} \Omega\end{aligned}$$ where $\Omega$ is some $p$-form, $S$ some $p$-dimensional submanifold, and $\partial S$ the $p-1$-dimensional boundary of $S$.
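For the $p=1$, $N=2$ case this master theorem reduces to Green’s theorem, which is easy to verify numerically. The sketch below (a made-up one-form with components $A_x=xy$, $A_y=x^2$ on the unit square; all names ours) compares the surface integral of $dA$ with the line integral of $A$ around the boundary:

```python
def Ax(x, y): return x * y
def Ay(x, y): return x * x

n, h = 400, 1.0 / 400

# Surface side: integrate the two-form dA, whose single independent
# component is (dA)_{xy} = d_x A_y - d_y A_x = 2x - x, over the square
# via a midpoint Riemann sum.
curl = lambda x, y: 2 * x - x
lhs = sum(curl((i + 0.5) * h, (j + 0.5) * h)
          for i in range(n) for j in range(n)) * h * h

# Boundary side: the line integral of A counterclockwise around the square.
ts = [(i + 0.5) * h for i in range(n)]
rhs = (sum(Ax(t, 0.0) for t in ts) * h     # bottom edge, left to right
       + sum(Ay(1.0, t) for t in ts) * h   # right edge, upwards
       - sum(Ax(t, 1.0) for t in ts) * h   # top edge, right to left
       - sum(Ay(0.0, t) for t in ts) * h)  # left edge, downwards

print(abs(lhs - rhs) < 1e-4)  # True: both sides approximate 1/2
```

The agreement is exact up to quadrature error, as the theorem demands for any choice of $A$.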
Gauge connections, curvature, and Bianchi identities
----------------------------------------------------
We provide here a brief exposition of the basic techniques and ideas of gauge connections in the language of forms. Although the formulas of this section are valid for any gauge group, we will mostly use the waywiser variables to illustrate the ideas.
The contact vector $V^A$ appears with a gauge index $A$ and transforms under a spacetime-dependent gauge transformation as $V^A\rightarrow \theta(x)^A_{\ B}V^B$. Objects with the gauge index downstairs, e.g. $U_A$, transform as $U_A\rightarrow U_B(\theta^{-1})^B_{\ A}$ so that $U_AV^A$ is invariant under arbitrary gauge transformations. This fixes the transformation law of mixed objects $W^A_{\ B}$ as $W^A_{\ B}\rightarrow \theta^A_{\ C}W^C_{\ D}(\theta^{-1})^D_{\ B}$.
The exterior derivative $dV^A$ transforms inhomogeneously under a spacetime-dependent gauge transformation $V^A\rightarrow \theta(x)^A_{\ B}V^B$, since $d(\theta^A_{\ B}V^B)\neq \theta^A_{\ B}dV^B$. It is therefore not a gauge-covariant object. In order to restore gauge-covariance the exterior derivative is replaced by the gauge covariant exterior derivative $d\rightarrow D^{(A)}$: $$\begin{aligned}
D^{(A)}V^A\equiv dV^A+A^A_{\ B}V^B \qquad D^{(A)}U_A\equiv dU_A-A^B_{\ A}U_B \end{aligned}$$ with the minus sign on the right equation guaranteeing that $D(U_AV^A)=d(U_AV^A)$. The requirement of gauge-covariance, i.e. $D^{(A')}(\theta^A_{\ B}V^B)=\theta^A_{\ B}D^{(A)}V^B$, implies immediately that the connection $A^A_{\ B}$ transforms inhomogeneously under local gauge transformation: $$\begin{aligned}
A^A_{\ B}\rightarrow A^{\prime A}_{\ B}=-d\theta^A_{\ C}(\theta^{-1})^C_{\ B}+\theta^A_{\ C}A^C_{\ D}(\theta^{-1})^D_{\ B}.\end{aligned}$$ We will often write $D$ for the gauge-covariant exterior derivative instead of the more cumbersome notation $D^{(A)}$ wherever no confusion can arise. The gauge covariant exterior derivative of some $p$-form, $\Omega^A_{\ B}$ say, is given by $$\begin{aligned}
D\Omega^A_{\ B}=d\Omega^A_{\ B}+A^A_{\ C}\wedge \Omega^C_{\ B}-A^C_{\ B}\wedge \Omega^A_{\ C}\end{aligned}$$ and as is easily checked the curvature two-form $F^A_{\ B}$ defined by[^10] $$\begin{aligned}
\label{standef}
F^A_{\ B}\equiv DA^A_{\ B}\equiv dA^A_{\ B}+A^A_{\ C}\wedge A^C_{\ B}\end{aligned}$$ transforms as $F^A_{\ B}\rightarrow \theta^A_{\ C}F^C_{\ D}(\theta^{-1})^D_{\ B}$ and is therefore gauge covariant.
The identity $DF^A_{\ B}\equiv 0$ is extremely useful and is called the [*first Bianchi identity*]{}. It follows immediately from the definition of the gauge-covariant exterior derivative and the rules of exterior calculus: $$\begin{aligned}
DF^A_{\ B}&\equiv& D^2A^A_{\ B}\equiv dF^A_{\ B}+A^A_{\ C}\wedge F^C_{\ B}-A^C_{\ B}\wedge F^A_{\ C}\nonumber\\
&=&d(dA^A_{\ B}+A^A_{\ C}\wedge A^C_{\ B})+A^A_{\ C}\wedge (dA^C_{\ B}+A^C_{\ D}\wedge A^D_{\ B})-A^C_{\ B}\wedge (dA^A_{\ C}+A^A_{\ D}\wedge A^D_{\ C})\nonumber\\
&=&dA^A_{\ C}\wedge A^C_{\ B}-A^A_{\ C}\wedge dA^C_{\ B}+A^A_{\ C}\wedge dA^C_{\ B}+A^A_{\ C}\wedge A^C_{\ D}\wedge A^D_{\ B}-A^C_{\ B}\wedge dA^A_{\ C}-A^C_{\ B}\wedge A^A_{\ D}\wedge A^D_{\ C}\nonumber\\
&=&dA^A_{\ C}\wedge A^C_{\ B}+A^A_{\ C}\wedge A^C_{\ D}\wedge A^D_{\ B}-dA^A_{\ C}\wedge A^C_{\ B} -A^A_{\ D}\wedge A^D_{\ C}\wedge A^C_{\ B}\equiv0\nonumber\end{aligned}$$ By taking the gauge-covariant derivative of the torsion tensor defined by $T^A\equiv F^A_{\ B}V^B$ and making use of the Leibniz rule and the first Bianchi identity $DF^A_{\ B}\equiv0$ we obtain the [*second Bianchi identity*]{} $$\begin{aligned}
DT^A\equiv D(F^A_{\ B}V^B)=F^A_{\ B}\wedge DV^B\end{aligned}$$
The Palatini action in the language of forms {#tensorformtrans}
============================================
Consider the Palatini action $$\begin{aligned}
{\cal S}_P=\int \epsilon_{IJKL}e^I\wedge e^J\wedge R^{KL} \end{aligned}$$ where $e^I=e^I_\mu dx^\mu$ is the co-tetrad one form and $R^{KL}=\frac{1}{2}R_{\mu\nu}^{\ \ KL}dx^\mu\wedge dx^\nu$ is the Riemann curvature two-form defined by $R^{IJ}=d\omega^{IJ}+\omega^I_{\ K}\wedge \omega^{KJ}$ with $\omega^{IJ}$ a $\mathfrak{so}(1,3)$-valued spin connection one-form. In order to see that the action ${\cal S}_P$ is nothing but (twice) the Einstein-Hilbert action written in the variables $e^I$ and $\omega^{IJ}$ we do the following rewriting $$\begin{aligned}
{\cal S}_P&=&\int \epsilon_{IJKL}e^I\wedge e^J\wedge R^{KL}=\int \frac{1}{2}\epsilon_{IJKL}e_\mu^I e_\nu^J R_{\rho\sigma}^{\ \ KL}dx^\mu\wedge dx^\nu\wedge dx^\rho\wedge dx^\sigma\nonumber\\
&=&\pm\int\frac{1}{2}\epsilon_{IJKL}e_\mu^I e_\nu^J R_{\rho\sigma}^{\ \ KL}\epsilon^{\mu\nu\rho\sigma}d^4x=\pm\int \frac{1}{2}\epsilon_{IJMN}e_\mu^I e_\nu^J e^M_\kappa e^N_\tau e^\kappa_Ke^\tau_L R_{\rho\sigma}^{\ \ KL}\epsilon^{\mu\nu\rho\sigma}d^4x\nonumber\\
&=&\pm\int\frac{1}{2}\epsilon_{IJKL}e_\mu^I e_\nu^J R_{\rho\sigma}^{\ \ KL}\epsilon^{\mu\nu\rho\sigma}d^4x=\pm\int \frac{1}{2}e\epsilon_{\mu\nu\kappa\tau}\epsilon^{\mu\nu\rho\sigma}e^\kappa_Ke^\tau_L R_{\rho\sigma}^{\ \ KL}d^4x\nonumber\\
&=&\pm\int \frac{1}{2}e2(\delta^\rho_\tau\delta^\sigma_\kappa-\delta^\rho_\kappa\delta^\sigma_\tau)e^\kappa_Ke^\tau_L R_{\rho\sigma}^{\ \ KL}d^4x=\pm\int 2e e^\mu_Ie^\nu_J R_{\mu\nu}^{\ \ IJ}d^4x=\pm\int 2\sqrt{-g} Rd^4x\nonumber\end{aligned}$$ where we made use of the identities $$\begin{aligned}
\sqrt{-g}=e\qquad R=e^\mu_Ie^\nu_J R_{\mu\nu}^{\ \ IJ}\qquad\epsilon_{\mu\nu\kappa\tau}\epsilon^{\mu\nu\rho\sigma}=2(\delta^\rho_\kappa\delta^\sigma_\tau-\delta^\rho_\tau\delta^\sigma_\kappa)\qquad e^\mu_I e_\mu^J=\delta^J_I\qquad e\epsilon_{\mu\nu\rho\sigma}=\epsilon_{IJKL}e^I_\mu e^J_\nu e^K_\rho e^L_\sigma\nonumber\end{aligned}$$ with $e$ the co-tetrad determinant and $e^\mu_I$ its inverse.
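The Levi-Civita contraction identity $\epsilon_{\mu\nu\kappa\tau}\epsilon^{\mu\nu\rho\sigma}=2(\delta^\rho_\kappa\delta^\sigma_\tau-\delta^\rho_\tau\delta^\sigma_\kappa)$ used above is a pure symbol identity (the $\epsilon$'s here are the numerically invariant densities) and can be verified by brute force. A short Python sketch (helper names ours):

```python
from itertools import product

def eps(idx):
    """Totally antisymmetric Levi-Civita symbol in four dimensions,
    with sign given by the parity (inversion count) of the index tuple."""
    if len(set(idx)) < 4:
        return 0
    inv = sum(1 for i in range(4) for j in range(i + 1, 4)
              if idx[i] > idx[j])
    return -1 if inv % 2 else 1

delta = lambda a, b: 1 if a == b else 0

# Check eps_{mn kappa tau} eps^{mn rho sigma}
#   = 2 (delta^rho_kappa delta^sigma_tau - delta^rho_tau delta^sigma_kappa)
# for every choice of the four free indices:
ok = all(
    sum(eps((m, n, k, t)) * eps((m, n, r, s))
        for m in range(4) for n in range(4))
    == 2 * (delta(r, k) * delta(s, t) - delta(r, t) * delta(s, k))
    for k, t, r, s in product(range(4), repeat=4)
)
print(ok)  # True
```

The same brute-force strategy checks the other $\epsilon$-identities in the list, e.g. $e\epsilon_{\mu\nu\rho\sigma}=\epsilon_{IJKL}e^I_\mu e^J_\nu e^K_\rho e^L_\sigma$ for any sample co-tetrad.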
The variational calculus of differential forms {#variationalforms}
==============================================
A spacetime action ${\cal S}$ is by definition an integral ${\cal S}=\int\mathcal{L}$ of some four-form ${\cal L}$ over some spacetime region $V$. Since all the basic variables in Cartan waywiser geometry are themselves differential forms, and the equations of motion are obtained by requiring the action to be extremized, we provide, for completeness and accessibility, an exposition of the variational calculus of differential forms and related helpful tricks which simplify calculations immensely. For the sake of simplicity, our Lagrangian four-forms $\mathcal{L}$ will be assumed to be polynomial in the basic forms.
The variation of a p-form $\Omega$ is as usual defined as $\Omega\rightarrow \Omega+\delta\Omega$. The variation symbol $\delta$ commutes with the exterior derivative $\delta d\Omega=d\delta\Omega$ which follows immediately from the linear property of the exterior derivative: $\delta d\Omega\equiv d(\Omega+\delta\Omega)-d\Omega =d\Omega+d\delta\Omega-d\Omega=d\delta\Omega$.
Let us now consider some action ${\cal S}=\int_V \mathcal{L}$ where $\mathcal{L}$ is a four-form that for concreteness depends on some form $\Omega$ and its first exterior derivative $d\Omega$, i.e. $\mathcal{L}=\mathcal{L}(\Omega,d\Omega)$. In order to obtain the equations of motion for $\Omega$ we wish to vary the action with respect to the differential form $\Omega$. The variation $\delta_\Omega {\cal S}$ is defined by $$\begin{aligned}
\delta_\Omega {\cal S}=\int_V \delta_\Omega \mathcal{L}(\Omega,d\Omega)\equiv \int_V \mathcal{L}(\Omega+\delta\Omega,d\Omega+d\delta\Omega)-\mathcal{L}(\Omega,d\Omega)=\int_V\mathcal{L}(\delta\Omega,d\Omega)+\mathcal{L}(\Omega,d\delta\Omega)\end{aligned}$$ where in the last step we kept only terms of first order in $\delta\Omega$. In order to extract the equations of motion we integrate by parts as usual, to which we now turn.
Integration by parts
--------------------
After a variation of a Lagrangian four-form $\mathcal{L}$ with respect to a form $\Omega$ we might end up with terms like $d(\delta_\Omega\omega)$ where $\omega$ is some three-form. If we now assume that the variation of $\Omega$ is zero at the boundary $\partial V$, i.e. $\delta\Omega|_{\partial V}=0$, we also have that $\delta_\Omega\omega|_{\partial V}=0$. Gauss’ theorem then yields $$\begin{aligned}
\int_V \delta_\Omega d(\omega)=\int_V d(\delta_\Omega\omega)=\int_{\partial V} \delta_\Omega\omega=0\end{aligned}$$ and we conclude that terms in a Lagrangian which are exterior derivatives of three-forms, e.g. $d\omega$ above, do not alter the equations of motion. Such terms are also called topological terms.
Suppose now that we have obtained $$\begin{aligned}
\int_V \delta\Omega\wedge \Psi + d\delta\Omega\wedge\Phi\end{aligned}$$ after a variation with respect to $\Omega$. By making use of the Leibniz rule for exterior derivatives $$\begin{aligned}
d(\delta\Omega\wedge\Phi)=d\delta\Omega\wedge\Phi+(-1)^p\delta\Omega\wedge d\Phi\end{aligned}$$ we see that we can simplify the above variation using Gauss’ theorem and the fact that the variation $\delta\Omega$ vanishes at the boundary $$\begin{aligned}
\int_V \delta\Omega\wedge \Psi + d\delta\Omega\wedge\Phi&=\int_V \delta\Omega\wedge \Psi + d(\delta\Omega\wedge\Phi)-(-1)^p\delta\Omega\wedge d\Phi=\int_V \delta\Omega\wedge \Psi-(-1)^p\delta\Omega\wedge d\Phi + \underbrace{\int_{\partial V} \delta\Omega\wedge\Phi}_{=0}\nonumber\\&=\int_V \delta\Omega\wedge(\Psi-(-1)^p d\Phi)\nonumber\end{aligned}$$ If the action is supposed to be extremized its variation must be zero for all choices of $\delta\Omega$. This means that $$\begin{aligned}
\Psi-(-1)^p d\Phi=0\end{aligned}$$ which then constitute the equations of motion.
Methods using the gauge covariant exterior derivative
-----------------------------------------------------
We can now extend the above discussion to include gauge covariant exterior derivatives $D$. Strictly speaking there is no need to do this but it simplifies calculations immensely and keeps the expressions manifestly gauge covariant throughout the calculation.
For concreteness we use the waywiser forms and their gauge-covariant derivatives to illustrate the computational techniques involved. As in the case of the exterior derivative, we infer from linearity that the variation symbol $\delta$ commutes with the gauge covariant exterior derivative $D$. For example, in the case of the curvature two-form we have $$\begin{aligned}
\delta_A F^{AB}=\delta_A DA^{AB}\equiv D(A^{AB}+\delta A^{AB})-DA^{AB}=DA^{AB}+D\delta A^{AB}-DA^{AB}=D\delta A^{AB}\end{aligned}$$ Because the gauge covariant exterior derivative satisfies the Leibniz rule, e.g. $$\begin{aligned}
D(\Phi^{ABC\dots}\wedge\Psi^{DEF\dots})=D\Phi^{ABC\dots}\wedge\Psi^{DEF\dots}+(-1)^p \Phi^{ABC\dots}\wedge D\Psi^{DEF\dots}\end{aligned}$$ where $\Phi^{ABC\dots}$ is some Lie-algebra-valued p-form, and the gauge covariant exterior derivative reduces to the ordinary exterior derivative for a form with no free gauge indices, e.g. $$\begin{aligned}
D\Phi^{A}_{\ A}=d\Phi^{A}_{\ A}\end{aligned}$$ we can make use of the same tricks as above to vary a Lagrangian four-form, which by definition contains no free gauge indices. See Appendix \[MMaction\] for a concrete example.
Topological terms {#topologicalterms}
-----------------
When writing down actions it is important to be able to quickly recognize topological terms since they do not alter the equations of motion. Let $A^{AB}$ and $\omega^{IJ}$ be two connections with $D^{(A)}$ and $D^{(\omega)}$ the corresponding gauge covariant derivatives. The curvature two-forms are then given by $F^{AB}=D^{(A)}A^{AB}$ and $R^{IJ}=D^{(\omega)}\omega^{IJ}$. Two examples of topological terms (i.e. exterior derivatives of three-forms) are then $$\begin{aligned}
d(A^{AB}\wedge F_{AB})&=D^{(A)}(A^{AB}\wedge F_{AB})=F^{AB}\wedge F_{AB} \label{top1}\\
d(\epsilon_{IJKL}\omega^{IJ}\wedge R^{KL})&=D^{(\omega)}(\epsilon_{IJKL}\omega^{IJ}\wedge R^{KL})=\epsilon_{IJKL} R^{IJ}\wedge R^{KL} \label{top2}.\end{aligned}$$ Another topological term that includes the contact vector is known as the Nieh-Yan term. We can derive it from the three-form $$\begin{aligned}
T^A\wedge DV_A\equiv F^{AB}\wedge DV_AV_B.\end{aligned}$$ by taking its exterior derivative (which amounts to taking the divergence of its dual) $$\begin{aligned}
\label{nyny}
d(F^{AB}\wedge DV_AV_B)&=&D(F^{AB}\wedge DV_AV_B)=F^{AB}\wedge F_{AC}V^CV_B-F^{AB}\wedge DV_A\wedge DV_B\nonumber\\
&=&T^A\wedge T_A-F^{AB}\wedge DV_A\wedge DV_B\nonumber\end{aligned}$$ where we have used the identities $DF^{AB}\equiv0$ and $D^2V^A=F^A_{\ B}V^B$. Adding the Nieh-Yan term to the Palatini action will not change the equations of motion since it is the exterior derivative of a three-form (or equivalently the divergence of its dual vector density). However, the terms $F^{AB}\wedge DV_A\wedge DV_B$ and $T^A\wedge T_A$ are not topological when taken separately, since neither is the exterior derivative of some three-form. Nevertheless, since their difference is the topological Nieh-Yan term, we obtain the same equations of motion whether we add the first or the second.
The second term in (\[nyny\]), $F^{AB}\wedge DV_A\wedge DV_B$, is called the Holst term and, as we have just stressed, it is not topological and will therefore yield equations of motion different from those of the MacDowell-Mansouri action. However, since the Holst term differs from the term $T^A\wedge T_A$ only by the topological Nieh-Yan term, we can add $T^A\wedge T_A$ instead. We can now hope that for vanishing spin-density (which sources torsion) we reproduce General Relativity. Indeed, this is the case, as can be verified from the equations of motion.
Example: MacDowell-Mansouri action {#MMaction}
----------------------------------
As a concrete example of the calculus of variations for forms we consider the MacDowell-Mansouri action with all the essential calculational steps included. The Bianchi identity $DF^{AB}\equiv0$ simplifies the calculations enormously. As explained in section \[nondynapproach\], the normalized and spacelike contact vector $V^A$ is not a dynamical field in the MacDowell-Mansouri action and no variation with respect to it is required. Thus we only consider the variation with respect to $A^{AB}$. Here is the variation of the MacDowell-Mansouri action in pedagogical detail: $$\begin{aligned}
\delta_A {\cal S}_P&=\int_V \delta_A(\epsilon_{ABCDE}V^E F^{AB}\wedge F^{CD})=\int_V \delta_A(\epsilon_{ABCDE}V^E DA^{AB}\wedge DA^{CD})\nonumber\\
&=\int_V \epsilon_{ABCDE}V^E (D\delta A^{AB}\wedge DA^{CD}+DA^{AB}\wedge D\delta A^{CD})=2\int_V \epsilon_{ABCDE}V^E D\delta A^{AB}\wedge F^{CD}\nonumber\\
&=2\int_V D(\epsilon_{ABCDE}V^E \delta A^{AB}\wedge F^{CD})+\delta A^{AB}\wedge D(\epsilon_{ABCDE}V^E F^{CD})\nonumber\\
&=2\int_V d(\epsilon_{ABCDE}V^E \delta A^{AB}\wedge F^{CD})+\delta A^{AB}\wedge \epsilon_{ABCDE} (DV^E\wedge F^{CD}+V^E \underbrace{DF^{CD}}_{\equiv0})\nonumber\\
&=2\underbrace{\int_{\partial V} \epsilon_{ABCDE}V^E \delta A^{AB}\wedge F^{CD}}_{=0}+\int_V\delta A^{AB}\wedge \epsilon_{ABCDE} DV^E\wedge F^{CD}\nonumber\\
&=2\int_V\delta A^{AB}\wedge (\epsilon_{ABCDE} DV^E\wedge F^{CD})\nonumber\\\end{aligned}$$ from which the equations of motions, which naturally appear as a set of three-forms, are readily identified as $$\begin{aligned}
\label{MMwaywiserfieldeq}
\epsilon_{ABCDE} DV^E\wedge F^{CD}=0.\end{aligned}$$ This equation is nothing but the Palatini-Einstein field equations with a cosmological constant and the zero-torsion condition, written in a compact way which the contact vector $V^A$ makes possible.
Einstein equations in standard form {#standardform}
===================================
Although the field equations (\[MMwaywiserfieldeq\]) written in waywiser variables are simple and elegant, it is instructive to rewrite them so that they take on the standard, more complicated form which we recognize from textbooks. If we impose the gauge choice $V^A=\ell \delta^A_4$ we have $$\begin{aligned}
e^I=DV^I\qquad F^{I4}=\mp T^I\qquad F^{IJ}=R^{IJ}\pm\frac{1}{\ell^2}e^I\wedge e^J \end{aligned}$$ with the sign as prescribed in section \[antiwheel\].
Einstein field equations
------------------------
If we set $A=4$, $B=I$ we get $$\begin{aligned}
0&=&\epsilon_{4ICDE} DV^E\wedge F^{CD}=\epsilon_{IJKL} e^L\wedge (R^{JK}\pm\frac{1}{\ell^2}e^J\wedge e^K)\nonumber\\
&=&\epsilon_{IJKL} (e_\mu^L dx^\mu)\wedge\left((\frac{1}{2}R_{\nu\rho}^{\ \ JK}dx^\nu\wedge dx^\rho)\pm\frac{1}{\ell^2}(e_\nu^Jdx^\nu)\wedge(e_\rho^K dx^\rho)\right)\nonumber\\
&=&\epsilon_{IJKL} e_\mu^L (\frac{1}{2}R_{\nu\rho}^{\ \ JK}\pm\frac{1}{\ell^2}e_\nu^Je_\rho^K)dx^\mu\wedge dx^\nu\wedge dx^\rho\end{aligned}$$ From this three-form we can construct the dual vector density $\epsilon_{IJKL} e_\mu^L (\frac{1}{2}R_{\nu\rho}^{\ \ JK}\pm\frac{1}{\ell^2}e_\nu^J e_\rho^K)\epsilon^{\mu\nu\rho\sigma}$ which after some rewriting is identified as the standard Einstein field equations: $$\begin{aligned}
0&=\epsilon^{\mu\nu\rho\sigma}\epsilon_{IJKL} e_\nu^L(\frac{1}{2}R_{\rho\sigma}^{\ \ JK}\pm\frac{1}{\ell^2}e_\rho^J e_\sigma^K)=\epsilon^{\mu\nu\rho\sigma}ee^\alpha_Ie_J^\beta e_K^\gamma e_L^\delta\epsilon_{\alpha\beta\gamma\delta} e_\nu^L(\frac{1}{2}R_{\rho\sigma}^{\ \ JK}\pm\frac{1}{\ell^2}e_\rho^J e_\sigma^K)\nonumber\\
&=e\epsilon^{\mu\nu\rho\sigma}e^\alpha_Ie_J^\beta e_K^\gamma \epsilon_{\alpha\beta\gamma\delta}\delta^\delta_\nu(\frac{1}{2}R_{\rho\sigma}^{\ \ JK}\pm\frac{1}{\ell^2}e_\rho^J e_\sigma^K)=e\epsilon^{\mu\rho\sigma\nu}\epsilon_{\alpha\beta\gamma\nu}e^\alpha_Ie_J^\beta e_K^\gamma(\frac{1}{2}R_{\rho\sigma}^{\ \ JK}\pm\frac{1}{\ell^2}e_\rho^J e_\sigma^K)\nonumber\\
&=e(\delta^\mu_\alpha\delta^\rho_\beta\delta^\sigma_\gamma+\delta^\mu_\gamma\delta^\rho_\alpha\delta^\sigma_\beta+
\delta^\mu_\beta\delta^\rho_\gamma\delta^\sigma_\alpha-\delta^\mu_\beta\delta^\rho_\alpha\delta^\sigma_\gamma-
\delta^\mu_\gamma\delta^\rho_\beta\delta^\sigma_\alpha-\delta^\mu_\alpha\delta^\rho_\gamma\delta^\sigma_\beta)e^\alpha_Ie_J^\beta e_K^\gamma(\frac{1}{2}R_{\rho\sigma}^{\ \ JK}\pm\frac{1}{\ell^2}e_\rho^J e_\sigma^K)\nonumber\\
&=-2e(R_I^{\ \mu}-\frac{1}{2}e^\mu_IR)\pm\frac{e}{\ell^2}(12e^\mu_I-3e^\mu_I+3e^\mu_I)=-2e(R_I^{\ \mu}-\frac{1}{2}e^\mu_IR\mp\frac{6}{\ell^2}e^\mu_I)\nonumber\end{aligned}$$ where $R\equiv R_{\mu\nu}^{\ \ IJ}e^\mu_Ie^\nu_J$ and $R_\mu^{\ I}\equiv R_{\mu\nu}^{\ \ IJ}e^\nu_J$. We can finally rewrite the equation as $$\begin{aligned}
R_\mu^{\ \nu}-\frac{1}{2}\delta_\mu^\nu R\mp\frac{6}{\ell^2}\delta_\mu^\nu=0\end{aligned}$$ which is nothing but the Einstein field equations with a cosmological constant $\Lambda=\mp\frac{6}{\ell^2}$.
Vanishing torsion
-----------------
To demonstrate that the torsion tensor vanishes we set $A=I$ and $B=J$ which yields: $$\begin{aligned}
0=\epsilon_{IJ4KL} e^L\wedge F^{4K}=\frac{1}{\ell}\epsilon_{IJKL} e^L\wedge T^K=\frac{1}{2\ell}\epsilon_{IJKL} e_\mu^L T_{\nu\rho}^Kdx^\mu\wedge dx^\nu\wedge dx^\rho\end{aligned}$$ To see what this means in tensor language we rewrite the three-form as a dual vector density $\epsilon_{IJKL} e_\mu^L T_{\nu\rho}^K\epsilon^{\mu\nu\rho\sigma}$. The steps are similar to the rewriting of the Einstein field equations and we do not display the calculation in detail. The result is: $$\begin{aligned}
\label{torsiontensoreq}
\epsilon_{IJKL} e_\mu^L T_{\nu\rho}^K\epsilon^{\mu\nu\rho\sigma}=-e(T^\sigma_{IJ}+e^\sigma_I T_{J\mu}^\mu-e^\sigma_J T_{I\mu}^\mu)=0.\end{aligned}$$ Contracting this equation with $e_\sigma^J$ yields $T_{I\mu}^\mu=0$ which, when inserted back, yields $T^\sigma_{IJ}=0$. Thus, the equations of motion impose zero torsion, which shows that the MacDowell-Mansouri action is equivalent to the Einstein-Hilbert action.
Bibliography
============
[^1]: `hwestman@physics.usyd.edu.au`
[^2]: `tzlosnik@perimeterinstitute.ca`
[^3]: We note that Harvey Brown’s book “Physical relativity” [@BrownPhysicalRelativity] on the foundations of special relativity, from our perspective quite appropriately, depicts a traditional waywiser on its cover!
[^4]: By the term ‘$\mathfrak{so}(3)$-valued’ is meant that the connection one-form $A_{a\ j}^{\ i}$, seen as a matrix $(A_a)^i_{\ j}$, is a linear combination $(A_a)^i_{\ j}=A_a^{\alpha}(S_\alpha)^i_{\ j}$ of matrices $(S_\alpha)^i_{\ j}$ which satisfy the commutation relations $[S_\alpha,S_\beta]=2i\epsilon_{\alpha\beta}^{\phantom{\alpha\beta}\gamma}S_\gamma$ of the Lie algebra $\mathfrak{so}(3)$.
[^5]: The minus sign in front of the connection is of course pure convention.
[^6]: We contrast our approach to Poincaré gauge theory [@Hehl:1994ue] in which the co-tetrad is conceptualized as a gauge connection with respect to local translations.
[^7]: A more accurate term is [*transvections*]{} [@Randono:2010cq].
[^8]: Non-polynomial actions for General Relativity based on gauge connections can be considered [@Krasnov:2011pp; @Krasnov:2012pd] but we shall restrict attention to polynomial actions.
[^9]: The numerically invariant Levi-Civita tensor densities $\epsilon_{\mu\nu\rho\sigma}$ and $\epsilon^{\mu\nu\rho\sigma}$ are implicitly already used in the exterior calculus.
[^10]: It would perhaps seem more proper to define the curvature two-form as $F^A_{\ B}\equiv DA^A_{\ B}\equiv dA^A_{\ B}+A^A_{\ C}\wedge A^C_{\ B}-A^C_{\ B}\wedge A^A_{\ C}$ since the connection one-form $A^A_{\ B}$ has one index up and one down. But since $dA^A_{\ B}+A^A_{\ C}\wedge A^C_{\ B}-A^C_{\ B}\wedge A^A_{\ C}=dA^A_{\ B}+2A^A_{\ C}\wedge A^C_{\ B}$ we can simply rescale $A^A_{\ B}\rightarrow \frac{1}{2}A^A_{\ B}$ to end up with the standard definition .
---
abstract: 'It is shown that the north-south teleconnections in the Northern Hemisphere, the North Atlantic (NAO), East Pacific (EPO) and Western Pacific (WPO) dipole oscillations and the Pacific/North American quadrupole pattern (PNA), are dominated by Hamiltonian distributed chaos on daily to intraseasonal time scales. Differences between the chaotic properties of the dipole and quadrupole oscillations, as well as their relation to the surface air temperature, are briefly discussed. A chaotic spectral affinity of the PNA quadrupole pattern to the Arctic Oscillation and the Greenland blocking phenomenon is considered in this context.'
author:
- 'A. Bershadskii'
title: 'Hamiltonian distributed chaos in the north-south dipole and quadrupole teleconnections'
---
Introduction
============
The four major north-south teleconnections, the North Atlantic (NAO), East Pacific (EPO) and Western Pacific (WPO) oscillations and the Pacific/North American pattern (PNA), reflect planetary-scale recurring patterns (atmospheric oscillations) of circulation and pressure anomalies over the Atlantic and Pacific oceans in the Northern Hemisphere (see, for instance, Refs. [@bl]-[@cfl] and references therein). While the NAO, EPO and WPO are north-south dipoles of anomalies spanning mainly the corresponding oceans, the Pacific/North American pattern (PNA) is a quadrupole that also includes the intermountain region of North America (i.e. land together with ocean) and is, therefore, a special case.
The decadal, annual and seasonal large-scale atmospheric oscillations over the Atlantic and Pacific oceans have attracted major attention from researchers, whereas the statistical properties of their fluctuations on daily to intraseasonal time scales are much less studied. Meanwhile, it is precisely on these time scales that the surface temperature dynamics, which is very important from a meteorological point of view, is dominated by Hamiltonian distributed chaos [@b1].\
The exponential frequency spectrum $$E(f) \propto \exp(-f/f_c) \eqno{(1)}$$ is often observed in systems with deterministic chaos [@fm]-[@b2]. A more complex stretched exponential spectrum is observed for Hamiltonian chaotic systems, $$E(f ) \propto \int_0^{\infty} P(f_c) \exp(-f/f_c)~ df_c \propto \exp[-(f/f_0)^{\beta}] \eqno{(2)}$$ with a distribution $P(f_c)$ of the characteristic frequency $f_c$, where $\beta = 1/2$ or $3/4$ depending on the boundary conditions [@b1].
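A concrete realization of Eq. (2) with $\beta=1/2$ (a standard mathematical fact, not a result of this paper): if the weight of the decay rates is a one-sided Lévy (Smirnov) density in the variable $s=1/f_c$, the superposition of pure exponentials integrates exactly to $\exp(-a\sqrt{f})$. A minimal numerical sketch (the parameter $a$ and the grids are arbitrary choices):

```python
import numpy as np

# Smirnov (one-sided Levy) density in s = 1/f_c; its Laplace transform is
# exp(-a*sqrt(f)), i.e. the beta = 1/2 stretched exponential of Eq. (2).
a = 2.0
s = np.logspace(-6, 4, 200_000)
rho = a/(2.0*np.sqrt(np.pi)) * s**-1.5 * np.exp(-a**2/(4.0*s))

def trap(y, t):
    # trapezoidal rule on a non-uniform grid
    return np.sum(0.5*(y[1:] + y[:-1])*np.diff(t))

freqs = np.array([0.5, 1.0, 2.0, 4.0])
E = np.array([trap(rho*np.exp(-f*s), s) for f in freqs])
err = np.max(np.abs(E - np.exp(-a*np.sqrt(freqs))))
```

The numerically integrated superposition agrees with the stretched exponential to better than $10^{-3}$ on this grid.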
Naturally, most models in climate theory are Hamiltonian dynamical systems (see Refs. [@gl]-[@gl2] and references therein). The NAO, EPO, WPO and PNA indices represent the four major patterns of geopotential height variability over the Atlantic and Pacific oceans. One can therefore expect the power spectra computed for these indices to have the form Eq. (2), characteristic of Hamiltonian distributed chaos, with $\beta =1/2$ or 3/4.
The North Atlantic oscillation
==============================
In the Northern Hemisphere much of the Atlantic ocean is covered by the North Atlantic Oscillation (NAO). The NAO index can be defined as the difference in normalized sea-level pressure anomalies between Southwest Iceland (a northern node) and the Azores (a southern node). This is the so-called station-based method of computing the NAO index (see, for instance, Refs. [@hur],[@j],[@gr] and references therein). There is also another method of computing the NAO index, which uses gridded climate datasets with empirical orthogonal function analysis - EOF (see, for instance, Refs. [@tw1],[@tw],[@fo] and references therein). There are also different modifications of the above-mentioned methods and, therefore, different versions of the NAO index (the same is true for other climate indices). All these methods have their strengths and weaknesses.\
The positive phase of the NAO corresponds to [*below*]{} normal pressure and heights across the high latitudes of the North Atlantic and [*above*]{} normal pressure and heights over the central North Atlantic, western Europe and the eastern United States. The negative phase corresponds to an opposite pattern of pressure and height anomalies.
A strong influence of the North Atlantic oscillation has been registered in the surface temperature dynamics from eastern North America to central Europe (and even to north-central Siberia), and from Greenland and Scandinavia to the Middle East [@hu].\
The daily NAO index computed by the station-based method - the difference in normalized sea-level pressure anomalies between Southwest Iceland and the Azores [@gr] - will be analysed for the 1878-2014 period as an example of the first (station-based) method. All other examples belong to the second method, with the NAO, EPO, WPO and PNA daily indices based on the dipole and quadrupole centers of action of the 500mb height oscillating patterns [@nao]-[@pna] (see Ref. [@kal] for the details of the NCEP-NCAR R1 reanalysis). To emphasize the large-scale properties of the teleconnections, the height fields were spectrally truncated in the computation of the indices (see Refs. [@nao]-[@pna] for more technical details). Additional information about the NAO and PNA indices and a different methodology for their computation can be found in Refs. [@nao2] and [@pna2].\
Figure 1 shows the power spectrum of the NAO daily index (station-based) computed for the period 1878-1932. The data for the computations were taken from the site [@zen]. The maximum entropy method with an optimal resolution [@oh] has been used for the computation. The straight line indicates correspondence to Eq. (2) with $\beta = 1/2$. Figure 2 shows the analogous spectrum for the period 1932-2014 [@bz].\
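For reference, the maximum entropy (all-poles) spectral estimate used throughout is conventionally obtained with Burg's algorithm. The following numpy-only sketch is our illustration (not the authors' code); the AR order and the synthetic test signal are arbitrary choices:

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: prediction-error filter coefficients a and residual power E."""
    x = np.asarray(x, dtype=float)
    N = x.size
    a = np.zeros(order)
    E = np.dot(x, x) / N
    f, b = x.copy(), x.copy()             # forward / backward prediction errors
    for m in range(order):
        fp, bp = f[m+1:], b[m:N-1]
        k = -2.0*np.dot(fp, bp) / (np.dot(fp, fp) + np.dot(bp, bp))
        f[m+1:], b[m+1:] = fp + k*bp, bp + k*fp
        a_prev = a[:m].copy()
        a[:m] = a_prev + k*a_prev[::-1]   # Levinson recursion update
        a[m] = k
        E *= 1.0 - k*k
    return a, E

def mem_psd(a, E, freqs):
    """All-poles (maximum entropy) spectrum at normalized frequencies in [0, 0.5]."""
    z = np.exp(-2j*np.pi*np.outer(freqs, np.arange(1, a.size + 1)))
    return E / np.abs(1.0 + z @ a)**2

# sanity check on a synthetic AR(2) signal (illustration only)
rng = np.random.default_rng(0)
e = rng.standard_normal(4000)
x = np.zeros_like(e)
for n in range(2, x.size):
    x[n] = 1.3*x[n-1] - 0.6*x[n-2] + e[n]
a, E = burg_ar(x, 2)   # coefficients expected near (-1.3, 0.6)
```

The all-poles form is what gives the method its high resolution on short records, at the price of an AR model order that has to be chosen (the "optimal resolution" of Ref. [@oh]).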
Now let us turn to the second method [@nao]. The North Atlantic oscillation is a combination of parts of the West Atlantic and East Atlantic patterns and represents a north-south dipole of anomalies: one center is located over Greenland and the other center (of opposite sign) covers the central latitudes of the North Atlantic between 35-45N (see Fig. 3). In the computation of the NAO index, the area-weighted mean 500-hPa geopotential height field of the region 55-70N, 70W-10W is subtracted from that of the region 35-45N, 70W-10W.
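Schematically, this index construction is a difference of two cosine-latitude-weighted box averages of the height anomaly field. A toy sketch (the grid and anomaly values are made up; the real computation uses the NCEP-NCAR reanalysis fields):

```python
import numpy as np

def box_mean(z, lats, lons, lat_rng, lon_rng):
    """Area-weighted (cos latitude) mean of field z over a lat/lon box."""
    la = (lats >= lat_rng[0]) & (lats <= lat_rng[1])
    lo = (lons >= lon_rng[0]) & (lons <= lon_rng[1])
    w = np.cos(np.deg2rad(lats[la]))[:, None] * np.ones(lo.sum())[None, :]
    return np.average(z[np.ix_(la, lo)], weights=w)

lats = np.arange(20.0, 75.1, 2.5)             # toy 2.5-degree grid
lons = np.arange(-70.0, -9.9, 2.5)            # 70W-10W as negative longitudes
z = np.zeros((lats.size, lons.size))
z[lats >= 55.0, :] = -50.0                    # negative anomaly over the Greenland node
z[(lats >= 35.0) & (lats <= 45.0), :] = 30.0  # positive anomaly over the southern node

# NAO-like index: southern-box average minus northern-box average
nao = box_mean(z, lats, lons, (35, 45), (-70, -10)) - \
      box_mean(z, lats, lons, (55, 70), (-70, -10))
```

With these made-up anomalies the index comes out positive, matching the sign convention of the text (below-normal heights over Greenland correspond to the positive NAO phase).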
The daily time series for the NAO index is available at the site of Ref. [@nao] for the 1948-2018 period. In order to remove the low-frequency annual and seasonal modes, a wavelet regression computed with the simplest Haar wavelet [@ogd] was subtracted from the daily time series.
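A minimal stand-in for this detrending step is pairwise Haar averaging down to a coarse level followed by subtraction of the reconstructed low-frequency part (the decomposition level and the synthetic test series are our choices, not those of the paper):

```python
import numpy as np

def haar_lowpass(x, level):
    """Coarse Haar approximation: average pairs `level` times, then hold-upsample back."""
    c = np.asarray(x, dtype=float)
    assert c.size % (1 << level) == 0
    for _ in range(level):
        c = 0.5*(c[0::2] + c[1::2])
    return np.repeat(c, 1 << level)

t = np.arange(1024)
trend = np.sin(2*np.pi*t/1024)       # slow "seasonal" component to be removed
fast = 0.5*np.sin(2*np.pi*t/8)       # short-period fluctuations of interest
x = trend + fast
detrended = x - haar_lowpass(x, 4)   # blocks of 16 samples
```

The residual is essentially the fast component alone: the block averages capture the slow mode while the short-period oscillation averages out.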
Figure 4 shows the power spectrum computed for the Haar-wavelet-detrended daily time series of the NAO index for the 1948-2018 period (the spectrum was computed by the maximum entropy method [@oh]). The straight line in the figure (the best fit) indicates the stretched exponential decay Eq. (2) with $\beta =1/2$. The fundamental period is $T_f \simeq 40$d (indicated by the vertical arrow in Fig. 4) and $T_0=1/f_0 \simeq 159$d.
East and West Pacific Oscillations
==================================
A north-south dipole of anomalies, similar to the NAO but located in the eastern Pacific, is known as the East Pacific Oscillation (EPO) [@bl]. The computation of this index is similar to that used for the NAO index, but in this case the area-averaged region 55-65N, 160W-125W is subtracted from the area-averaged region 20-35N, 160W-125W [@epo].
In the positive phase of the EPO, pressures, heights and temperatures are higher to the south and lower to the north, while the negative phase corresponds to the opposite pattern (cf. Fig. 5).
Figure 6 shows the power spectrum computed for the Haar-wavelet-detrended daily time series of the EPO index for the 1948-2018 period (the data were taken from the site of Ref. [@epo]). The straight line in the figure (the best fit) indicates the stretched exponential decay Eq. (2) with $\beta =1/2$. The fundamental (pumping) period is $T_f \simeq 40$d and $T_0=1/f_0 \simeq 159$d. The EPO Hamiltonian chaos characteristics are similar to those of the NAO (cf. Figs. 6 and 4).\
Another north-south dipole of anomalies is based on the regions 50-70N, 140E-150W and 25-40N, 140E-150W and is known as the Western Pacific Oscillation (WPO) [@bl],[@kal],[@wpo] (cf. Figures 5 and 7).
In the positive phase of the WPO, pressures, heights and temperatures are higher to the south and lower to the north, while the negative phase corresponds to the opposite pattern (cf. Fig. 7).
Figure 8 shows the power spectrum computed for the Haar-wavelet-detrended daily time series of the WPO index for the 1948-2018 period (the data were taken from the site of Ref. [@wpo]). The straight line in the figure (the best fit) indicates the stretched exponential decay Eq. (2) with $\beta =1/2$. The fundamental (pumping) period is $T_f \simeq 44$d and $T_0=1/f_0 \simeq 158$d.\
It should be noted that $T_0 \simeq 159$d is practically the same for all the north-south dipole teleconnections computed by the second method, and coincides with that of the wavelet-detrended global temperature fluctuations (land) [@b1].
The Pacific/North American pattern
===================================
The Pacific/North American pattern (PNA) is a quadrupole: \[(15-25N, 180-140W)-(40-50N, 180-140W)+(45-60N, 125W-105W)-(25-35N, 90W-70W)\] (cf. Fig. 9) [@pna]. It also differs from the previous cases in that it includes land (the intermountain region of North America) as one of its centers. This difference results in a difference in the spectrum, as one can see in Fig. 10, which shows the power spectrum computed for the Haar-wavelet-detrended daily time series of the PNA index for the 1948-2018 period (the data were taken from the site of Ref. [@pna]). The straight line in the figure (the best fit) indicates the stretched exponential decay Eq. (2) with $\beta =3/4$. The fundamental (pumping) period is $T_f \simeq 44$d and $T_0=1/f_0 \simeq 33$d.\
Figure 11 shows (for comparison) the power spectrum for the wavelet-regression-detrended daily time series of the surface air temperature fluctuations for Salt Lake City (see Ref. [@b1]). This geographical site belongs to the intermountain region of North America, which is one of the poles of the PNA quadrupole - Fig. 9 (cf. Figs. 11 and 10: $\beta = 3/4$ for both spectra).
PNA, Arctic Oscillation and Greenland blocking phenomenon
=========================================================
A chaotic spectral affinity of the PNA pattern to the Arctic oscillation is of special interest. Figure 12 shows the power spectrum of the [*raw*]{} PNA index (i.e. [*without*]{} removing the annual and seasonal modes; the daily time series were taken from the site of Ref. [@pna]). Figure 13 shows the analogous power spectrum for the raw daily time series of the Arctic Oscillation index (taken from the site of Ref. [@ao]). One can see a very strong similarity between the spectra shown in Fig. 12 and Fig. 13 in the appropriately chosen scales (corresponding to the Hamiltonian distributed chaos Eq. (2) with $\beta =3/4$ and $T_0=1/f_0 \simeq 41$d).\
Let us recall that the Arctic Oscillation (AO) is the primary annular mode of atmospheric circulation in the Northern Hemisphere [@tw] (see Ref. [@w] for a review). This pattern represents a pressure gradient between the polar and subpolar regions and is characterized by strong circulating winds around the Arctic’s perimeter. When the AO index is in its positive phase, these perimeter winds constrain colder air to the polar region. When the index is in its negative phase, the winds’ confinement is weaker and the colder air masses penetrate deeper into the subpolar and mid-latitude regions.\
The Greenland blocking phenomenon (a quasi-stationary large-scale pattern in the atmospheric pressure field - Fig. 14) is also interesting in this context. Greenland is physiographically a part of North America and penetrates deeply into the Arctic Circle. It is known that Greenland blocking is important for mid-latitude climate because it diverts the atmospheric polar jet streams southwards or northwards depending on its state (see the recent Ref. [@han2] and references therein).\
The Greenland Blocking Index (GBI) is defined as the 500mb geopotential height averaged over the area \[60-80N, 280-340E\] - Figure 14 (the daily averaged NCEP/NCAR reanalysis was used for the index computations; see Ref. [@gbi] for more details and Refs. [@han2],[@han] for a review).\
Figure 15 shows the power spectrum computed for the raw daily time series of the GBI index for the 1948-2015 period (the data were taken from the site of Ref. [@gbi]). The straight line in the figure (the best fit) indicates the stretched exponential decay Eq. (2) with $\beta =3/4$ and $T_0=1/f_0 \simeq 41$d (cf. Figs. 15, 13 and 12).\
Finally, it should be noted that the fundamental period $T_f$ for the north-south teleconnections lies between 40 and 44 days (the fundamental periods are indicated by the vertical arrows in Figs. 4, 6, 8 and 10), and that $T_0=1/f_0 \simeq 41$d for the spectra corresponding to the raw time series (Figs. 12, 13 and 15). For the Northern Hemisphere extratropics, atmospheric dynamics with periods of about 40 days is well known observationally (cf. Refs. [@b1],[@mag]-[@cun] and references therein).
Acknowledgement
===============
I acknowledge use of the data provided by the Zenodo database (CERN), the Climate Prediction Center and the Earth System Research Laboratory (NOAA, USA).
[99]{} A.G. Barnston and R.E. Livezey, Monthly Weather Review, [**115**]{}, 1083 (1987). E. Kalnay et al., Bull. Amer. Meteor. Soc., [**77**]{}, 437 (1996). J.W. Hurrell et al., in “The North Atlantic Oscillation: Climatic Significance and Environmental Impact”, Geophysical Monograph [**134**]{}, p.1, American Geophysical Union (2003). W.Y. Chen and H. van den Dool, Monthly Weather Review, [**131**]{}, 2885 (2003). C. Franzke, S.B. Feldstein and S. Lee, Q. J. R. Meteorol. Soc., [**137**]{}, 329 (2011). A. Bershadskii, arXiv:1806.01750 (2018). U. Frisch and R. Morf, Phys. Rev. A, [**23**]{}, 2673 (1981). J.D. Farmer, Physica D, [**4**]{}, 366 (1982). N. Ohtomo, K. Tokiwano, Y. Tanaka et al., J. Phys. Soc. Jpn., [**64**]{}, 1104 (1995). D.E. Sigeti, Phys. Rev. E, [**52**]{}, 2443 (1995). A. Bershadskii, EPL, [**88**]{}, 60004 (2009). A. Gluhovsky, Nonlinear Processes in Geophysics, [**13**]{}, 125 (2006). T.G. Shepherd, Encyclopedia of Atmospheric Sciences, J.R. Holton et al., Eds., 929 (Academic Press, 2003). P.J. Morrison, Hamiltonian Fluid Mechanics, Encyclopedia of Mathematical Physics, [**2**]{}, 593 (Elsevier, Amsterdam, 2006). A. Gluhovsky and K. Grady, Chaos, [**26**]{}, 023119 (2016). J.W. Hurrell, Science, [**269**]{}, 676 (1995). P.D. Jones, T. Jonsson and D. Wheeler, Int. J. Clim., [**17**]{}, 1433 (1997). T. Cropper et al., Geosci. Data J., [**2**]{}, 12 (2015). D.W.J. Thompson and J.M. Wallace, Geophys. Res. Lett., [**25**]{}, 1297 (1998). D.W.J. Thompson and J.M. Wallace, J. Clim., [**13**]{}, 1000 (2000). C.K. Folland et al., J. Clim., [**22**]{}, 1082 (2009). https://www.esrl.noaa.gov/psd/data/timeseries/daily/NAO https://www.esrl.noaa.gov/psd/data/timeseries/daily/EPO https://www.esrl.noaa.gov/psd/data/timeseries/daily/WPO https://www.esrl.noaa.gov/psd/data/timeseries/daily/PNA http://www.cpc.ncep.noaa.gov/products/precip/CWlink/pna/nao.shtml http://www.cpc.ncep.noaa.gov/products/precip/CWlink/pna/pna.shtml T. Cropper et al., http://doi.org/10.5281/zenodo.9979 A. Bershadskii, http://doi.org/10.5281/zenodo.1323564 T. Ogden, Essential Wavelets for Statistical Applications and Data Analysis (Birkhauser, Basel, 1997). ftp://ftp.cpc.ncep.noaa.gov/cwlinks/ J.M. Wallace, On the role of the Arctic and Antarctic oscillations in polar climate, ECMWF Seminar on Polar Meteorology (2006). Available at the site: https://www.ecmwf.int E. Hanna et al., Int. J. Climatol., doi:10.1002/joc.5516 (2018). https://www.esrl.noaa.gov/psd/data/timeseries/daily/GBI/ E. Hanna et al., Int. J. Climatol., [**33**]{}, 862 (2013). V. Magana, J. Geophys. Res., [**98**]{}, 10441 (1993). S.L. Marcus, M. Ghil and J.O. Dickey, J. Atmos. Sci., [**51**]{}, 1431 (1994). A.W. Robertson and C.R. Mechoso, Monthly Weather Review, [**131**]{}, 1566 (2003). Ch.A.C. Cunningham and I.F. De Albuquerque Cavalcanti, Int. J. Climatology, [**26**]{}, 1165 (2006).
---
abstract: 'A simple approximation which captures some non-perturbative aspects of the one electron Green function of strongly interacting Fermion systems is developed. It provides a way to go one step beyond the usual dilute limit since particle-particle as well as particle-hole scattering are treated on the same footing. Intermediate states are constrained to contain only one particle-hole excitation besides the incoming particle. The Faddeev equations resulting from an exact treatment of this three-body problem are investigated. In one dimension the method is able to show spin and charge decoupling, but does not reproduce the exact nature of power-law singularities.'
author:
- 'Theodore C. Hsu${}^{(1),(2)}$[@byline] and Benoît Douçot${}^{(2)}$[@dbyline]'
---
Six postscript figures are appended at the end. Each is preceded by a line like —- Fig. x cut here —– and ends with “showpage”.
Effect of three-particle correlations\
in low dimensional Hubbard models
${}^{(1)}$AECL Research, Chalk River Laboratories, Chalk River, Ontario, Canada K0J 1J0
${}^{(2)}$Centre des Recherches sur les Très Basses Températures,\
BP 166, 38042 Grenoble, France
Introduction
============
Since the discovery of high temperature superconductors, much work from both the experimental and theoretical sides has suggested that Fermi-liquid theory does not provide a good description of the remarkable normal state properties of these materials. In particular, Anderson [@ANDERSONA] has proposed that a better framework is to be found in some two dimensional generalization of the Luttinger liquid, which has been identified by Haldane [@HALDANE] as a low energy effective theory for many interacting-Fermion systems in one dimension. For example, recent measurements of the temperature dependence of the Hall angle in high $T_{c}$ cuprates have been interpreted along this line [@HALL]. However, it has not yet been possible to derive such a picture from a first principles calculation. A suggestive investigation of two-particle scattering in two dimensions in the presence of the Fermi sea has led Anderson to claim that a finite phase shift and a singular interaction function $f_{k,k^{\prime}}$ exist even in the dilute limit [@ANDERSONB]. But if the same particle-particle ladder is used in a more conventional many-body calculation, it has been shown by Engelbrecht and Randeria [@RANDERIA] and Fukuyama [*et al.*]{} [@FUKUYAMA] that the imaginary part of the self-energy is not singular enough in this T-matrix approximation to avoid Fermi-liquid behavior. In fact, the two approaches may not contradict each other. Anderson suggests that an extra particle added to the system induces a shift of all the momenta of the particles already present, in a way similar to the deep level in the X-ray edge problem. The present paper is an attempt to connect this physical picture to some microscopic perturbative calculations for the repulsive Hubbard model.
In our approach, we have been guided by the idea already stressed in the work by Abrikosov on the Kondo problem [@ABRIKOSOV], and Nozières [*et al.*]{} on the X-ray edge problem [@NOZIERES1], namely that both particle and hole scattering channels with the local impurity lead to similar singularities which may cancel partially. It is then crucial to treat both in an unbiased way. This is for instance a key feature of the parquet summation, which was developed in references [@ABRIKOSOV] and [@NOZIERES1]. However, this method is not very easy to implement for many body systems, where the angular dependence of interaction vertices is in general not described by a small set of relevant parameters (except in one dimension). A simplification occurs if we work in a truncated Hilbert space with only one particle-hole excitation on top of a local potential (X-ray edge problem), a spin (Kondo problem), or an added electron (Hubbard model). This approach is variational in spirit and amounts to treating, exactly, three-particle correlations amongst the particle-hole pair and the analogue of a local scatterer. It has been used in the context of Kondo problems to study various properties [@YOSIDA; @VARMA1], and for the Hubbard model in the special case of a single overturned spin [@IGARASHI]. It has even been suggested that three-particle correlations between a spin flip and two holes may provide a way to reduce the ground state magnetization away from its maximal value in the infinite-U Hubbard model at any density [@ANDREI]. Three-body correlations are also investigated as a possible microscopic mechanism leading to a marginal Fermi-liquid [@VARMA2].
In this paper then, we shall concentrate on the effect of three-particle correlations on the self-energy of an electron. We consider a variational state which consists of one electron plus at most one extra particle-hole pair excitation on top of an otherwise rigid Fermi sea. The scattering in the particle-particle and particle-hole channels are treated on an equal footing. In this sense we attempt to go beyond references [@RANDERIA] and [@FUKUYAMA] who considered only two-particle scattering. We shall be interested in seeing whether the Fermi-liquid starting point is valid or not. Exact solutions are available for the Luttinger model and the X-ray edge problem and we have decided to compare the electron’s Green function in the three-body approximation to that of the exact solution for these systems. In section 2 we shall discuss the formalism leading to the Faddeev integral equations [@FADDEEV]. We have re-formulated these in order to yield directly the self-energy. Sections 3 and 4 will describe results for the one and two branch Luttinger models respectively and section 5 will describe results for the X-ray problem.
Faddeev Equations
=================
We begin with the definition of the electron Green function, $$G({\bf q},\omega) = \langle N\mid
c_{{\bf q}\uparrow} [\omega - H +i\delta]^{-1}
c^{\dagger}_{{\bf q}\uparrow} +
c^{\dagger}_{{\bf q}\uparrow} [\omega + H -i\delta]^{-1}
c_{{\bf q}\uparrow}\mid N\rangle$$ where the Hamiltonian is $$H = \sum_{{\bf k}} \epsilon_{{\bf k}}
c^{\dagger}_{{\bf k}\sigma}
c_{{\bf k}\sigma}
+ {U\over N_{s}}\sum_{{\bf k},{\bf k}^{\prime},{\bf p}}
c^{\dagger}_{{\bf k}+{\bf p}\uparrow}
c_{{\bf k}\uparrow}
c^{\dagger}_{{\bf k}^{\prime}-{\bf p}\downarrow}
c_{{\bf k}^{\prime}\downarrow}
- E_{0}
\quad .
\label{HAMILTONIAN}$$ We shall consider a variational state where $\mid N\rangle$ is the non-interacting Fermi sea and the Hamiltonian is allowed to create at most one excited particle-hole pair. The corresponding diagrams contributing to the Green function are shown in Fig. \[DIAGRAM\]. We take $q > k_{F}$ and $E_{0}$ is chosen so that $\langle N\mid H\mid N\rangle = 0$. In this case, defining $$\phi\equiv
\langle N\mid c_{{\bf q}\uparrow} [\omega - H +i\delta]^{-1}
c^{\dagger}_{{\bf q}\uparrow}\mid N\rangle$$ and $$\Phi({\bf k},{\bf k}^{\prime})\equiv
\langle N\mid c_{{\bf q}\uparrow} [\omega - H +i\delta]^{-1}
c^{\dagger}_{{\bf k}\uparrow}
c^{\dagger}_{{\bf k}^{\prime}\downarrow}
c_{{\bf k}+{\bf k}^{\prime}-{\bf q}\downarrow}\mid N\rangle
\quad ,$$ the equations of motion of these Green functions truncating terms involving more than three particles are $$[\tilde{\omega} - \epsilon_{{\bf q}}]\phi
- {U\over N_{s}}\sum_{{\bf k},{\bf k}^{\prime}}
\Phi({\bf k},{\bf k}^{\prime})
= 1
\label{MOTIONA}$$ and $$[\tilde{\omega} - \epsilon({\bf k},{\bf k}^{\prime})]
\Phi({\bf k},{\bf k}^{\prime}) - {U\over N_{s}}\phi
-{U\over N_{s}}\sum_{{\bf k}^{\prime\prime}}
\Phi({\bf k}^{\prime\prime},{\bf k}+
{\bf k}^{\prime}-{\bf k}^{\prime\prime})
+{U\over N_{s}}\sum_{{\bf k}^{\prime\prime}}
\Phi({\bf k}^{\prime\prime},{\bf k}^{\prime})
= 0
\label{MOTIONB}$$ where $\tilde{\omega} = \omega - U(N_{\downarrow}/N_{s})$ and $\epsilon({\bf k},{\bf k}^{\prime}) = \epsilon_{{\bf k}}
+ \epsilon_{{\bf k}^{\prime}}
- \epsilon_{{\bf k} + {\bf k}^{\prime} - {\bf q}}$. The momentum restrictions in the sum are such that the arguments of $\Phi ({\bf k}_{1},{\bf k}_{2})$ always satisfy ${\bf k}_{1}\neq {\bf q}$; $k_{1},k_{2} > k_{F}$ and $|{\bf k}_{1}+{\bf k}_{2}-{\bf q}| < k_{F}$.
We shall find it convenient to define the following functions: $$J_{1}({\bf K}) = N_{s}^{-1}\sum_{{\bf k}^{\prime}}
\Phi({\bf k}^{\prime},{\bf K}-{\bf k}^{\prime})$$ and $$J_{2}({\bf K}) = N_{s}^{-1}\sum_{{\bf k}^{\prime}}
\Phi({\bf k}^{\prime},{\bf K})$$ which correspond roughly to T-matrices for the particle-particle and particle-hole scattering channels respectively. From the equations of motion for $\phi$ we may see that $J_{1}$ is related to the self-energy by $$\Sigma({\bf q},{\tilde\omega}) = U^{2}\sum_{{\bf K}}J_{1}({\bf K})
\quad .$$
From the equation of motion for $\Phi$ we see that $J_{1}$ and $J_{2}$ satisfy the coupled integral equations $$J_{1}({\bf K}) =
N_{s}^{-1}
{{F_{1}({\bf K})}\over{1 - UF_{1}({\bf K})}}
- {U\over N_{s}}{{1}\over{1 - UF_{1}({\bf K})}}
\sum_{{\bf k}^{\prime}}
{
{
J_{2}({\bf K}-{\bf k}^{\prime})
}
\over
{
{\tilde\omega} - \epsilon({\bf k}^{\prime},
{\bf K}-{\bf k}^{\prime})
}
}$$ and $$J_{2}({\bf K}) =
N_{s}^{-1}
{{F_{2}({\bf K})}\over{1 + UF_{2}({\bf K})}}
+ {U\over N_{s}}{{1}\over{1 + UF_{1}({\bf K})}}
\sum_{{\bf k}^{\prime}}
{
{
J_{1}({\bf K}+{\bf k}^{\prime})
}
\over
{
{\tilde\omega} - \epsilon({\bf k}^{\prime},{\bf K})
}
}
\quad .$$ In the summations the same momentum restrictions as the equations of motion Eqs. (\[MOTIONA\]) and (\[MOTIONB\]) are in effect. $F_{1}$ and $F_{2}$ are defined by $$F_{1}({\bf K}) = N_{s}^{-1}\sum_{{\bf k}^{\prime}}
{
{1}
\over
{{\tilde\omega}-\epsilon({\bf k}^{\prime},
{\bf K}-{\bf k}^{\prime})}
}$$ and $$F_{2}({\bf K}) = N_{s}^{-1}\sum_{{\bf k}^{\prime}}
{
{1}
\over
{{\tilde\omega}-\epsilon({\bf k}^{\prime},{\bf K})}
}
\quad .$$
The integral equations can be combined into a single one for $J_{1}$ which is the final equation to be solved. $$\begin{aligned}
&J_{1}({\bf K}) =
{1\over N_{s}^{2}}{{1}\over{1 - UF_{1}({\bf K})}}
\sum_{{\bf k}^{\prime}}
{
{
1
}
\over
{
({\tilde\omega} - \epsilon({\bf k}^{\prime},
{\bf K}-{\bf k}^{\prime}))
(1 + UF_{2}({\bf K}-{\bf k}^{\prime}))
}
}
\nonumber
\\
&- {U^{2}\over N_{s}^{2}}{{1}\over{1 - UF_{1}({\bf K})}}
\sum_{{\bf k}^{\prime\prime}}\sum_{{\bf k}^{\prime}}
{
{
J_{1}({\bf k}^{\prime})
}
\over
{
({\tilde\omega} -
\epsilon({\bf k}^{\prime\prime},
{\bf K}-{\bf k}^{\prime\prime}))
(1 + UF_{2}({\bf K}-{\bf k}^{\prime\prime}))
({\tilde\omega} -
\epsilon({\bf k}^{\prime}-{\bf K}
+{\bf k}^{\prime\prime},
{\bf K}-{\bf k}^{\prime\prime}))
}
}
\quad .\end{aligned}$$
The technique of truncating the Green function equations of motion at the four-point level and finding closed equations for them is not limited to well-defined particles and is not new. It has been applied [@IRKHIN] to Green functions for the Hubbard ‘X’ operators [@HUBBARD].
One–branch Luttinger model
==========================
In this section we present the exact analytical solution of the integral equation for a one branch Luttinger model (i.e. with right movers only). The total momentum of the three particles is $q>k_{F}$ and the energy dispersion is defined by $\epsilon_{k} = v_{F}(k-k_{F})$. The simplification which allows an analytical solution is the fact that $\epsilon(k,k^{\prime})
\equiv \epsilon_{k} + \epsilon_{k^{\prime}}
- \epsilon_{k+k^{\prime}-q}
= v_{F}(q-k_{F})$. Thus we have $$F_{1}(K) =
{
{K-2k_{F}}
\over
{{\tilde\omega}-v_{F}(q-k_{F})}
}$$ and $$F_{2}(K) =
{
{q-K}
\over
{{\tilde\omega}-v_{F}(q-k_{F})}
}
\quad .$$
Defining ${\tilde q}\equiv q-k_{F}$, $$\begin{aligned}
j_{1}(k) \equiv& ({\tilde\omega} - v_{F}{\tilde q})^{-1}J_{1}(k)\\
f_{1}(k) \equiv& ({\tilde\omega} - v_{F}{\tilde q})^{-1}
[1 - UF_{1}(k)]^{-1}\\
f_{2}(k-k^{\prime\prime}) \equiv& ({\tilde\omega} - v_{F}{\tilde q})^{-1}
[1 + UF_{2}(k-k^{\prime\prime})]^{-1}\end{aligned}$$ we absorb a factor of $N_{s}$ into $J_{1}$ and go to the continuum obtaining $$j_{1}(k) = f_{1}(k)\int_{k_{F}}^{k-k_{F}}dk^{\prime\prime}
f_{2}(k-k^{\prime\prime})
\left[
1 - U^{2}\int_{k-k^{\prime\prime}+k_{F}}^{q+k_{F}}
dk^{\prime}j_{1}(k^{\prime})
\right]
\quad .$$
There is, in fact, no small parameter in this integral equation, even when U is small. Thus it cannot be solved perturbatively. This can be demonstrated explicitly by defining $j(k)\equiv {\tilde q}U^{2}j_{1}(k)$ and $x\equiv ({\tilde\omega} - v_{F}{\tilde q})/U{\tilde q}$. Rescaling all momenta with respect to ${\tilde q}$ and setting $k_{F}=0$ we have $$j(k) = {1\over{x-k}}\int_{0}^{k}
dk^{\prime\prime}
{{1}\over{x+1-k+k^{\prime\prime}}}
\left[
1 - \int_{k-k^{\prime\prime}}^{1}
dk^{\prime}
j(k^{\prime})
\right]
\quad .$$
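For real $x$ outside the interval $(0,1)$ the equation is nevertheless regular and can be solved by straightforward fixed-point iteration. Carrying out the inner $k^{\prime\prime}$ integration first (substituting $u=k-k^{\prime\prime}$) gives $j(k)=(x-k)^{-1}\int_0^k du\,[1-\int_u^1 j(k^{\prime})dk^{\prime}]/(x+1-u)$. The following numerical sketch (our consistency check, with arbitrary grid size and $x=3$) reproduces $\int_0^1 j\,dk={\cal A}/(1+{\cal A})$, which follows from the boundary condition $\int_0^1 j\,dk=\Sigma(x)/xU{\tilde q}$ and the closed-form self-energy derived at the end of this section:

```python
import numpy as np

x = 3.0                      # real x outside the band (0, 1): all quantities real
n = 2001
k = np.linspace(0.0, 1.0, n)

def cumtrap(y):
    # cumulative trapezoidal integral from 0 to k on the global grid
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5*(y[1:] + y[:-1])*np.diff(k))
    return out

j = np.zeros_like(k)
for _ in range(200):
    cum = cumtrap(j)
    S = cum[-1] - cum                                 # S(u) = int_u^1 j(k') dk'
    j = cumtrap((1.0 - S)/(x + 1.0 - k)) / (x - k)    # fixed-point update

I_num = cumtrap(j)[-1]                                # int_0^1 j dk
A = x**2*np.log(x**2/(x**2 - 1.0)) - 1.0              # cal-A of the text (real for x > 1)
I_closed = A/(1.0 + A)
```

The iteration converges geometrically (the kernel is a contraction here), and the integrated solution agrees with the closed form to the accuracy of the trapezoidal grid.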
Changing the order of integration allows us to perform one of the nested integrals. Using $$\int_{0}^{k}dk^{\prime\prime}
\int_{k-k^{\prime\prime}}^{1}dk^{\prime}
=
\int_{0}^{k}dk^{\prime}
\int_{k-k^{\prime}}^{k}dk^{\prime\prime}
+
\int_{k}^{1}dk^{\prime}
\int_{0}^{k}dk^{\prime\prime}$$ we find $$\begin{aligned}
&j(k) = {1\over{x-k}}\times \nonumber \\
&\left[
\ln
\left(
{{x+1}\over{x+1-k}}
\right)
-
\int_{0}^{k}dk^{\prime}j(k^{\prime})
\ln
\left(
{{x+1}\over{x+1-k^{\prime}}}
\right)
-
\int_{k}^{1}dk^{\prime}j(k^{\prime})
\ln
\left(
{{x+1}\over{x+1-k}}
\right)
\right]
\quad .\end{aligned}$$
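The interchange of the order of integration used here is easy to verify numerically for generic stand-in integrands (a throwaway check; $f$ and $h$ below play the roles of $j(k^{\prime})$ and of the $k^{\prime\prime}$-dependent kernel, and are otherwise arbitrary):

```python
import numpy as np

kk = 0.7                                  # an arbitrary k in (0, 1)
f = lambda t: np.exp(-t)                  # stand-in for j(k')
h = lambda t: 1.0/(1.5 + t)               # stand-in for the k''-dependent kernel

def trap(y, t):
    return np.sum(0.5*(y[1:] + y[:-1])*np.diff(t))

def integral(g, a, b, n=1601):
    t = np.linspace(a, b, n)
    return trap(g(t), t)

# left-hand side: int_0^k dk'' h(k'') int_{k-k''}^1 dk' f(k')
tpp = np.linspace(0.0, kk, 1601)
inner = np.array([integral(f, kk - s, 1.0) for s in tpp])
lhs = trap(h(tpp)*inner, tpp)

# right-hand side after the interchange:
# int_0^k dk' f(k') int_{k-k'}^k dk'' h(k'') + int_k^1 dk' f(k') int_0^k dk'' h(k'')
tp = np.linspace(0.0, kk, 1601)
in1 = np.array([integral(h, kk - s, kk) for s in tp])
rhs = trap(f(tp)*in1, tp) + integral(f, kk, 1.0)*integral(h, 0.0, kk)
```

The two sides agree to quadrature accuracy, confirming the decomposition of the triangular region used in the text.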
The solution proceeds by conversion into a differential equation. Multiplying by $x-k$ and differentiating with respect to $k$ we have $${d\over{dk}}
\left[
(x-k)j(k)
\right]
=
{{1}\over{x+1-k}}
-
{{1}\over{x+1-k}}
\int_{k}^{1}dk^{\prime} j(k^{\prime})
\quad .$$ Multiplying by $x+1-k$ and differentiating, $${d\over{dk}}
\left[
(x+1-k)
((x-k)j^{\prime}(k)
-
j(k))
\right]
=
j(k)
\quad .
\label{DIFF}$$ This can be integrated because the integral equation gives us the two boundary conditions $$j(0)=0$$ and $$x(x+1)j^{\prime}(0)
=
1 - \int_{0}^{1}dk j(k)
\equiv
1 - (\Sigma(x)/xU{\tilde q})
\quad .$$ Integration of Eq. (\[DIFF\]) results in an equation for $j(k)$ in terms of $x$,$k$, and $j^{\prime}(0)$. Integrating over $k$ from $k=0$ to $k=1$ eliminates $j(k)$ and $k$ and results in an equation for the self-energy in terms of $x$ whose solution is (adding in the imaginary parts now) $$\Sigma(x) =
xU{\tilde q}{\cal A}/(1 + {\cal A})$$ where $${\cal A} =
x^{2}
\ln
\left|
{{x^{2}}
\over
{1-x^{2}}}
\right|
-1
-i\pi x|x|\theta(x+1)\theta(1-x)
\quad .$$ Substituting back into the electron Green function gives the very simple result $$G(x) = x
\ln
\left|
{{x^{2}}
\over
{1-x^{2}}}
\right|
-
i\pi |x|\theta(x+1)\theta(1-x)
\label{SOLUTIONA}$$ where $\theta(x)$ is the usual step function. Re-inserting the original units, $${\rm Im} G({\tilde\omega},{\tilde q}) =
{{\pi}\over{U{\tilde q}}}
\left|
{
{{\tilde\omega}-v_{F}{\tilde q}}
\over
{U{\tilde q}}
}
\right|
,\quad
v_{F}{\tilde q} - U{\tilde q}
< {\tilde\omega} <
v_{F}{\tilde q} + U{\tilde q}
\label{SOLUTIONB}$$ The exact Green function is [@VOIT] $${\rm Im} G({\tilde\omega},{\tilde q}) =
\left[
(U{\tilde q})^{2} - ({\tilde\omega}-v_{F}{\tilde q})^{2}
\right]^{-1/2}
,\quad
v_{F}{\tilde q} - U{\tilde q}
< {\tilde\omega} <
v_{F}{\tilde q} + U{\tilde q}
\label{ONEB_EXACT}$$
Our result is compared to the exact result in Fig. \[ONEBRANCH\]. The spectral weight from the three-body approximation is non-zero over the same energy range as the exact solution and has roughly the same shape. Although the two have maxima in the same place, the three-body approximation is not able to reproduce the square root singularities of the exact solution. Nevertheless spin-charge separation is observed. The charge velocity $v_{c} = v_{F} + U$ and the spin velocity $v_{s} = v_{F} - U$ agree with the exact result. From the integral equations one can see that the ‘charge’ peak comes from $[1 - UF_{1}]^{-1}$ and the ‘spin’ peak comes from $[1 + UF_{2}]^{-1}$.
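As an illustrative cross-check (ours, not part of the original calculation), the two spectral functions can be compared in the dimensionless variable $x = ({\tilde\omega}-v_{F}{\tilde q})/U{\tilde q}$, writing the exact result in units of $1/U{\tilde q}$. A minimal Python sketch:

```python
import math

def im_g_three_body(x):
    # Three-body result, Eq. (SOLUTIONB): weight proportional to pi*|x| on |x| < 1
    return math.pi * abs(x) if abs(x) < 1.0 else 0.0

def im_g_exact(x):
    # Exact result, Eq. (ONEB_EXACT), in units of 1/(U*q): (1 - x^2)^(-1/2) on |x| < 1
    return (1.0 - x * x) ** -0.5 if abs(x) < 1.0 else 0.0

for k in range(-2000, 2001):
    x = k / 1000.0
    # both solutions are non-zero over exactly the same energy range ...
    assert (im_g_three_body(x) > 0) == (0.0 < abs(x) < 1.0)
    assert (im_g_exact(x) > 0) == (abs(x) < 1.0)

# ... but only the exact solution has the square-root edge singularities
assert im_g_three_body(0.999) < math.pi < im_g_exact(0.999)
```

Both functions share the support $|x| < 1$; the divergence at the edges appears only in the exact result, as stated above.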
It should be remarked that the one-branch model has the same ground state as non-interacting Fermions. Thus spin-charge separation can co-exist with a Fermi liquid. The range over which Im G is non-zero is proportional to $q-k_{F}$, and this goes to zero as $q\rightarrow k_{F}$. Thus the three-body approximation is self-consistent with the assumption of a rigid Fermi surface background. This can be understood by noting that it is impossible to create particle-hole excitations from the ground state while conserving momentum. It is this feature of the one-branch model which renders the three-body approximation tractable. As shown below, this phenomenon does not occur in the two-branch model.
Finally we would like to emphasize the non-perturbative nature of the integral equation. This feature may help in shedding light on whether or not the two-dimensional Hubbard model has a Fermi-liquid ground state. Initial calculations indicate that $[1 - UF_{1}]^{-1}$ and $[1 + UF_{2}]^{-1}$ are less singular in two dimensions and almost certainly will not lead to singular behaviour on their own. We hope to discuss this in detail in a future publication.
Two branch Luttinger model
==========================
We consider a spinless Luttinger model for which only the interaction between opposite branches is retained. Again we begin with a particle on the right moving branch but allow the creation of a single particle-hole pair on the left moving branch. In this case the integral equation is (absorbing a factor of $N_{s}$ into $J_{1}$), $$\begin{aligned}
J_{1}(K) =
{{1}\over{1 - UF_{1}(K)}}
\sum_{k^{\prime\prime} = K+k_{F}}^{D}
{
{
1
}
\over
{
\left(
{\tilde\omega}
- \epsilon(k^{\prime\prime},K-k^{\prime\prime})
\right)
\left(
1 + UF_{2}(K-k^{\prime\prime})
\right)
}
}
\nonumber
\\
- {{U^{2}}\over{1 - UF_{1}(K)}}
\left[
\sum_{k^{\prime} = q-k_{F}}^{K}
\sum_{k^{\prime\prime} = K+k_{F}}^{D}
+
\sum_{k^{\prime} = K}^{D-k_{F}}
\sum_{k^{\prime\prime} = K+k_{F}}^{D+K-k^{\prime}}
\right]
\nonumber \\
\times
{
{
J_{1}(k^{\prime})
}
\over
{
({\tilde\omega} -
\epsilon(k^{\prime\prime},K-k^{\prime\prime}))
(1 + UF_{2}(K-k^{\prime\prime}))
({\tilde\omega} -
\epsilon(k^{\prime}-K+k^{\prime\prime},
K-k^{\prime\prime}))
}
}\end{aligned}$$ where the kinetic energy $\epsilon(k^{\prime},k-k^{\prime})
= (2k^{\prime} - q - k_{F})v_{F}$ (It doesn’t depend on $k$ because the total momentum must sum to $q$.), $$F_{1}(k) = \sum_{k^{\prime} = k+k_{F}}^{D} [{\tilde\omega}-
\epsilon(k^{\prime},k-k^{\prime})]^{-1}$$ and $$F_{2}(k) = \int_{k^{\prime} = -k+{\tilde q}}^{D} [{\tilde\omega}-
\epsilon(k^{\prime},k)]^{-1}\quad .$$ D is a momentum cutoff.
We were not able to solve this analytically, mostly because of the intractability of integrals of the form $\int dz{\rm ln}(z)/(z+a)$, and therefore attempted a numerical solution by discretization. In order to control the divergences of the free particle poles we introduced artificial widths. That is, we replaced $${1\over{{\tilde\omega}-{\tilde q}}}\rightarrow
{1\over{{\tilde\omega}-{\tilde q}+i\delta}}$$ We checked that our results did not depend significantly on the value of $\delta$. The method of numerical solution was checked against the analytical solution, Eq. (\[SOLUTIONA\]) for the one-branch model, to confirm its accuracy.
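The role of the artificial width can be illustrated with a minimal sketch (an illustration we add here, not the authors' code): the broadened pole is a Lorentzian of fixed spectral weight, so a discretized frequency integral of ${\rm Im}\,[{\tilde\omega}-{\tilde q}+i\delta]^{-1}$ should approach $-\pi$ and be insensitive to the precise value of $\delta$:

```python
import math

def reg_pole(w, q, delta):
    # regularized free-particle pole: 1/(w - q) -> 1/(w - q + i*delta)
    return 1.0 / complex(w - q, delta)

def smeared_weight(w, qmin, qmax, delta, n=200000):
    # midpoint-rule integral of Im[1/(w - q + i*delta)] over q;
    # the broadened pole is a Lorentzian of weight -pi, independent of delta
    dq = (qmax - qmin) / n
    return sum(reg_pole(w, qmin + (j + 0.5) * dq, delta).imag for j in range(n)) * dq

w1 = smeared_weight(0.0, -5.0, 5.0, 1e-2)
w2 = smeared_weight(0.0, -5.0, 5.0, 1e-3)
# the integrated result should not depend significantly on delta
assert abs(w1 - w2) < 0.05 and abs(w1 + math.pi) < 0.05
```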
The exact Green function for this problem is [@VOIT] $$\begin{aligned}
&{\rm Im} G({\tilde q},{\tilde\omega}) =
-
{
\pi
\over
{2D^{\prime}v\Gamma(a)^{2}}
}
\nonumber\\
&\times
\left[
\theta({\tilde\omega}-v{\tilde q})
\gamma(a,y_{+})
e^{-y_{-}}
y_{-}^{a-1}
+
\theta({-\tilde\omega}-v{\tilde q})
\gamma(a,-y_{+})
e^{y_{-}}
(-y_{-})^{a-1}
\right]\end{aligned}$$ where $y_{\pm} = ({\tilde\omega}\pm v{\tilde q})/2D^{\prime}v$, $v = \sqrt{v_{F}^{2}-U^{2}}$ and $$a = {1\over 4}(\sqrt{{v_{F}-U}\over{v_{F}+U}}
+
\sqrt{{v_{F}+U}\over{v_{F}-U}}
-
2)
\quad .$$ The parameter $D^{\prime}$ is a momentum cutoff. In a perturbative expansion of the integral equations of the three-body approximation the parameter $({\tilde\omega} - v_{F}{\tilde q})/2Dv_{F}$ appears. In order to compare the three-body results to the exact results we need definite values and it seems reasonable to take $D^{\prime}=D$ for this purpose so that the three-body solution and the exact solution have the same dimensionless parameter $({\tilde\omega} - v_{F}{\tilde q})/2Dv_{F}$. ${\rm Im} G({\tilde q},{\tilde\omega})$ (for the exact and three-body solution) as a function of energy with fixed total momentum are superimposed in Fig. \[TWOBRANCH\]. Energies are normalized by ${\tilde q}v_{F}$. We chose a value $U/v_{F} = 0.3$ which is large enough so that the effects of $U$ are apparent but not so large that the integral equations do not converge. The cutoff is $(D-k_{F})/{\tilde q} = 3$ and is not a sensitive parameter.
Some differences are immediately evident. First, the exact solution has spectral weight at ${\tilde\omega} < -v{\tilde q}$ (not shown in the figure). The three-body approximation misses this. In order to describe this part of the spectrum we would have to include pre-existing particle-hole pair excitations in the N-particle ground state, whereas the three-body approximation assumes that the background is a rigid Fermi sea. A background of pre-excited particle-hole pairs would require consideration of a minimum of five bodies. The second difference is that the spectral weight of the exact solution is extremely concentrated just above the maximum of the spectral function. That is because, for small U, Im G diverges with a negative exponent only slightly greater than $-1$. The exact solution displays an interesting characteristic: for energies $|{\tilde\omega}| < v{\tilde q}$ the spectral weight is exactly zero. We do not know how to explain this in terms of the underlying electrons. This feature is not present in the three-body approximation. Nevertheless the spectral function in that case is asymmetric. In Fig. \[TWOBRANCH\] the reflection of the spectral function about its maximum is plotted to bring out the asymmetry. We see that the asymmetry is in the right direction; that is, the spectral weight is higher for energies above the maximum. By examining the numerical solution of the integral equations one can tell that a pole in $[1 + UF_{2}]^{-1}$ (the particle-hole scattering channel) is the cause of this asymmetry.
Another feature which is reproduced qualitatively is the negative energy shift of the peak position (real part of the self-energy). In the exact solution this manifests itself in a downward renormalization of the Fermi velocity $v = \sqrt{v_{F}^{2} - U^{2}}$. In Fig. \[SHIFT\] the shifts in the maxima of Im G as a function of U are plotted. It is interesting to note that the three-body approximation follows the square root behaviour of the exact solution quite accurately except for a factor of two.
To summarize, the three-body approach reproduces qualitatively the real part of the self-energy and the asymmetry of the spectral function. Although it is not shown in Fig. \[TWOBRANCH\], in the three-body approximation the spectral weight of an electron with momentum $q>k_{F}$ stretches into negative energy, with a width proportional to $q-k_{F}$. However, the integrated spectral weight for negative energies seems to be independent of $q-k_{F}$ at small $q-k_{F}$ and amounts to a few percent of the total spectral weight (for the given choice of parameters): comparable to that of the exact solution. This flow of spectral weight can be interpreted by saying that the three-body approximation is inducing some correlations which tend to improve the trial ground state. This confirms the idea that correcting the N-particle ground state by coherent particle-hole pair excitations would yield a Green function more like the exact solution.
X-ray problem
=============
As discussed in the introduction, our goal was to set up an approximation scheme which treats particle-particle and particle-hole interactions on the same footing. Formally, this idea was first discussed in condensed matter physics for the Kondo problem [@ABRIKOSOV], and for the simpler X-ray edge problem [@NOZIERES1]. In this section, we present the results of our simple three-body type of approach for the X-ray edge problem. At this point, it would be helpful to characterize better the expected limitations of our method. The quantity of interest here is the overlap between the wave function of the conduction electron system, after the sudden turning on of a localized potential, and the unperturbed Fermi sea. Going beyond the first calculation by Mahan [@MAHAN], Nozières and coworkers have shown that the deep level Green’s function defined as above decays as a power-law function of time [@NOZIERES2], with an exponent proportional to the square of the phase shift at the Fermi energy due to the local potential. This result has been interpreted in terms of Anderson’s orthogonality catastrophe [@CATASTROPHE; @HOPFIELD], and rederived in the simpler language of Tomonaga bosons [@SCHOTTE]. As stressed by many authors, the infrared singularity arises from the excitation of a logarithmically divergent number of Tomonaga bosons. However, the average number of such emitted bosons increases only logarithmically with time, which might enable us to restrict ourselves to the subspace with no more than one excited boson, at least for times shorter than $\frac{1}{W} \exp{(W/2V^{2})}$, where W is the conduction electron bandwidth and V is the strength of the localized potential.
The Hamiltonian for the X-ray problem is $$H = {\cal E}_{0}bb^{\dagger}
+ \sum_{\bf k}\epsilon_{\bf k}
a^{\dagger}_{\bf k}a_{\bf k}
- E_{0}
+ bb^{\dagger}\sum_{{\bf k}{\bf k}^{\prime}}
V_{{\bf k}^{\prime},{\bf k}}
a^{\dagger}_{{\bf k}^{\prime}}a_{\bf k}\quad .$$ $a_{k}$ is a spinless Fermion annihilation operator and ${\cal E}_{0}$ is the energy of the deep hole. As in the Hamiltonian \[HAMILTONIAN\], $E_{0}$ is chosen so that the unperturbed Fermi sea with the deep level occupied is defined to have zero energy. Let $|N\rangle$ be the unperturbed Fermi sea and $|0\rangle \equiv b|N\rangle$ be the unperturbed sea with the deep hole. We shall be interested in the Green function $${\cal G}(t>0) = -i\langle N|e^{iHt}b^{\dagger}e^{-iHt}b|N\rangle
= -ie^{-i{\cal E}_{0}t}\langle 0|e^{-iHt}|0\rangle$$
We again set up the three-body approximation using the equation of motion formalism. Defining $|t\rangle \equiv \exp{(-iHt)}|0\rangle$, we make the approximation $$|t\rangle \cong \phi(t)|0\rangle +
\sum_{k>k_{F}}\sum_{k^{\prime}<k_{F}}
\Phi({\bf k},{\bf k}^{\prime},t)
a^{\dagger}_{\bf k}a_{{\bf k}^{\prime}}
|0\rangle$$ with the truncated equations of motion $$i{{\partial\phi}\over{\partial t}}(t) =
{\cal E}_{0}\phi(t) +
\sum_{k,k^{\prime}} V_{k^{\prime},k}
\Phi(k,k^{\prime},t)$$ and $$\begin{aligned}
i{{\partial \Phi}\over{\partial t}}(k,k^{\prime},t) =
&({\cal E}_{0} + \epsilon_{k} - \epsilon_{k^{\prime}})
\Phi(k,k^{\prime},t)
+ \sum_{k^{\prime\prime}}V_{k,k^{\prime\prime}}
\Phi(k^{\prime\prime},k^{\prime},t)\nonumber\\
&- \sum_{k^{\prime\prime}}V_{k^{\prime\prime},k^{\prime}}
\Phi(k,k^{\prime\prime},t)
+ V_{k,k^{\prime}}\phi(t) \quad .\end{aligned}$$ We proceed by expressing $\Phi({\bf k},{\bf k}^{\prime},t)$ in the basis of single-particle eigenstates in the presence of scattering $V_{k,k^{\prime}}$, $$\Phi(k,k^{\prime},t) = \sum_{\alpha ,\beta}\Phi_{\alpha ,\beta}(t)
\phi^{p}_{\alpha}(k)
\phi^{h}_{\beta}(k^{\prime})$$ in which case the second equation of motion may be written $$i\sum_{\alpha ,\beta}
\phi^{p}_{\alpha}(k)
\phi^{h}_{\beta}(k^{\prime})
{\dot\Phi}_{\alpha ,\beta}(t)
=\sum_{\alpha ,\beta}
(E^{p}_{\alpha} + E^{h}_{\beta} + {\cal E}_{0})
\phi^{p}_{\alpha}(k)
\phi^{h}_{\beta}(k^{\prime})
\Phi_{\alpha ,\beta}(t)
+
V_{k,k^{\prime}}
\phi(t)\quad .$$ Using the orthogonality of this basis we have $$\Phi_{\alpha ,\beta}(t)
=
-i\int_{0}^{t}dt^{\prime}
\exp{
\left[
-i(E^{p}_{\alpha} + E^{h}_{\beta} + {\cal E}_{0})(t-t^{\prime})
\right]
}
\phi(t^{\prime})
\sum_{k,k^{\prime}}
\phi^{p*}_{\alpha}(k)
\phi^{h*}_{\beta}(k^{\prime})
V_{k,k^{\prime}}$$
Combining this with the first equation of motion written in the same basis and defining $\phi(t)\equiv\exp{(-i{\cal E}_{0}t)}
{\tilde\phi}(t)$, $${\tilde\phi}(t) = 1 + \int_{0}^{t}dt^{\prime}
\int_{0}^{t^{\prime}}dt^{\prime\prime}
\sum_{k,k^{\prime}}\sum_{q,q^{\prime}}
V_{k^{\prime},k}
V_{q,q^{\prime}}
G_{k,q}^{p}(t^{\prime}-t^{\prime\prime})
G_{k^{\prime},q^{\prime}}^{h}(t^{\prime}-t^{\prime\prime})
{\tilde\phi}(t^{\prime\prime})
\label{PHITILDE}$$ where $$G_{k,q}^{p,h}(t)
=
\sum_{\alpha}
e^{-iE_{\alpha}^{p,h}(t)}
\phi^{p,h}_{\alpha}(k)
\phi^{p,h*}_{\alpha}(q)$$ are the particle and hole Green functions in the presence of the deep hole. Upon Fourier transforming Eq. (\[PHITILDE\]) and solving we find that ${\tilde\phi}(\omega) = i/\left(\omega - \Sigma(\omega)\right)$ where the self-energy $\Sigma$ is given by $$\Sigma(\omega) = i\sum_{k,k^{\prime}}\sum_{q,q^{\prime}}
V_{k^{\prime},k}
V_{q,q^{\prime}}
\int {{d\omega^{\prime}}\over{2\pi}}
G_{k,q}^{p}(\omega^{\prime})
G_{k^{\prime},q^{\prime}}^{h}(\omega - \omega^{\prime})$$ We take the continuum limit and treat the deep hole as a point scatterer thus dropping momentum dependence in V. The self-energy becomes $$\Sigma = iV^{2}
\int {{d\omega^{\prime}}\over{2\pi}}
G^{p}(\omega^{\prime})
G^{h}(\omega - \omega^{\prime})$$ where $G^{p,h}$ are now the on-site Green functions given by $$G^{p}(\omega) =
{
{
(1/2Dv_{F}){\rm ln}
\left[
(\omega - v_{F}k_{F} + i\delta)
/
(\omega - v_{F}D + i\delta)
\right]
}
\over
{
1 -
(V/2Dv_{F}){\rm ln}
\left[
(\omega - v_{F}k_{F} + i\delta)
/
(\omega - v_{F}D + i\delta)
\right]
}
}$$ and $$G^{h}(\omega) =
{
{
(1/2Dv_{F}){\rm ln}
\left[
(\omega + v_{F}k_{F} + i\delta)
/
(\omega - v_{F}D + i\delta)
\right]
}
\over
{
1 +
(V/2Dv_{F}){\rm ln}
\left[
(\omega + v_{F}k_{F} + i\delta)
/
(\omega - v_{F}D + i\delta)
\right]
}
}\quad .$$ D is the usual momentum cutoff.
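The convolution for $\Sigma(\omega)$ can be evaluated numerically from these closed forms. The sketch below is only illustrative: $v_{F}=1$, and the values of $k_{F}$, $D$, $V$ and the broadening $\delta$ are hypothetical choices, not taken from the paper. The one structural property asserted is the leading $V^{2}$ scaling of the self-energy:

```python
import cmath, math

# On-site particle/hole Green functions in the presence of the deep hole,
# with vF = 1 and hypothetical values of kF, D and the broadening delta.
def Gp(w, V, kF=0.1, D=1.0, delta=1e-2):
    L = cmath.log((w - kF + 1j * delta) / (w - D + 1j * delta)) / (2.0 * D)
    return L / (1.0 - V * L)

def Gh(w, V, kF=0.1, D=1.0, delta=1e-2):
    L = cmath.log((w + kF + 1j * delta) / (w - D + 1j * delta)) / (2.0 * D)
    return L / (1.0 + V * L)

def sigma(w, V, wmax=3.0, n=3000):
    # Sigma(w) = i V^2 Int dw'/(2 pi) Gp(w') Gh(w - w'), midpoint rule
    dw = 2.0 * wmax / n
    s = sum(Gp(-wmax + (j + 0.5) * dw, V) * Gh(w - (-wmax + (j + 0.5) * dw), V)
            for j in range(n))
    return 1j * V ** 2 * s * dw / (2.0 * math.pi)

# to leading order the self-energy scales as V^2
assert 3.0 < abs(sigma(0.0, 0.1)) / abs(sigma(0.0, 0.05)) < 5.0
```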
In Fig. \[IMG\] we have plotted the imaginary part of the particle and hole Green functions. Note the weak logarithmic singularities which result from single particle bound states. In Fig. \[SELF\] are plotted the resulting real and imaginary parts of the self-energy. The critical features in the latter plot are the sharpness of the cusp in ${\rm Im}\Sigma(\omega)$, and the steepness of the inflection point of ${\rm Re}\Sigma(\omega)$ near $\omega\sim 0$. As this numerical calculation shows, the weak logarithmic divergences of the single particle spectral density do not have a great effect on the self-energy.
In view of this, we proceed with an analytical calculation of the quasiparticle residue which replaces the spectral functions plotted in Fig. \[IMG\] by the constant $-(2Dv_{F})^{-1}$ multiplied by appropriate step functions. For small $\omega$, $$\begin{aligned}
{\rm Re}\Sigma(\omega) \cong
(V/2Dv_{F})^{2}
&\left[
v_{F}(D-k_{F}){\rm ln}
\left(
{
{D-k_{F}}
\over
{2D}
}
\right)
+
v_{F}(D+k_{F}){\rm ln}
\left(
{
{D+k_{F}}
\over
{2D}
}
\right)
\right.\nonumber\\
&\left.
+
\omega
{\rm ln}
\left(
{
{|\omega|2Dv_{F}}
\over
{v_{F}(D-k_{F})v_{F}(D+k_{F})}
}
\right)
\right]
\quad .\end{aligned}$$
The pole in the deep hole Green function occurs at $\omega_{p}={\rm Re}\Sigma(\omega_{p})$ which is $$\omega_{p} =
-
{
{V^{2}}
\over
{2Dv_{F}}
}
\left({\rm ln}2 + {\cal O}(k_{F}/D)^{2}\right)$$ in the $k_{F} << D$ limit. The residue of the deep level pole is $$Z_{p} \equiv
\left[
1 - \left.{{\partial}\over{\partial\omega}}
{\rm Re}\Sigma(\omega)\right|_{\omega_{p}}
\right]^{-1}
\cong
1 - (V/2Dv_{F})^{2}
{\rm ln}
\left(
{
{v_{F}^{2}D^{2}}
\over
{V^{2}{\rm ln}2}
}
\right)
\quad .$$ These expressions have been quantitatively checked with a numerical calculation.
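These closed forms are easy to evaluate directly; the sketch below (with illustrative values $v_{F} = D = 1$ and $V = 0.1$, our choices) exhibits the small negative level shift and a residue close to unity at weak coupling:

```python
import math

def pole_and_residue(V, vF=1.0, D=1.0):
    # omega_p = -(V^2 / 2 D vF) ln 2              (kF << D limit)
    wp = -(V ** 2 / (2.0 * D * vF)) * math.log(2.0)
    # Z_p ~ 1 - (V / 2 D vF)^2 ln(vF^2 D^2 / (V^2 ln 2))
    Zp = 1.0 - (V / (2.0 * D * vF)) ** 2 * math.log(
        vF ** 2 * D ** 2 / (V ** 2 * math.log(2.0)))
    return wp, Zp

wp, Zp = pole_and_residue(0.1)
# a long-lived level: small negative shift, residue close to unity
assert wp < 0.0 and 0.9 < Zp < 1.0
```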
This calculation shows that the simple three-body approach does not reproduce the power-law singularity of the deep level Green’s function at low energy. Instead of a power-law decay at long times, we obtain a long-lived level, with a residue close to unity at small coupling. This result is by itself not too surprising. It simply confirms the intuitive idea that the possibility of exciting many particle-hole pairs should be retained in order to describe the power-law singularities. In fact, this feature is present in the parquet diagram approach used by Abrikosov and by Nozières and coworkers [@ABRIKOSOV; @NOZIERES1]. The similarity between a parquet calculation and our simple three-body approach arises because in the former the three-particle vertex function involves only successive two-particle interactions. However, since the same parquet diagram corresponds to several time orderings for the interaction events, the actual number of particle-hole pairs is not fixed to unity. Unfortunately, our simplified approach does not seem to provide a way to bypass the cumbersome parquet algebra. The main complication encountered upon going from the Faddeev approach, with fixed particle number, to the parquet algebra in the full Hilbert space, is reflected by the need to keep frequencies as additional integration variables in the internal lines of the graphs. For the X-ray edge problem with a separable potential, momentum integrations are straightforward and we recover a one-dimensional problem (in the time direction). For more realistic systems, such as the two-dimensional Hubbard model, such a simplification is not so obvious. This is why our simpler method may still be useful there, since it may for instance be able to indicate the presence of spin and charge decoupling. We hope to address this issue in a forthcoming investigation.
The authors wish to thank J. Voit for discussions concerning exact Green functions of the Luttinger model.
Present Address: AECL Research, Chalk River Laboratories, Chalk River, Ontario, Canada, K0J 1J0. E-mail: hsut@cu26.crl.aecl.ca. E-mail: doucot@crtbt.polycnrs-gre.fr

P. W. Anderson, Phys. Rev. Lett. [**64**]{}, 1839 (1990); P. W. Anderson and Y. Ren, in [*High Temperature Superconductivity, Proceedings of the Los Alamos conference*]{}, edited by K. Bedell [*et al.*]{} (Addison-Wesley, Reading, MA, 1990); P. W. Anderson, Prog. Theor. Phys. Suppl. No. 107, 41 (1992).

F. D. M. Haldane, J. Phys. C [**14**]{}, 2585 (1981).

P. W. Anderson, Phys. Rev. Lett. [**67**]{}, 2092 (1991).

P. W. Anderson, Phys. Rev. Lett. [**65**]{}, 2306 (1991).

J. R. Englebrecht and M. Randeria, Phys. Rev. B [**45**]{}, 12419 (1992).

H. Fukuyama, Y. Hasegawa and O. Narikiyo, J. Phys. Soc. Jpn. [**60**]{}, 2013 (1991).

A. A. Abrikosov, Physics [**2**]{}, 5 (1965).

B. Roulet, J. Gavoret and P. Nozières, Phys. Rev. [**178**]{}, 1072 (1969).

K. Yosida, Phys. Rev. [**147**]{}, 223 (1966).

C. M. Varma and Y. Yafet, Phys. Rev. B [**13**]{}, 2950 (1976).

J. Igarashi, J. Phys. Soc. Jpn. [**52**]{}, 2827 (1983); J. Igarashi, J. Phys. Soc. Jpn. [**54**]{}, 260 (1985).

Y. Fang, A. E. Ruckenstein, E. Dagotto, and S. Schmitt-Rink, Phys. Rev. B [**40**]{}, 7406 (1989); A. E. Ruckenstein and S. Schmitt-Rink, Int. J. Mod. Phys. B [**3**]{}, 1809 (1989).

A. E. Ruckenstein and C. M. Varma, Physica C [**185-189**]{}, 134 (1991); C. M. Varma, in [*Strongly Interacting Fermions and High ${\rm T_c}$ Superconductivity, Proceedings of the Les Houches Summer School, August 1991*]{}, in press.

L. D. Faddeev, Soviet Physics JETP [**12**]{}, 1014 (1961).

See for example V. Y. Irkhin and M. I. Katsnelson, J. Phys. C: Solid State Phys. [**18**]{}, 4173 (1985).

J. Hubbard, Proc. R. Soc. A [**285**]{}, 542 (1965).

J. Voit, unpublished.

G. D. Mahan, Phys. Rev. [**163**]{}, 612 (1967).

P. Nozières, J. Gavoret, and B. Roulet, Phys. Rev. [**178**]{}, 1084 (1969); P. Nozières and C. T. De Dominicis, Phys. Rev. [**178**]{}, 1097 (1969).

P. W. Anderson, Phys. Rev. Lett. [**18**]{}, 1049 (1967).

J. J. Hopfield, Comments on Solid State Phys. [**2**]{}, 40 (1969).

K. D. Schotte and U. Schotte, Phys. Rev. [**182**]{}, 479 (1969).
---
abstract: 'If strange quark matter is the true ground state of matter, it must have lower energy than nuclear matter. Simultaneously, two-flavour quark matter must have higher energy than nuclear matter, for otherwise the latter would convert to the former. We show, using an effective chiral lagrangian, that the existence of a new lower energy ground state for two-flavour quark matter, the pion condensate, shrinks the window for strange quark matter to be the ground state of matter and sets new limits on the current strange quark mass.'
author:
- Vikram Soni
- Dipankar Bhattacharya
title: A new window on Strange Quark Matter as the ground state of strongly interacting matter
---
\[sec:intro\]Introduction
=========================
The hypothesis that the true ground state of baryonic matter may have a roughly equal fraction of u, d and s quarks, termed strange quark matter (SQM), is of recent origin [@ref1]. This is based on the fact that at some density, when the down quark chemical potential becomes larger than the strange quark mass, conversion to strange quarks can occur. This reduces the energy density by filling 3 (u, d and s) fermi seas instead of just 2 (u, d), and can yield a state of energy lower than nuclear matter. It is also possible to explain why such a state has escaped detection.
This involves at least two puzzles.
- Why does ordinary 2 flavour nuclear matter, the observed ground state of baryonic matter, not decay into strange quark matter?
The answer is that this decay is not like the radioactive decay of unstable nuclei. The nucleons cannot decay one by one, since it is not energetically favourable for a single nucleon to change into a $\Lambda$; only the entire body of nuclear matter can transmute into strange quark matter. This requires a high order of the flavour-changing weak interaction, which renders the cross section exponentially and unobservably small.
- Why was this matter not created in the evolution of the universe?
This is due to the fact that as the universe cooled past a temperature equivalent to the strange quark mass, strange quark matter was not the chosen state of high entropy. Since the u and d quarks have almost negligible masses at this scale, as the temperature dropped further the strange quarks were Boltzmann suppressed, leaving just the u and d quarks, which as we know converted largely into nucleons. For details we refer the reader to [@ref1; @ref2; @ref3].
It is really quite remarkable that the ground state cannot be realised easily! Only if we can produce high baryon density by compression can SQM be realised - for example, in the interior of neutron stars.
We now turn to the theoretical underpinning of the case for SQM being the potential ground state of matter.
We already know, empirically as well as theoretically, the ground state energy of nuclear matter at saturation: 930 MeV per baryon for Fe$^{56}$ nuclei. However, for calculating quark matter we take recourse to phenomenological models, which are useful pointers but foundationally inadequate; herein lies the uncertainty.
The usual ground state calculation for SQM treats the quarks as a free fermi gas of current quarks. The volume in which these quarks live comes at the cost of a constant energy density that provides ‘confinement.’ Equivalently, this is a constant negative pressure, and hence is often called the bag pressure term. This is a simple extension of the MIT bag philosophy, where the origin of the constant energy density is the fact that quarks are confined. The bag pressure sets the equilibrium or ground state energy density and the baryon density. It can be fixed from the nucleon sector. Further structure can be introduced by adding interactions between the quarks, e.g., one gluon exchange. Such a phenomenological model has been used by Witten [@ref1] and later by Farhi and Jaffe [@ref2] and others for SQM (see [@ref3] for a review).
Chiral Symmetry
===============
It is clear that such a model is phenomenological and does not, for example, address the issue of the spontaneous breaking of chiral symmetry – an essential feature of the strong interactions.
The quark matter in the bag is in a chirally restored state. This means that, as in the case of superconductivity, it costs energy to expel the chiral condensate which characterises the true vacuum state. Clearly, this will act just like the bag energy density/pressure. However, its value will be determined by the energy density of the chiral condensate. Such a term binds but does not confine. Confinement thus requires further input beyond just a bag pressure.
All results for the SQM state will depend on the model that is used to describe it and the ground state thereof. In a chiral model we find that there is a plurality of ground states. Of these, we find that one particular ground state has the property of chiral restoration at high density and parallels the MIT bag state used in most previous estimates, where the ground state is a fermi sea of current quarks (chirally restored quark matter or CRQM) with the bag pressure provided by the absence of the chiral condensate. This regime sets the connection between the parameters of the chiral model and the MIT bag model used in [@ref1; @ref2; @ref3].
Unlike for the MIT bag case where the bag pressure is a parameter, in our formulation, it is the chiral condensate energy and is given in terms of the parameters of low energy phenomenology - the pion decay constant, $f_\pi$, which is precisely known and the scalar coupling or the $\sigma$ mass, which is rather poorly ‘known’.
There are, however, other ground states for this model, in which the pattern of symmetry breaking is different at high density, for example, the pion condensed (PC) ground state in which the chiral symmetry is still spontaneously broken at high density. Such a state has lower ground state energy than the former and thus needs to be considered in the description of quark matter. As we show, it significantly influences the regime in which SQM can exist.
Effective chiral lagrangian
---------------------------
We consider this issue in the framework of an intermediate Chiral Lagrangian that has chiral SSB. Such an effective Lagrangian has quarks, gluons and a chiral multiplet of $[\vec\pi ,\sigma ]$ that flavor couples only to the quarks. For $SU(2)_L \times SU(2)_R$ chiral symmetry, we have
$$L = - \frac{1}{4} G^a_{\mu \nu} G^a_{\mu \nu}
- \sum {\overline{\psi}} \left( D + g_y(\sigma +
i\gamma_5 \vec \tau \cdot \vec \pi)\right) \psi
- \frac{1}{2} (\partial_\mu \sigma)^2 - \frac{1}{2} (\partial_\mu \vec \pi)^2
- \frac{\lambda^2}{4}(\sigma^2 + \vec \pi^2 - f_\pi^2)^2$$
The masses of the scalar ($\sigma$) and the fermions follow on the minimization of the potential above. This minimization yields $$\qquad <\sigma>^2 = f_\pi^2$$ where $f_\pi$ is the pion decay constant. It follows that $$\qquad m^2_{\sigma} = 2\lambda^2 f_\pi^2
\qquad m_q = m = g <\sigma> = g f_\pi$$ where $g \equiv g_y$ is the Yukawa coupling. This theory is an extension of QCD by additionally coupling the quarks to a chiral multiplet ($\vec\pi$ and $\sigma$) [@ref4; @ref5; @ref6].
This Lagrangian has produced some interesting physics at the mean field level [@ref6; @ref7]:
1. It provides a quark soliton model for the nucleon in which the nucleon is realized as a soliton with quarks being bound in a skyrmion configuration for the chiral field expectation values [@ref5; @ref6].
2. Such a model gives a natural explanation for the ‘proton spin puzzle’. This is because the quarks in the background fields are in a spin, isospin singlet state in which the quark spin operator averages to zero. On the collective quantization of this soliton to give states of good spin and isospin the quark spin operator acquires a small non zero contribution [@ref8].
3. Such a Lagrangian also seems to naturally produce the Gottfried sum rule [@ref9].
4. Such a nucleon can also yield from first principles (but with some drastic QCD evolution), structure functions for the nucleon which are close to the experimental ones [@ref10].
5. In a finite temperature field theory such an effective Lagrangian also yields screening masses that match with those of a finite temperature QCD simulation with dynamical quarks [@ref11].
6. This Lagrangian also gives a consistent equation of state for strongly interacting matter at all density [@ref12; @ref6].
We shall first briefly establish the parameters of the above effective Lagrangian and the specific connection with the MIT bag model of confinement used in previous treatments of SQM.
As already pointed out above, the nucleon in this model is realised as a soliton in a chiral symmetry broken background with quark bound states [@ref5; @ref6; @ref7]. This sets the value of the yukawa coupling, $g$, required to fit the nucleon mass in a Mean Field Theory (MFT) treatment to be, $g = 5.4$.
For the nucleon the dependence on the scalar coupling, $\lambda$, is marginal as long as it is not too small. Further, in MFT, the QCD coupling does not play a role; only if 1 gluon exchange is included does the QCD coupling enter.
There are no other parameters except $f_\pi$, the pion decay constant which is set to 93 MeV.
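As a sanity check on these inputs (our own sketch, not from the paper), the constituent quark mass follows from $m = g f_\pi$ with the quoted $g = 5.4$ and $f_\pi = 93$ MeV; the $\sigma$ mass below uses a purely illustrative value of the scalar coupling $\lambda$, which, as noted above, is poorly known:

```python
import math

f_pi = 93.0   # MeV, pion decay constant
g = 5.4       # Yukawa coupling fitted to the nucleon mass in MFT

# constituent quark mass m = g <sigma> = g f_pi
m_q = g * f_pi
assert abs(m_q - 502.2) < 1e-6     # ~500 MeV

# sigma mass for an illustrative (hypothetical) scalar coupling lambda,
# using m_sigma^2 = 2 lambda^2 f_pi^2
lam = 5.0
m_sigma = math.sqrt(2.0) * lam * f_pi
assert 600.0 < m_sigma < 700.0
```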
Preliminaries for SQM
=====================
The connection to MIT bag description of quark matter is set as follows. The last term in the above lagrangian, the potential functional, $$\frac{\lambda^2}{4} (\sigma^2 + \vec \pi^2 - (f_\pi)^2)^2$$ is minimized by the VEV’s $$(<\sigma> = f_\pi, <\vec \pi>= 0 )$$ and is equal to zero at the minimum.
In MFT at high density (as we shall see), when chiral symmetry is restored, $(<\sigma>=0$, $<\vec\pi>=0)$, this term reduces to a constant energy density term equal to $$\frac{\lambda^2}{4} (f_\pi)^4$$ Besides, due to chiral symmetry restoration the constituent mass of the quarks also vanishes, leaving free massless quarks. This reduced lagrangian for high density is no different from MIT bag quark matter with $$B =\frac{\lambda^2}{4} (f_\pi)^4$$ This completes the identification of the bag pressure term in this model. It shows that bag pressure is automatically generated by chiral restoration and is controlled simply by the scalar coupling or equivalently the sigma mass.
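Since $m_\sigma^2 = 2\lambda^2 f_\pi^2$, the induced bag constant can equally be written $B = m_\sigma^2 f_\pi^2/8$. The sketch below scans a few illustrative $\sigma$ masses (our choices, not values fixed by the paper) and shows that $B^{1/4}$ falls in the range familiar from MIT-bag phenomenology:

```python
# Bag constant generated by chiral restoration: B = (lambda^2/4) f_pi^4,
# i.e. B = m_sigma^2 f_pi^2 / 8 once m_sigma^2 = 2 lambda^2 f_pi^2 is used.
f_pi = 93.0                              # MeV
for m_sigma in (600.0, 700.0, 800.0):    # illustrative sigma masses
    B = m_sigma ** 2 * f_pi ** 2 / 8.0   # MeV^4
    # B^(1/4) lands near typical MIT-bag fits
    assert 140.0 < B ** 0.25 < 170.0
```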
We first briefly describe the logical basis for the investigation of the QM ground states vis a vis the usual nuclear matter ground state at saturation density. Here we follow Farhi and Jaffe [@ref2].
1. We fix coordinates by noting that SQM can be the true ground state only if its energy per baryon, $E_B$, is lower than the lowest value found in nuclei, 930 MeV for iron, as done by Farhi and Jaffe [@ref2].
2. We calculate the 2 flavour quark matter ground states and fix a lower bound on the only free parameter in our Lagrangian, the scalar coupling, or equivalently a lower bound on the chiral condensate pressure (or bag pressure). This follows from the condition that 2 flavour quark matter must have a higher $E_B$ than nuclear matter; otherwise nuclear matter would be unstable to conversion into 2 flavour QM. As pointed out in [@ref2], this condition is that bulk 2 flavour quark matter must have $E_B >934$ MeV.
3. We calculate the SQM with the parameters established in 2 above and see if for SQM, $E_B$ is smaller than that given in 1, above. If this is the case, and as $E_B$ increases monotonically with the scalar coupling (or the chiral condensate pressure), we get an upper bound on the chiral condensate pressure (or bag pressure), when $E_B$ crosses beyond 930 MeV. SQM can then exist, as the true ground state, in this interval between the two bounds.
Two Flavour Quark Matter
========================
We shall now consider in Mean Field Theory the phases of 2 flavour quark matter in the $ SU(2)_L \times SU(2)_R $ chiral model above. We shall then extend the model to 3 flavours (u, d, s) to describe SQM.
The space uniform phase
-----------------------
We now turn to the phase in which the pattern of symmetry breaking is such that the expectation values of the meson fields are uniform. At zero density they are just the VEVs. $$\begin{aligned}
<\sigma> &=& f_{\pi} \\
<\vec \pi> &=& 0 \end{aligned}$$ For arbitrary density we allow the expectation value to change in magnitude, as it becomes a variational parameter that is determined by energy minimization at each density. $$\begin{aligned}
<\sigma> &=& F \\
<\vec\pi> &=& 0\end{aligned}$$ Such a pattern of symmetry breaking simply provides a constituent mass to the quark $ m = g<\sigma> = gF $ and the quarks are in plane wave states as opposed to the bound states in the nucleonic phase [@ref12].
The mean field description of this phase is simple.
The energy density is $$\epsilon_{\rho} = \Sigma_{u,d} \frac{1}{(2\pi)^3} \gamma
\int d^3k \sqrt{m^2 + k^2} + \frac{\lambda^2}{4} (<\sigma>^2
-f_{\pi}^2)^2$$ where $m = g<\sigma> = gF$ and the degeneracy $\gamma = 6$. We shall use $g = 5.4$, as determined by fixing the nucleon mass in this model at 938 MeV [@ref5; @ref6]. The integrals above run up to the ‘u’ and ‘d’ fermi momenta.
For neutron matter (without $\beta$ equilibrium) we have the relations $$\begin{aligned}
k^f_u &=& (\pi^2 n_u)^{\frac{1}{3}} = (\pi^2\rho_B)^{\frac{1}{3}} \\
k^f_d &=& (2 \pi^2\rho_B)^{\frac{1}{3}} \\
E_B &=& \frac{\epsilon_{\rho}}{\rho_B} \end{aligned}$$ where $\rho_B$ is the baryon density. At any density the ground state follows from minimising the free energy w.r.t. $<\sigma> = F$.
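The minimisation over the order parameter described above is easy to reproduce numerically; below is a minimal sketch (natural units, energies in MeV) with $g=5.4$ and $f_\pi=93$ MeV as in the text, and an illustrative value of the scalar coupling:

```python
import numpy as np

# Uniform 2-flavour phase: quarks of constituent mass m = g*F fill Fermi seas
# with k_u = (pi^2 rho_B)^(1/3), k_d = (2 pi^2 rho_B)^(1/3) (degeneracy 6),
# plus the chiral potential (lam^2/4)(F^2 - f_pi^2)^2. lam is illustrative.
g, f_pi, lam = 5.4, 93.0, 5.0

def eps_flavour(kf, m, n=2000):
    # (6/(2 pi)^3) * 4 pi * int_0^kf k^2 sqrt(m^2 + k^2) dk, midpoint rule
    k = (np.arange(n) + 0.5) * (kf / n)
    return (3.0 / np.pi**2) * np.sum(k**2 * np.sqrt(m**2 + k**2)) * (kf / n)

def E_B(rho_B, F):
    # energy per baryon at baryon density rho_B and order parameter F
    m = g * F
    ku = (np.pi**2 * rho_B) ** (1.0 / 3.0)
    kd = (2.0 * np.pi**2 * rho_B) ** (1.0 / 3.0)
    eps = (eps_flavour(ku, m) + eps_flavour(kd, m)
           + lam**2 / 4.0 * (F**2 - f_pi**2) ** 2)
    return eps / rho_B

rho_nuc = 0.16 * 197.327**3          # nuclear saturation density in MeV^3
Fs = np.linspace(0.0, f_pi, 200)
for rho in (0.5 * rho_nuc, 2.0 * rho_nuc):
    F_min = min(Fs, key=lambda F: E_B(rho, F))
    print(f"rho = {rho/rho_nuc:.1f} rho_nuc: F = {F_min:.1f} MeV, "
          f"E_B = {E_B(rho, F_min):.0f} MeV")
```

In the zero density limit this reproduces $E_B \to 3gf_\pi$, as quoted below for the start of the uniform phase.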
As shown in the figures of [@ref12; @ref13], this phase begins at zero $\rho_B$ with $E_B = 3gf_\pi$, which then falls until chiral restoration occurs at some $\rho_X$. As the density is increased further, $E_B$ continues to drop to a minimum and then starts rising, corresponding to a massless quark fermi gas.
In the chirally restored phase the EOS is very simple and parallels the MIT bag description of [@ref3] $$\begin{aligned}
\rho_B &>& \rho_X\\
\epsilon_{\rho}
&=& \frac{3}{4\pi^2} (\pi^2\rho_B)^\frac{4}{3}\alpha
+ \frac{\lambda^2}{4} f_{\pi}^4 \end{aligned}$$ The last term above is just the bag energy density, and $$\alpha = (1 + 2^{\frac{4}{3}})$$ This phase has two features: chiral restoration at $\rho_X$ followed, with increasing density, by an absolute minimum in $E_B$ at $\rho_C >\rho_X$.
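The position of this minimum follows in closed form from the restored-phase EOS: writing $\epsilon_\rho = A\rho_B^{4/3} + B$ with $A = \frac{3}{4\pi^2}\pi^{8/3}(1+2^{4/3})$, the condition $dE_B/d\rho_B = 0$ gives $\rho_C = (3B/A)^{3/4}$, which is also the zero-pressure (self-bound) point. A short numerical check, with an illustrative scalar coupling:

```python
import numpy as np

# Chirally restored 2-flavour EOS: eps(rho) = A * rho^(4/3) + B, with
# A = (3/(4 pi^2)) * pi^(8/3) * (1 + 2^(4/3)).  E_B = eps/rho is minimal
# (and the pressure vanishes) at rho_C = (3B/A)^(3/4).
f_pi = 93.0
lam = 5.0                                  # illustrative scalar coupling
B = lam**2 / 4.0 * f_pi**4                 # bag pressure (MeV^4)
A = 3.0 / (4.0 * np.pi**2) * np.pi**(8.0 / 3.0) * (1.0 + 2.0**(4.0 / 3.0))

rho_C = (3.0 * B / A) ** 0.75              # self-bound density (MeV^3)
E_B_min = A * rho_C ** (1.0 / 3.0) + B / rho_C

rho_nuc = 0.16 * 197.327**3                # nuclear saturation density (MeV^3)
print(f"rho_C = {rho_C / rho_nuc:.2f} rho_nuc, minimum E_B = {E_B_min:.0f} MeV")
```

At the minimum $E_B = 4B/\rho_C$, so the depth of the minimum is controlled entirely by the bag pressure.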
Since $E_B$ decreases monotonically with density till the chiral restoration density, $\rho_X$, and then continues to decrease till the minimum is reached at $\rho_C$, this implies that the density regime till $\rho_C$ is unstable and has negative pressure. This has been recently conjectured [@ref14] as the density at which self-bound droplets of quarks form, which may be related to nucleons. Further, since at this density chiral symmetry is restored, these ‘nucleons’ will be like those in the MIT bag model in which chiral symmetry is unbroken inside the nucleons.
We would like to clarify this issue.
From the comparison of this phase with the nucleon and nucleonic ‘phase’ arising from the same model (see [@ref6; @ref12]), it is clear that the nucleonic phase is always of lower energy than the uniform phase above, up to a density of roughly 3 times the nuclear density, which is above the chiral restoration density in the uniform phase. Further, the minimum in the nucleonic phase occurs very much below the minimum in the uniform phase.
The chiral restoration density in the uniform phase is thus not of any physical interest as matter will always be in the lower energy nucleonic phase and so the identification of nucleon as a quark droplet at the density at which the minimum occurs in the uniform phase is not viable. Clearly, the nucleon is a quark soliton of mass M =938 MeV and falls at the zero density limit in the nucleonic phase.
The Pion Condensed phase
------------------------
Here we shall consider another realization of the expectation values $<\sigma>$ and $<\vec\pi>$, corresponding to pion condensation. This phenomenon was first considered in the context of nuclear matter.
Such a phenomenon also occurs in our quark based chiral $\sigma$ model and was first considered at the Mean Field level by Kutschera and Broniowski in an important paper [@ref13]. Working in the chiral limit they found that the pion condensed state has lower energy than the uniform symmetry breaking state (phase 2), which we have just considered, at all densities. This is expected, as the ansatz for the PC phase is more general than that for phase 2.
The expectation values now carry a particular space dependence $$\begin{aligned}
<\sigma> &=& F \cos{(\vec q. \vec r)} \\
<\pi_3> & =& F \sin{(\vec q. \vec r)} \\
<\pi_1> &=& 0 \\
<\pi_2> &=& 0\end{aligned}$$ Note, when $|\vec q |$ goes to zero, we recover the uniform phase 2.
The Dirac Equation in this background is solved in [@ref13] and reduces to $$H \chi(k) = (\vec \alpha . \vec k - \frac{1}{2} \vec q . \vec \alpha \gamma_5 \tau_3 + \beta m) \chi (k) = E(k)\chi(k)$$ where $m=gF $
The extra term has been recast in terms of the relativistic spin operator, $\vec \alpha \gamma_5$ . It is evident that if spin is parallel to $ \vec q$ and $ \tau_3 = +1$ (up quark) this term is negative and if $\tau_3 = -1 $ (down quark) it is positive. For spin antiparallel to $\vec q$ the signs for $\tau_3 = +1$ and $-1$ are reversed.
The spectrum for the hamiltonian is the quasi particle spectrum and can be found to be $$\begin{aligned}
E_{(-)}(k) &=& \sqrt{ m^2 + k^2 +\frac{1}{4}q^2 -\sqrt{m^2q^2 +
(\vec q.\vec k )^2}} \\
E_{(+)}(k) &=& \sqrt{m^2 + k^2 + \frac{1}{4}q^2 + \sqrt{m^2q^2 +
(\vec q.\vec k)^2}} \end{aligned}$$ The lower energy eigenvalue $ E_{(-)}$ has spin along $ \vec q$ for $ \tau_3 =1$, or has spin opposite to $\vec q$ for $\tau_3 =-1$. The higher energy eigenvalue $E_{(+)}$ has spin along $\vec q$ and $\tau_3 = -1$, or has spin opposite to $\vec q $ and $ \tau_3 = +1$.
In this background the fermi sea is no longer degenerate in spin but gets polarized into the states above. The quasi particles are, however, states of isospin. We describe matter at a given fermi energy of u and d quarks set by their respective densities and by charge neutrality (corresponding to, say, neutron like matter).
First we fill up all the lower energy, $E_{(-)}(k)$, states; then there is a gap, and we start filling up the $E_{(+)}(k)$ states till we get to $E_{F}^i$, the fermi energy corresponding to a given density for each flavour.
$$\begin{aligned}
\rho_{i} &=&\frac{1}{(2\pi)^3}\gamma (\int d^3k\Theta( E_F^i- E_{(-)}(k)) +
\int d^3k \Theta( E_F^i- E_{(+)}(k))) \\
\rho_{B} &=& (\rho_u + \rho_d)/3 \\
\epsilon_i &=& \frac{1}{(2\pi)^3}\gamma (\int d^3k E_{(-)}(k)
\Theta( E_F^i- E_{(-)}(k)) +
\int d^3k E_{(+)}(k) \Theta( E_F^i- E_{(+)}(k))) \\
\epsilon_\rho &=& \epsilon_u + \epsilon_d + \frac{1}{2} F^2 q^2 + \frac{\lambda^2}{4}( F^2 - f_{\pi}^2)^2\end{aligned}$$
We can now write down the equation of state as in [@ref13]. It is found that the PC state is always lower in energy than the uniform phase 2. For the explicit numbers and figures we refer the reader to [@ref13].
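The density and energy integrals above are two-dimensional in practice, since the spectrum depends only on $|\vec k|$ and the angle between $\vec k$ and $\vec q$; a numerical sketch of the branch filling, with purely illustrative values of $(F, |\vec q|, E_F)$ and $g=5.4$:

```python
import numpy as np

# Fill the two quasi-particle branches E_-(k), E_+(k) of the PC background
# up to a common Fermi energy E_F, integrating over (|k|, cos(theta)).
# gamma = 6 (colour x spin); the parameter values below are illustrative.
g, gamma = 5.4, 6.0

def branches(k, c, m, q):
    # k = |k|, c = cos of the angle between k and q; E_- is real because
    # m^2 + k^2 + q^2/4 >= q*sqrt(m^2 + k^2) for all k.
    common = m**2 + k**2 + 0.25 * q**2
    root = np.sqrt(m**2 * q**2 + (q * k * c) ** 2)
    return np.sqrt(common - root), np.sqrt(common + root)

def density_and_energy(E_F, F, q, kmax=1500.0, nk=400, nc=200):
    m = g * F
    k = (np.arange(nk) + 0.5) * (kmax / nk)
    c = (np.arange(nc) + 0.5) * (2.0 / nc) - 1.0
    K, C = np.meshgrid(k, c, indexing="ij")
    Em, Ep = branches(K, C, m, q)
    w = 2.0 * np.pi * K**2 * (kmax / nk) * (2.0 / nc)   # d^3k volume element
    pref = gamma / (2.0 * np.pi) ** 3
    n = pref * (np.sum(w * (Em < E_F)) + np.sum(w * (Ep < E_F)))
    eps = pref * np.sum(w * (Em * (Em < E_F) + Ep * (Ep < E_F)))
    return n, eps

n, eps = density_and_energy(E_F=350.0, F=60.0, q=300.0)
print(f"n = {n:.3e} MeV^3, eps/n = {eps / n:.1f} MeV per quark")
```

In the limit $q \to 0$, $F \to 0$ the two branches collapse to the free massless spectrum and the usual free fermi gas density is recovered, which serves as a check of the integration.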
We briefly remark on some features of this phase:
1. The 2 flavour PC state is quite different from the uniform phase: unlike the 2 flavour CRQM states considered in [@ref2], it cannot be recovered from 3 flavour CRQM by taking the strange quark mass to infinity. As we shall see in the next sections, this gives a new feature - a maximum strange current quark mass for SQM to be the true ground state.
2. The reason that the PC phase has energy lower than the uniform $<\sigma>$ condensate is perhaps best understood in the language of quarks and anti quarks. To make a condensate, a quark and an antiquark must form a bound state and condense. For a uniform $<\sigma>$ condensate the $q$ and $ \bar{q}$ must have equal and opposite momenta. Therefore, as the quark density goes up, the system can only couple a quark with $k > k_f$ to a $ \bar{q}$ with the opposite momentum. This costs much energy, so the condensate can only occur if $k_f$ is small, at low density. On the other hand, the pion condensed state is not uniform, so at finite density, if we take a quark with $k = k_f$, the $\bar{q}$ can have momentum $k = |\vec k_f -\vec q |$, which is a much smaller energy cost.
3. Since the pion condensate is a chirally broken phase, the chiral restoration shifts from very low density in the uniform phase to very high density: $\sim 10 \rho_{nuc}$. This is a signature of this phase.
4. Since this phase is always lower in energy than the uniform phase we go directly from the nucleonic phase to the PC phase completely bypassing the uniform phase, and thus all the interesting features and conjectures for the uniform phase are never realized.
5. Another feature of this $\vec\pi$ condensate is that since we have a spin isospin polarization we can get a net magnetic moment in the ground state.
The three flavour state
========================
The extension of the above to 3 flavours or SU(3) chiral symmetry needs some clarification.
The generalized Dirac Equation for the SU(3) case is considerably more complicated and involves a singlet $ \xi_0$ and an SU(3) octet $ \xi_a$ of scalar fields and a singlet $ \phi_0$ and an SU(3) octet $ \phi_a$ of pseudoscalar fields, that interact with the quarks as shown in [@ref15].
$$H \psi(k) = (-i\vec \alpha .\vec \partial -
g \beta(\sqrt{2/3}( \xi_0 +i\phi_0 \gamma_5) +
\lambda^a( \xi_a +i\phi_a \gamma_5))) \psi = E\psi$$
In the chiral limit, the spontaneous symmetry breaking pattern is not unique. We choose the pattern in which the $SU(3)_L \times SU(3)_R$ chiral symmetry breaks down to a vector SU(3). For the uniform case, we have $$\begin{aligned}
<\xi_0> &=& \sqrt{3/2} f_{\pi}\\
<\xi_a> &=& 0 \\
<\phi_0> &=& 0 \\
<\phi_a> &=& 0 \end{aligned}$$ This gives a constituent mass $m = gf_\pi$ for all (u, d and s) quarks. The explicit symmetry-breaking strange quark mass term, with mass $ m_s$, is then added to $H$. The total strange quark mass, $M_s$, is then the sum of the constituent and explicit masses, $M_s = gf_\pi + m_s$.
The three flavour Pion Condensed phase
--------------------------------------
For describing strange quark matter we use the 3 flavour Pion Condensed state. This is a more versatile state than the one used in [@ref2] (3 flavour CRQM), the latter being a subset of the former.
Next, we formulate the symmetry breaking in the presence of the pion condensate. This is given as follows: $$\begin{aligned}
<\xi_0> &=& \sqrt{3/2} F(1 + 2\cos{(\vec q.\vec r)})/3 \\
<\xi_8> &=&- \sqrt{3} F(1 - \cos{(\vec q.\vec r)})/3 \\
<\phi_0> &=& 0 \\
<\phi_3> &=& F (\sin{(\vec q.\vec r)}) \end{aligned}$$ and all other fields have expectation value zero.
This gives exactly the PC hamiltonian equation for the u, d sector and yields the simple mass relation above for the strange quark, $ M_s = gF + m_s $; when $q=0$ and $m_s=0$ we recover the chiral limit above.
We may now simply add the 2 flavour PC results for the energy density and density derived above to the strange quark energy density which arises from the single particle relation, $$E_s = \sqrt{ M_s^2 + k^2}$$
The strange quark energy density is given by Baym [@ref16], eq. (8.20): $$\epsilon_s = \frac{3}{8\pi^2}M_s^4 (x_s n_s( 2 x_s^2 + 1) -
\ln(x_s + n_s))$$ where $x_s = k_s^f/M_s$ and $n_s = \sqrt{ 1 + x_s^2}$; $k_s^f$ is the fermi momentum for the strange quarks.
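Since $\gamma = 6$, this closed form is equivalent to $(3/\pi^2)\int_0^{k_s^f}k^2\sqrt{M_s^2+k^2}\,dk$, which gives a direct numerical cross-check (the values of $M_s$ and $k_s^f$ below are illustrative):

```python
import numpy as np

# Cross-check of the closed-form strange-quark energy density against a
# direct numerical integration of (3/pi^2) * int_0^{kf} k^2 sqrt(M^2+k^2) dk
# (degeneracy gamma = 6). Values of M_s and k_f are illustrative.
def eps_s_closed(kf, M):
    x = kf / M
    n = np.sqrt(1.0 + x**2)
    return 3.0 / (8.0 * np.pi**2) * M**4 * (x * n * (2.0 * x**2 + 1.0)
                                            - np.log(x + n))

def eps_s_numeric(kf, M, nk=20000):
    # midpoint rule on the free-gas integral
    k = (np.arange(nk) + 0.5) * (kf / nk)
    return (3.0 / np.pi**2) * np.sum(k**2 * np.sqrt(M**2 + k**2)) * (kf / nk)

M_s, k_f = 550.0, 300.0   # MeV, illustrative
print(eps_s_closed(k_f, M_s), eps_s_numeric(k_f, M_s))
```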
The total energy density of the quarks for the 3 flavour PC is given by $$\epsilon_\rho = \epsilon_u + \epsilon_d + \epsilon_s +
\frac{1}{2} F^2 q^2 +
\frac{\lambda_1^2}{4}(F^2 - f_\pi^2)^2$$ From the effective potential given in [@ref15] for the SU(3) case, there is an extra factor of 3/2 that multiplies the last term. This can be absorbed, as we have done, by a redefinition: $ \lambda_1 = A \lambda $, where $A = \sqrt{3/2}$.
$\beta$ equilibrium in the PC phase
-----------------------------------
We have the following general chemical potential relations for quark matter $$\begin{aligned}
E^u_F &=& \mu_ u \\
E^d_F &=& \mu_ d = \mu_ s \\
\mu_ e &=& \mu_ d - \mu_ u \\
n_e &=& \frac{\mu_e^3}{3\pi^2} \end{aligned}$$ The charge neutrality condition, below, further reduces the number of independent chemical potentials to one. $$\frac{2 n_u (\mu_u, q , F) - n_d (\mu_d, q ,F)
- n_s (\mu_s)}{3} -n_e = 0$$ The baryon density is $$\begin{aligned}
\rho_{B} &=& \frac{n_u (\mu_u, q , F) + n_d (\mu_d, q ,F)
+ n_s (\mu_s)}{3} \\
n_s &=& (k_s^f)^3/(\pi^2)\end{aligned}$$ For matter in $\beta$ equilibrium we need to add the electron energy density to the quark energy density above $$\epsilon_e = (1/4 \pi^2) \mu_e^4$$ The total energy density is $$\epsilon = \epsilon_\rho + \epsilon_e$$ The energy per baryon, $E_B = \epsilon / \rho_B$, then follows.
For the pion condensed state, the ground state energy and the baryon density depend on the variational parameters: the order parameter or expectation value, $F = \sqrt{ <\vec \pi>^2 + < \sigma>^2 }$, and the condensate momentum, $|\vec q|$. To define the free energy at a fixed baryon density then requires some care. However, we are only interested in the absolute minimum of the energy per baryon, $E_B$, over all densities. We can then simply minimize $E_B$ with respect to the variational parameters, $F$ and $|\vec q|$, and the one independent chemical potential to get the absolute minimum. Note that this is for a given value of $$B = \lambda_1^2 ( f_\pi)^4 /4$$
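A minimal illustration of this chemical-potential bookkeeping, in the simplified chirally restored limit ($F \to 0$, so u and d are massless) with free Fermi seas; the values of $M_s$, $B$ and $\mu_u$ are illustrative, and the charge neutrality condition is solved by bisection:

```python
import numpy as np

# Beta-equilibrium sketch in the chirally restored limit: free massless u, d
# quarks, a strange quark of mass M_s, and electrons.  Chemical potentials:
# mu_d = mu_s, mu_e = mu_d - mu_u; charge neutrality fixes mu_e at each mu_u.
# All values in MeV; M_s, B and mu_u are illustrative.
M_s = 150.0
B = 150.0**4                     # bag pressure, (150 MeV)^4

def n_q(mu, m=0.0):              # quark number density, gamma = 6
    return max(mu**2 - m**2, 0.0) ** 1.5 / np.pi**2

def charge(mu_u, mu_e):
    mu_d = mu_u + mu_e
    return ((2 * n_q(mu_u) - n_q(mu_d) - n_q(mu_d, M_s)) / 3.0
            - mu_e**3 / (3 * np.pi**2))

def solve_mu_e(mu_u, lo=1e-6, hi=200.0):
    # bisection on the charge neutrality condition
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if charge(mu_u, lo) * charge(mu_u, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def eps_q(mu, m=0.0):            # free-gas energy density, gamma = 6
    kf = max(mu**2 - m**2, 0.0) ** 0.5
    k = (np.arange(4000) + 0.5) * (kf / 4000)
    return (3 / np.pi**2) * np.sum(k**2 * np.sqrt(m**2 + k**2)) * (kf / 4000)

mu_u = 300.0
mu_e = solve_mu_e(mu_u)
mu_d = mu_u + mu_e
rho_B = (n_q(mu_u) + n_q(mu_d) + n_q(mu_d, M_s)) / 3.0
eps = (eps_q(mu_u) + eps_q(mu_d) + eps_q(mu_d, M_s)
       + mu_e**4 / (4 * np.pi**2) + B)
print(f"mu_e = {mu_e:.1f} MeV, E_B = {eps / rho_B:.0f} MeV")
```

The full PC calculation replaces the free dispersions with the two quasi-particle branches and adds the minimization over $F$ and $|\vec q|$, but the equilibrium and neutrality structure is the same.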
Results for the mean field theory 3 flavour PC state (PCSQM)
============================================================
PCSQM without one gluon exchange
--------------------------------
The new window for SQM is established thus:
We maintain the minimum permissible limit on $E_B$ for 2 flavour quark matter to be 934 MeV. Since the PC is a lower energy state than the chirally restored QM (CRQM) considered in [@ref2], we find that the lower bound on $B^{1/4}$ goes up, narrowing the window for SQM. The results are given in table \[table1\]. This lower bound for 2 flavour PC is found to be $$B^{1/4}_< = 148 \mbox{\rm ~MeV}$$ whereas, for the case of 2 Flavour CRQM considered by Farhi and Jaffe, it was $$B^{1/4}_< = 145 \mbox{\rm ~MeV}$$
We then calculate the minimum $E_B$ for 3 flavour QM. For this we use our generalised PCSQM as the state. It is important to note that the rather particular 3 flavour CRQM lies within its variational reach. This provides us with the upper bound on the bag pressure $B_>$.
------- ----------- --------------- ----------------------- --------------
$m_s$   $B^{1/4}$   $\mu_{\rm u}$   $n_{\rm s}/n_{\rm u}$   $< \sigma >$
(MeV)   (MeV)       (MeV)                                   (MeV)
0       162.7       309             0.96                    9.55
50      161.4       307             0.96                    1.45
100     158.9       304             0.92                    0.01
150     154.9       298             0.80                    0.01
200     150.6       290             0.63                    0.01
250     147.7       260             0.00                    26.31
------- ----------- --------------- ----------------------- --------------
: \[table1\] Ground states with energy of 930 MeV/nucleon for 3-flavour quark matter with the 2-flavour Pion-condensed state. $m_s$ is the assumed strange quark mass, $B$ stands for bag pressure, $\mu_{\rm u}$ is the u-quark chemical potential, $n_{\rm s}/n_{\rm u}$ is the ratio of the density of strange quarks to that of u-quarks, and $< \sigma >$ is the expectation value of the $\sigma$ field. These results are without 1 gluon exchange.
The maximum upper bound on $B$ naturally occurs for the $m_s = 0$ case and is found to be almost the same as in Ref. 2. $$B^{1/4}_> = 162.5 \mbox{\rm ~MeV}$$ This is because the minimum occurs in the 3 flavour CRQM state, as given in [@ref2], with $F = 0$.
Some new features compared to Ref. 2:
Whereas in [@ref2], the 2 Flavour CRQM threshold is virtually the limit of $m_s \rightarrow \infty$ for 3 flavour CRQM, in our case it is not, as our 2 flavour PC ground state is of lower energy and different. In this case the 3 flavour CRQM becomes of higher energy than the 2 flavour PC ground state at a finite $m_s$.
We thus find the limit $$m_s < 250 \mbox{\rm ~MeV}$$ for SQM to exist as the ground state, simply from the constraint on 2 flavour QM.
We also find that for this limiting $m_s$, the absolute minimum of $E_B$ is the 2 flavour PC state with zero strange quark density. But, for masses somewhat below the limiting mass (with the condition that SQM be the actual ground state), the absolute minimum occurs in the 3 flavour CRQM state as given in [@ref2], with $F=0$. A summary of the results appears in Fig. \[figure1\].
Results for the three flavour Pion Condensed phase (with 1 gluon exchange interaction and $\alpha_{QCD} = 0.6$)
---------------------------------------------------------------------------------------------------------------
We next consider the case in which 1 gluon exchange interaction is included, following Farhi and Jaffe. This is with a view to estimating the effects of including such ‘perturbative’ interactions. No attempt will be made at rigour. Our approximate scheme for this case is best regarded as an estimate.
One reason it is difficult to do an analytic calculation of the interaction energy for the PC is that the quark propagators in the presence of the pion condensate [@ref13] are far more complicated than in the case of free fermi sea quarks.
We first note that in the limit of the condensate, $F \rightarrow 0$, all the PC results go smoothly to the free fermi sea results. We find that the condensate expectation values $F$ in the regime of interest to us are such that the value of $F$ at the minimum makes $m = gF$ fall below the relevant quark chemical potentials. So, as a first approximation, we use the free fermi sea results for the given chemical potential and mass.
The contribution from the 1 gluon exchange interaction to the free energy at given density or the Thermodynamic Potential (TP), including renormalization group corrections, (a), is given in [@ref2]. The expression for the 1 gluon exchange or interaction energy, for free fermi seas of quarks, (b), is given in Baym [@ref16] (eq. 8.20). Though we expect the interaction contribution (in the absence of renormalization group corrections) to both (a) and (b) to be the same, we find that the two expressions are different. Results for these two cases are summarized in table \[table2\].
------- ----------- --------------- ----------------------- -------------- ----------- --------------- ----------------------- --------------
$m_s$   $B^{1/4}$   $\mu_{\rm u}$   $n_{\rm s}/n_{\rm u}$   $< \sigma >$   $B^{1/4}$   $\mu_{\rm u}$   $n_{\rm s}/n_{\rm u}$   $< \sigma >$
(MeV)   (MeV)       (MeV)                                   (MeV)          (MeV)       (MeV)                                   (MeV)
0       150.3       274             0.83                    22.08          166.1       302             0.69                    36.53
50      147.7       274             0.96                    0.33           161.6       292             0.54                    34.45
100     144.6       280             0.89                    0.01           157.2       282             0.28                    35.54
150     140.8       244             0.00                    32.91          155.8       272             0.00                    40.67
200     140.8       244             0.00                    32.91          155.8       272             0.00                    40.68
250     140.8       244             0.00                    32.91          155.8       272             0.00                    40.67
------- ----------- --------------- ----------------------- -------------- ----------- --------------- ----------------------- --------------
We briefly give a comparison with the results of the case without the 1 gluon exchange interaction. In our notation the results for (a) are given first (without brackets) and the results for (b) appear alongside in brackets.
The lower bound for 2 flavour PC is found to be $$B^{1/4}_< = 141.5 \; (156.7) \mbox{\rm ~MeV}$$ For (a) this is lower than the 148 MeV found without 1 gluon exchange, indicating that the gluon exchange is repulsive even with the constituent quark mass generated by the condensate, whereas for (b) it is higher and the 1 gluon exchange is attractive.
We note that for the case of 2 Flavour CRQM (with gluon exchange) considered by Farhi and Jaffe, we find $$B^{1/4}_< = 132 \mbox{\rm ~MeV}$$ which is even lower relative to the 145 MeV found without 1 gluon exchange.
This clearly shows that the effect of gluon exchange is more repulsive for this case as the quarks are massless as opposed to the PC case when they have a mass, $m = gF$.
We note that our value, $B^{1/4}_< = 132$ MeV, is larger than that given in Fig. 1(c) of [@ref2]. This is due to the difference in the way the energy density and the density are defined by us and in [@ref2]. They begin with the TP to order $\alpha$, derive a density which includes the interaction to order $\alpha$, and use this density to define the energy density. The difference between our case and theirs is $O(\alpha^2)$.
The maximum upper bound on $B$ naturally occurs for the $m_s = 0$ case $$B^{1/4}_> = 150.3 \; (166.1) \mbox{\rm ~MeV}$$ Interestingly, in this case the minimum in $E_B$ comes from a [*new ground state*]{} and is genuinely different. It occurs neither in the 2 flavour PC state nor in the 3 flavour CRQM state as given in Farhi and Jaffe, with $F=0$, as was the case in the absence of the 1 gluon interaction. In this case the minimum is lower than either of these states and comes from a true merger of the two; it has a non zero value, $F=22.1 \; (36.5)$ MeV, and a ratio of the strange quark density to the u quark density of $0.83 \; (0.69)$.
For comparison, for the 3 flavour CRQM state of [@ref2], $$B^{1/4}_> = 144.5 \mbox{\rm ~MeV}$$
We find the maximum allowed limit on $m_s$, with gluon exchange included, for both (a) and (b), moves down to $$m_s < 150 \mbox{\rm ~MeV}$$ for SQM to be the absolute ground state.
We remark that the cases (a) and (b) show the same trends. The difference is that the allowed values of $B$ are shifted up in (b).
Some of these results are summarized in Fig. \[figure2\].
Conclusions
===========
We have found that the existence of a new and lower energy state of 2 flavour quark matter, the pion condensed state, has a significant effect on the window of opportunity for SQM to be the true ground state of matter.
We work with an effective chiral Lagrangian. Unlike the MIT bag case [@ref2], where the bag pressure is a parameter, in our formulation it is the chiral condensate energy and is given in terms of the parameters of low energy phenomenology: the pion decay constant, $f_\pi$, which is precisely known, and the scalar coupling or the $\sigma$ mass, which is rather poorly known. However, it is of interest that SQM is related to the parameters of low energy phenomenology. It requires more detailed work to firm up this connection. Furthermore, we have found a new and interesting constraint on the existence of SQM as the true ground state, which comes from the current mass of the strange quark.
Our finding is that
1. Without including 1 gluon exchange the new PC ground state limits the bounds on the bag pressure $B$, allowing $$148 \mbox{\rm ~MeV} < B^{1/4} < 162.5 \mbox{\rm ~MeV}$$ instead of the result of [@ref2] $$145 \mbox{\rm ~MeV} < B^{1/4} < 162.5 \mbox{\rm ~MeV}$$ and cuts down the allowed parameter space of the explicit or current strange quark mass to $$m_s < 250 \mbox{\rm ~MeV}$$
2. With $\alpha_{QCD} = 0.6$ and including 1 gluon exchange, the new PC ground state, with some simplifying approximations, strongly limits the bounds on the bag pressure $B$, allowing $$141.5 (156.7) \mbox{\rm ~MeV} < B^{1/4} < 150.3 (166.1) \mbox{\rm ~MeV}$$ instead of the result of [@ref2]: $$128.5 \mbox{\rm ~MeV} < B^{1/4} < 144.5 \mbox{\rm ~MeV}$$ and it further cuts down the allowed parameter space of the explicit strange quark mass to $$m_s < 150 \mbox{\rm ~MeV}$$ This is a rather punishing constraint.
These results are obtained in the 2 flavour chiral limit with some approximations and also assuming that all the bag pressure comes from the chiral condensate. Adding a confinement pressure will raise the energy, $E_B$, for SQM and may shrink the window further.
This is, by no means, the last word on possible ground states even in this model; for example, we may have a kaon condensate. However, we shall not investigate this here.
Acknowledgements {#acknowledgements .unnumbered}
----------------
We would like to thank W. Broniowski for sharing with us the program on the Pion Condensed state which is the mainstay of this paper. We genuinely thank Judith McGovern for her ever willing help with the intricacies of the SU(3) chiral model and more. We thank Bob Jaffe for very prompt clarifications on the expressions in their paper, which is the benchmark for this one. We further thank Mike Birse and M. Kutschera. VS thanks the Raman Research Institute for its hospitality.
[99]{} E. Witten, Phys. Rev. D, 30, 272 (1984). E. Farhi and R.L. Jaffe, Phys. Rev. D, 30, 2379 (1984). C. Alcock and A. Olinto, Ann. Rev. Nucl. Part. Sci., 38, 161 (1988). V. Soni, Mod. Phys. Lett. A, 11, 331 (1996), A. Manohar and H. Georgi, Nucl. Phys. B, 234, 203 (1984). S. Kahana, G. Ripka and V. Soni, Nucl. Phys. A, 415, 351 (1984), M.C. Birse and M.K. Banerjee, Phys. Lett., 134B, 284 (1984). V. Soni, ‘The nucleon and strongly interacting matter’, Invited talk at DAE Symposium in Nuclear Physics, Bombay, Dec 1992, and references therein. M.C. Birse, Soliton Models in Nuclear Physics, Progress in Particle and Nuclear Physics, Vol. 25, 1 (1991), and references therein. See for example, R. Johnson, N.W. Park, J. Schechter, V. Soni and H. Weigel, Phys. Rev. D, 42, 2998 (1990), J. Stern and G. Clement, Mod. Phys. Lett. A, 3, 1657 (1988). See for example, J. Stern and G. Clement, Phys. Lett. B, 264, 426 (1991), E.J. Eichten, I. Hinchcliffe and C. Quigg, Fermilab-Pub 91/272-T. D. Diakonov, V. Petrov, P. Pobylitsa, M. Polyakov and C. Weiss, Nucl. Phys. B, 480, 341 (1996), Phys. Rev. D, 56, 4069 (1997). A. Gocksch, Phys. Rev. Lett., 67, 1701 (1991). V. Soni, Phys. Lett., 152B, 231 (1985). M. Kutschera, W. Broniowski and A. Kotlorz, Nucl. Phys. A, 516, 566 (1990). M. Alford, K. Rajagopal and F. Wilczek, Phys. Lett. B, 422, 247 (1998), Nucl. Phys. B, 537, 443 (1999). J.A. McGovern and M.C. Birse, Nucl. Phys. A, 506, 367 (1990). G. Baym and S. Chin, Phys. Lett., 62B, 241 (1976), G. Baym, in NORDITA lectures, ‘Neutron Stars and the Properties of Matter at High Density’ (1977).
---
abstract: 'We explore an application of all-pay auctions to model trade wars and territorial annexation. Specifically, in the model we consider, the expected resource, production, and aggressive (military/tariff) powers are public information, but actual resource levels are private knowledge. We consider the resource transfer at the end of such a competition, which deprives the weaker country of some fraction of its original resources. In particular, we derive the quasi-equilibrium strategies for two country conflicts under different scenarios. This work is relevant for the ongoing US-China trade war and the recent Russian capture of Crimea, as well as historical and future conflicts.'
author:
- 'Benjamin Kang,'
- James Unwin
title: 'All-Pay Auctions as Models for Trade Wars and Military Annexation'
---
1. Introduction
===============
Auctions have long been used as mathematical models of competition [@Milgrom; @Krishna; @Capen; @Cassady; @Myerson; @Vickrey; @Wilson]. A setting useful for certain competitions is the ‘all-pay’ auction, in which losing bidders must also forfeit their bids [@Kang; @Krishna]. International military conflicts provide perhaps the ultimate form of competition, and here we employ all-pay auctions as a model of military territorial expansions and trade wars. We are unaware of any existing auction models of trade wars, although this is quite a natural description, and there are also few game theoretic analyses (although see e.g. [@Harrison]).
Building on earlier work of Hodler and Yektaş [@Hodler], in which an all-pay auction is used to model total military conquest, we expand this framework to study partial military conquest, in which the victor takes only a fraction of the loser’s resources, i.e. the victor expands their territory at the expense of the loser. This generalization is mathematically non-trivial and is more relevant to real world events, such as the Russian annexation of the Crimean Peninsula and the Israeli annexation of the Golan Heights. We also offer a reinterpretation of this framework in the context of trade wars.
This paper is structured as follows: In Section 2 we introduce the mathematical set-up of Hodler and Yektaş [@Hodler], and discuss how this model can be generalised to explore partial resource losses following a conflict between two countries. We formulate this scenario in terms of an all-pay auction with payoff function $W$. In Section 3 we discuss the mutual best responses for each country without incorporating constraints and then subsequently outline the conditions for quasi-equilibria. In contrast to [@Hodler], obtaining the quasi-equilibria is mathematically non-trivial and is the focus of Sections 4 & 5. We first solve the relevant equations exactly for the special case that neither country has a competitive edge in Section 4. Then, in Section 5, we find an approximate solution for the quasi equilibrium strategies for the more general case. Section 6 provides a discussion of our results and an interpretation of the mathematical framework in terms of both conventional conflicts and trade wars.
2. Fractional Resource Forfeiture
=================================
Emulating [@Hodler], we characterize the two countries (labelled by $i = 1, 2$) by their aggressive (e.g. military) power $\tilde{\lambda_i} \in \mathbb{R^+}$, production level $\tilde{\beta_i} \in \mathbb{R^+}$, and expected resource level $R_i \in \mathbb{R^+}$. While the values of these three variables are public knowledge, a private variable $r_i \in [0,1]$ is introduced such that the actual resource endowment of a country $2r_iR_i$ is private information known only to a given country. The private variables $r_i$ are publicly known to be distributed uniformly.
Furthermore, we introduce the notation $\lambda_i = 2R_i\tilde{\lambda_i}$ and $\beta_i = 2R_i\tilde{\beta_i}$, which correspond to the maximum potential aggression and production, respectively. We also define the ratios between countries of these quantities $$\lambda = \frac{\lambda_1}{\lambda_2}\,, \qquad \beta = \frac{\beta_1}{\beta_2}\,.$$ It can be assumed without loss of generality that $\beta \leq \lambda$, implying Country $1$ has an advantage in aggressive potential, while Country $2$ has an advantage in production.
Suppose that $b_i$ is the resource allocation, such that $b_i\lambda_i$ is allocated to aggressive actions and $(r_i-b_i)\beta_i$ is allocated to production, and define the equilibrium strategies $f_i(r_i) = b_i$ such that $f_i :[0,1] \rightarrow [0,r_i]$. It will be useful to note the following lemma from [@Hodler]:
For functions $f_i(r_i)\in[0,r_i]$, in any monotone equilibrium $f_1(0) =f_2(0) = 0$. Further, $f_1$ and $f_2$ are non-decreasing and $\lambda f_1(1) = f_2(1)$. \[l1\]
It is assumed that the country with the greater aggressive expenditure wins the competition, acquiring a fraction $1-\alpha$ of the losing party's production, with $\alpha \in [0,1]$, while also retaining all of its own resources; the loser thus keeps only the fraction $\alpha$ of its production. This can be encapsulated in the following payoff function $$W_i =
\begin{cases}
\alpha\beta_i(r_i-b_i) & \lambda_ib_i < \lambda_jb_j\,,\\
\beta_i(r_i-b_i) & \lambda_ib_i = \lambda_jb_j\,,\\
\beta_i(r_i-b_i) + (1-\alpha)\beta_j(r_j-b_j) & \lambda_ib_i > \lambda_jb_j\,.
\end{cases} \[WWW\]$$ Note that taking $\alpha=0$ recovers the total-forfeiture case studied in [@Hodler].
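As an illustrative sketch (ours, not from the paper), the case structure of the payoff above can be written as a small function; the parameter names mirror the symbols in eq. (\[WWW\]), and the retained fraction $\alpha$ follows the convention used in the formulas throughout.

```python
# Illustrative sketch of the payoff W_i in eq. [WWW]: the winner keeps its own
# production and acquires a fraction (1 - alpha) of the loser's, while the
# loser retains only the fraction alpha of its production.

def payoff(i, b, r, lam, beta, alpha):
    """Payoff W_i for country i (0 or 1), given bids b=(b_1,b_2),
    types r=(r_1,r_2), aggression levels lam=(lam_1,lam_2),
    production levels beta=(beta_1,beta_2) and retained fraction alpha."""
    j = 1 - i
    own = beta[i] * (r[i] - b[i])          # value of i's own production
    if lam[i] * b[i] > lam[j] * b[j]:      # country i wins the contest
        return own + (1 - alpha) * beta[j] * (r[j] - b[j])
    elif lam[i] * b[i] < lam[j] * b[j]:    # country i loses
        return alpha * own
    return own                             # tie: each keeps its own production

# alpha = 0 reproduces total forfeiture: the loser is left with nothing.
```

With symmetric unit parameters, a winner bidding $0.2$ against $0.1$ keeps its own production and adds the opponent's, while at $\alpha=0$ the loser's payoff vanishes.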
3. Equilibrium Strategies
=========================
Given the form of $W_i$ it follows that the payoff for Country $1$ of type $r_1$ making an optimal bid $y=f_1(r_1)$ is given by $$u_1(y)=\int_0^{f_2^{-1}(\lambda y)}\Big[\beta_1(r_1-y)+(1-\alpha)\beta_2\big(r_2-f_2(r_2)\big)\Big]\,{\rm d}r_2 + \int_{f_2^{-1}(\lambda y)}^{1}\alpha\beta_1(r_1-y)\,{\rm d}r_2\,.$$ One can formulate each country's mutual best response:
Disregarding any constraints, the mutual best responses for each party are to follow strategies of the form $$\begin{aligned}\[eq:8\]
f_1(r_1) = &\frac{K_0}{(1-\alpha)(\lambda+2\beta)}\big((1-\alpha)r_1+\alpha\big)^{\beta/\lambda}\\
&+\frac{\beta}{2\lambda+\beta}\,r_1-\frac{\alpha(2\lambda+\beta+\lambda\beta)}{(1-\alpha)(2\lambda+\beta)(\lambda+\beta)}\\
&+ K_1\big((1-\alpha)r_1+\alpha\big)^{-(1+\beta/\lambda)},\\
f_2(r_2) = &\frac{\lambda\beta K_0^{-\lambda/\beta}}{(1-\alpha)(2\lambda+\beta)}\big((1-\alpha)r_2+\alpha\big)^{\lambda/\beta}\\
&+\frac{\lambda}{\lambda+2\beta}\,r_2-\frac{\alpha\lambda\beta(1+\lambda+2\beta)}{(1-\alpha)(\lambda+2\beta)(\lambda+\beta)}\\
&+ K_2\big((1-\alpha)r_2+\alpha\big)^{-(1+\lambda/\beta)} ,\end{aligned}$$ where $K_0$, $K_1$, and $K_2$ are arbitrary constants.
To identify the value of $y$ which maximizes the payoff function $u_1(y)$ for Country 1, we set the derivative with respect to $y$ to zero to obtain the condition $$\lambda\,(f_2^{-1})'(\lambda y)\,F(y) =\beta\left[f_2^{-1}(\lambda y) + \frac{\alpha}{1-\alpha}\right], \[eq:1\]$$ where $F(y)=\beta(f_1^{-1}(y)-y)+f_2^{-1}(\lambda y)-\lambda y$. Writing Country $2$'s bid similarly as $z = f_2(r_2)$ and setting $y=z/\lambda$, maximising Country $2$'s payoff requires $$\frac{1}{\lambda}\,(f_1^{-1})'(y)\,F(y) =f_1^{-1}(y) + \frac{\alpha}{1-\alpha}. \[eq:2\]$$ Since the factor $F(y)$ appears in both eqns. (\[eq:1\]) & (\[eq:2\]) we can eliminate it by taking their ratio, leading to $$\Big(\ln\big[(1-\alpha)f_2^{-1}(\lambda y)+\alpha\big]\Big)' = \frac{\beta}{\lambda}\Big(\ln\big[(1-\alpha)f_1^{-1}(y)+\alpha\big]\Big)'.$$ Integrating, it follows that $$(1-\alpha)f_2^{-1}(\lambda y) + \alpha = K_0\big((1-\alpha)f_1^{-1}(y) + \alpha\big)^{\beta/\lambda} , \[xd\]$$ where $K_0$ is a constant. To proceed we identify that Country 1's optimal bid $f_1(r_1) = y$ is related to Country 2's optimal bid $f_2(r_2) =z$ by $y = z/\lambda$.
Then substituting eq. (\[xd\]) back into eqns. (\[eq:1\]) and (\[eq:2\]), and rewriting both in terms of $r_1$ and $r_2$, we obtain the following two equations: $$\begin{aligned}\[eq:6\]
\frac{{\rm d}f_1}{{\rm d}r_1} = &\frac{K_0}{\lambda}\big((1-\alpha)r_1+\alpha\big)^{\beta/\lambda-1}-\frac{(\lambda+\beta)(1-\alpha)}{\lambda\big((1-\alpha)r_1+\alpha\big)}\,f_1\\
& +\frac{(1-\alpha)\beta r_1-\alpha}{\lambda\big((1-\alpha)r_1+\alpha\big)},\\
\frac{{\rm d}f_2}{{\rm d}r_2} = &\lambda K_0^{-\lambda/\beta}\big((1-\alpha)r_2+\alpha\big)^{\lambda/\beta-1}-\frac{(\lambda+\beta)(1-\alpha)}{\beta\big((1-\alpha)r_2+\alpha\big)}\,f_2\\
&+\frac{\lambda\big((1-\alpha)r_2-\alpha\big)}{\beta\big((1-\alpha)r_2+\alpha\big)}.\end{aligned}$$ Solving the above first order linear differential equations for $f_1$ and $f_2$ yields the stated result.
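As an illustrative numerical check (ours, not part of the original analysis), one can verify by finite differences that the general solution for $f_1$ satisfies the first of these differential equations for arbitrary constants; the closed forms are written out explicitly in the code, and the parameter values are arbitrary choices.

```python
# Finite-difference check (illustrative) that the general solution for f_1
# solves the linear ODE df_1/dr_1 = RHS(r_1, f_1) for arbitrary K_1.
lam, beta, alpha, K0, K1 = 1.4, 1.1, 0.25, 1.2, 0.05   # arbitrary example values

def u(r):
    return (1 - alpha) * r + alpha

def f1(r):
    """General solution: particular part plus K1 times the homogeneous mode."""
    return (K0 * u(r)**(beta/lam) / ((1-alpha)*(lam+2*beta))
            + beta*r/(2*lam+beta)
            - alpha*(2*lam+beta+lam*beta)/((1-alpha)*(2*lam+beta)*(lam+beta))
            + K1 * u(r)**(-(1+beta/lam)))

def rhs(r, f):
    """Right-hand side of the first-order ODE for f_1."""
    return ((K0/lam) * u(r)**(beta/lam - 1)
            - (lam+beta)*(1-alpha)*f/(lam*u(r))
            + ((1-alpha)*beta*r - alpha)/(lam*u(r)))

h = 1e-6
r = 0.6
deriv = (f1(r+h) - f1(r-h)) / (2*h)      # central difference approximation
residual = abs(deriv - rhs(r, f1(r)))    # should vanish up to O(h^2)
```

The residual is at the level of the finite-difference truncation error, independently of the value chosen for $K_1$.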
To find the quasi-equilibrium strategies of each player, we apply the boundary conditions below (following [@Hodler]) $$f_1(0)=f_2(0)=0\,,\qquad f_1(1)=f_2(1)/\lambda \,. \[bcbc\]$$ These conditions are appropriate to the problem due to the properties highlighted in Lemma \[l1\].
These boundary conditions lead to three equations involving the three unknown constants $K_0,K_1,K_2$. Solving the two conditions at $r_i=0$ for $K_1$ and $K_2$ in terms of $K_0$, the final boundary condition $f_1(1)=f_2(1)/\lambda$ can be expressed in the following form $$a K_0+b K_0^{-\lambda/\beta} =c, \[eq:13\]$$ where $$\begin{aligned}\[abc\]
a&= \frac{1-\alpha^{1+2\beta/\lambda}}{(1-\alpha)(\lambda+2\beta)}\,,\\
b&= -\frac{\beta\big(1-\alpha^{1+2\lambda/\beta}\big)}{(1-\alpha)(2\lambda+\beta)}\,,\\
c&=\frac{1}{\lambda+2\beta}-\frac{\beta}{2\lambda+\beta}+\frac{\alpha(2\lambda+\beta+\lambda\beta)\big(1-\alpha^{1+\beta/\lambda}\big)}{(1-\alpha)(2\lambda+\beta)(\lambda+\beta)}\\
& \quad - \frac{\alpha\beta(1+\lambda+2\beta)\big(1-\alpha^{1+\lambda/\beta}\big)}{(1-\alpha)(\lambda+2\beta)(\lambda+\beta)} \,.\end{aligned}$$ Unfortunately, eq. (\[eq:13\]) does not have an exact general solution, but as a first approach in the next section we will investigate the case $\lambda = \beta$, which is exactly solvable.
4. Quasi-equilibria for the case $\lambda = \beta$
===================================================
The case $\lambda = \beta$ implies $\lambda_1/\beta_1=\lambda_2/\beta_2$ and represents the scenario in which neither country has a comparative advantage in aggressive power for a given level of production. Note that $\lambda = \beta=1$ would imply that both countries are equal in both aggressive power and production level, and thus neither country has any advantage.
The quasi-equilibrium strategies for each country for $\lambda = \beta$ and $r_i \in [0,1]$ are $$\begin{aligned}
f_1(r_1) &= \frac{1+\lambda}{3\lambda}\,r_1 -\frac{(1+\lambda)\,\alpha}{6\lambda(1-\alpha)}+ \frac{(1+\lambda)\,\alpha^3}{6\lambda(1-\alpha)}\big((1-\alpha)r_1+\alpha\big)^{-2}\,,\\
f_2(r_2) &= \frac{1+\lambda}{3}\,r_2-\frac{(1+\lambda)\,\alpha}{6(1-\alpha)}+ \frac{(1+\lambda)\,\alpha^3}{6(1-\alpha)}\big((1-\alpha)r_2+\alpha\big)^{-2}\,.\end{aligned}$$
For $\lambda = \beta$ the exponent of $K_0^{-\frac{\lambda}{\beta}}$ is simply $-1$ and the coefficients of eq. (\[abc\]) simplify to $$a= \frac{1+\alpha+\alpha^2}{3\lambda}\,, \qquad b= -\frac{1+\alpha+\alpha^2}{3}\,,\qquad
c=\frac{1}{3\lambda}(1+\alpha+\alpha^2)(1-\lambda) \,.$$ Therefore, eq. (\[eq:13\]) simplifies to $$\left(\frac{1+\alpha+\alpha^2}{3\lambda} \right)\left(K_0^2 -(1-\beta)K_0 - \beta \right) =0, \[eq:14\]$$ which has solutions $K_0 = 1, -\beta$. Since $\beta\geq0$ and from eq. (\[xd\]) it follows $K_0\geq 0$, we hence take $K_0 = 1$. Then evaluating the forms of $K_1$ and $K_2$ (obtained through the conditions $f_1(0) = f_2(0) = 0$) we obtain $$K_1 = \frac{(1+\lambda)\,\alpha^3}{6\lambda(1-\alpha)}\,, \qquad K_2 = \frac{(1+\lambda)\,\alpha^3}{6(1-\alpha)}\,. \[eq:15\]$$ Substituting $K_0=1$, and the forms for $K_1$ and $K_2$ above, into the expressions of Lemma 2 completes the proof.
The quasi-equilibrium strategies for each country for $\lambda = \beta$ are related via $$f_2(r_i)=\lambda f_1(r_i)\,.$$
This result implies that whichever country has the absolute advantage in both aggressive power and production is expected to win the trade war. Moreover, for the sub-case $\lambda = \beta=1,$ in which the two countries have equal strength in both aggressive power and production, the equilibrium strategies are identical.
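A short numerical sanity check (our own sketch, with arbitrary example values of $\lambda=\beta$ and $\alpha$) confirms the boundary conditions and the stated proportionality between the two strategies, with the closed forms written out explicitly in code:

```python
# Numerical check (illustrative) of the lambda = beta strategies:
# f_1(0) = f_2(0) = 0, lambda*f_1(1) = f_2(1), and f_2(r) = lambda*f_1(r).
lam, alpha = 1.5, 0.3                     # example values with lambda = beta
K1 = (1+lam)*alpha**3/(6*lam*(1-alpha))
K2 = (1+lam)*alpha**3/(6*(1-alpha))

def f1(r):
    return ((1+lam)/(3*lam))*r - (1+lam)*alpha/(6*lam*(1-alpha)) \
           + K1*((1-alpha)*r + alpha)**(-2)

def f2(r):
    return ((1+lam)/3)*r - (1+lam)*alpha/(6*(1-alpha)) \
           + K2*((1-alpha)*r + alpha)**(-2)

checks = [abs(f1(0.0)), abs(f2(0.0)), abs(lam*f1(1.0) - f2(1.0))]
checks += [abs(f2(r) - lam*f1(r)) for r in (0.25, 0.5, 0.75)]
```

All checks vanish to machine precision, consistent with Lemma \[l1\] and the proportionality above.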
5. Quasi-equilibria for the case $\lambda \neq \beta$
=====================================================
The analysis of the case $\lambda \neq \beta$ requires some care, since for $\lambda\neq\beta$ the equations exhibit singularities as $r_i\rightarrow0$. As a result, a minor modification is required to consider this more general case. Specifically, to obtain the equilibrium strategies we will restrict the private resource variable to $r_i \in [\epsilon,1]$ for some small positive constant $\epsilon$. The presence of a fixed $\epsilon$ cuts off the divergences, and in this case eq. (\[eq:13\]) can be solved. The analysis, however, is substantially more complicated than the case of the previous section (and the study in [@Hodler]).
Introducing this infinitesimal $\epsilon$, we adapt the boundary conditions of eq. (\[bcbc\]) to the following $$f_1(\epsilon)=f_2(\epsilon)=0\,,\qquad f_1(1)=f_2(1)/\lambda \,.$$ Taking eq. (\[eq:8\]) of Lemma 2 and evaluating $f_1$ at $\epsilon$ gives $$\begin{aligned}\[eq:11\]
f_1(\epsilon)&=\frac{K_0\big((1-\alpha)\epsilon+\alpha\big)^{\beta/\lambda}}{(1-\alpha)(\lambda+2\beta)} +\frac{\beta\epsilon}{2\lambda+\beta}\\
&\quad- \frac{\alpha(2\lambda+\beta+\lambda\beta)}{(1-\alpha)(2\lambda+\beta)(\lambda+\beta)} + K_1\big((1-\alpha)\epsilon+\alpha\big)^{-(1+\beta/\lambda)}.\end{aligned}$$ Taking first the boundary condition $f_1(\epsilon)=0$ and solving for $K_1$ at first order in the infinitesimal quantity $\epsilon$ implies $$K_1=-\alpha^{1+\beta/\lambda}\left[\frac{K_0\,\alpha^{\beta/\lambda}}{(1-\alpha)(\lambda+2\beta)}-\frac{\alpha(2\lambda+\beta+\lambda\beta)}{(1-\alpha)(2\lambda+\beta)(\lambda+\beta)}\right]-\frac{\epsilon}{\lambda}\left[K_0\,\alpha^{2\beta/\lambda}-\alpha^{1+\beta/\lambda}\right]. \[10\]$$ Then applying the next condition $f_2(\epsilon)=0$ and solving for $K_2$ at first order in $\epsilon$ gives $$K_2=-\alpha^{1+\lambda/\beta}\left[\frac{\lambda\beta K_0^{-\lambda/\beta}\,\alpha^{\lambda/\beta}}{(1-\alpha)(2\lambda+\beta)}-\frac{\alpha\lambda\beta(1+\lambda+2\beta)}{(1-\alpha)(\lambda+2\beta)(\lambda+\beta)}\right]-\epsilon\lambda\left[K_0^{-\lambda/\beta}\,\alpha^{2\lambda/\beta}-\alpha^{1+\lambda/\beta}\right]. \[11\]$$ The final boundary condition $f_1(1)=f_2(1)/\lambda$ requires $$\begin{aligned}
& \frac{K_0}{(1-\alpha)(\lambda+2\beta)} +\frac{\beta}{2\lambda+\beta}- \frac{\alpha(2\lambda+\beta+\lambda\beta)}{(1-\alpha)(2\lambda+\beta)(\lambda+\beta)} + K_1\\
= & \frac{\beta K_0^{-\lambda/\beta}}{(1-\alpha)(2\lambda+\beta)} +\frac{1}{\lambda+2\beta}- \frac{\alpha\beta(1+\lambda+2\beta)}{(1-\alpha)(\lambda+2\beta)(\lambda+\beta)} + \frac{K_2}{\lambda}.\end{aligned}$$ Substituting the forms of $K_1$ and $K_2$ into the above and rearranging, we arrive at the condition $$K_0 = \frac{C_3-C_1K_0^{-\lambda/\beta}}{C_2}\,, \[xx\]$$ where $C_1$, $C_2$ and $C_3$ are parameter dependent constants which, in terms of the coefficients $a$, $b$, $c$ of eq. (\[abc\]), read $$\begin{aligned}
C_1&= b+\epsilon\,\alpha^{2\lambda/\beta}\,,\\
C_2&=a - \frac{\epsilon}{\lambda}\,\alpha^{2\beta/\lambda}\,,\\
C_3&=c+\epsilon\left(\alpha^{1+\lambda/\beta} -\frac{1}{\lambda}\,\alpha^{1+\beta/\lambda}\right)\,. \end{aligned}$$ Solving eq. (\[xx\]) analytically for $K_0$ is challenging, however an approximate solution can be found via iterative methods, as we show below.
To proceed we take the zeroth order approximation to be $K_0^{(0)}=1$ (the value found in Theorem 1). The first order value $K_0^{(1)}$ is found by taking $K_0^{(0)}\approx1$ on the RHS of eq. (\[xx\]), implying $K_0^{(1)}\approx\frac{C_3- C_1}{C_2}$. Then an approximate solution for $K_0$ is provided by the second order iteration $K_0^{(2)}$ found by substituting $K_0^{(1)}$ into the RHS of eq. (\[xx\]) to obtain $$K_0^{(2)} \approx \frac{C_3}{C_2} - \frac{C_1}{C_2}\left(\frac{C_3-C_1}{C_2}\right)^{-\lambda/\beta} .$$ Finally, using this second order approximate expression $K_0^{(2)}$, along with the exact forms of $K_1$ and $K_2$ obtained in eqns. (\[10\]) & (\[11\]), in eq. (\[eq:8\]) of Lemma 2, we derive the approximate equilibrium strategies for the more general case with $\lambda \neq \beta$. Specifically, we obtain the following result:
The quasi-equilibrium strategies for each country for $r_i \in [\epsilon,1]$, to leading order in $\epsilon$ and to second order precision in $K_0$, are given by eq. (\[yy\]):
$$\begin{aligned}\[yy\]
f_1(r_1) & \approx \left(\frac{C_3}{C_2} - \frac{C_1}{C_2}\Big(\frac{C_3-C_1}{C_2}\Big)^{-\lambda/\beta}\right)\frac{\big((1-\alpha)r_1+\alpha\big)^{\beta/\lambda}}{(1-\alpha)(\lambda+2\beta)} +\frac{\beta}{2\lambda+\beta}\,r_1-\frac{\alpha(2\lambda+\beta+\lambda\beta)}{(1-\alpha)(2\lambda+\beta)(\lambda+\beta)}\\
&\quad+ K_1\big((1-\alpha)r_1+\alpha\big)^{-(1+\beta/\lambda)},\\
f_2(r_2) &\approx \left(\frac{C_3}{C_2} - \frac{C_1}{C_2}\Big(\frac{C_3-C_1}{C_2}\Big)^{-\lambda/\beta}\right)^{-\lambda/\beta}\frac{\lambda\beta\big((1-\alpha)r_2+\alpha\big)^{\lambda/\beta}}{(1-\alpha)(2\lambda+\beta)} +\frac{\lambda}{\lambda+2\beta}\,r_2-\frac{\alpha\lambda\beta(1+\lambda+2\beta)}{(1-\alpha)(\lambda+2\beta)(\lambda+\beta)}\\
&\quad+ K_2\big((1-\alpha)r_2+\alpha\big)^{-(1+\lambda/\beta)} ,\end{aligned}$$ with $K_1$ and $K_2$ evaluated at $K_0=K_0^{(2)}$ via eqns. (\[10\]) & (\[11\]).
6. Discussion
=============
Auction theory has previously been used to model competition between animals [@Bishop], countries [@ONeill; @Hodler] and individuals or groups [@Baye; @Siegel; @Snyder]. Moreover, for game theoretic approaches to military conflicts see [@Haavelmo; @Garfinkel; @Hirshleifer; @Skaperdas; @Grossman]. Here we have extended the earlier work of Hodler and Yektaş [@Hodler], on forfeiture due to conflicts, to include the important case of partial resource losses. In our analysis the fraction of resources retained by the loser is controlled by the variable $\alpha\in [0,1]$, so that the loser cedes the fraction $1-\alpha$ to the winner. Note that $\alpha$ parametrises a loss of resources, thus in the context of military conquest the ceded fraction need not strictly measure captured land area, since resources (oil, minerals, etc.) may not be distributed uniformly.
Notably, one of the more novel and topical applications of the all-pay auction model developed here is to trade wars. Trade wars occur when two countries impose tariffs in response to one another. Each country seeks to limit its opponent economically; however, this typically also harms its own domestic consumers, as the price of certain items will increase due to the lack of imported materials, and thus the creation of tariffs entails a cost. The original interpretation put forward in [@Hodler] was that $\tilde{\lambda_i}$ corresponded to military power and the forfeit for the losing country was a total loss of resources. The framework presented here can be reinterpreted as a model of trade wars if the variable $\tilde{\lambda_i}$ is taken to be the ability of a country to introduce tariffs, while the ceded fraction $1-\alpha$ is interpreted as an economic loss (e.g. of market share) relative to competitors. In the context of trade wars, Section 4 gives the exact equilibrium strategies when neither country has a comparative advantage for a given level of production, and Section 5 gives the approximate solution for countries with dissimilar tariffs or production rates.
We note that this analysis assumes a net-zero economic loss between the two countries. This simplifying assumption could be relaxed by modifying the $(1 - \alpha)$ term in eq. (\[WWW\]) to include an additional variable $\kappa\in[0,1]$ accounting for losses to third party competitors who are not otherwise involved in the trade war, such that $\alpha\in[0,\kappa]$.
Trade wars have become an important element of current world economics. However, there are few theoretical analyses of trade wars and this work, in the context of auction theory, adds a new aspect to the literature.
This research was undertaken as part of the MIT-PRIMES program. JU is grateful for support from the National Science Foundation through grant DMS-1440140 while at the Mathematics Sciences Research Institute, Berkeley during Fall 2019.
P. Milgrom and R. Weber, [*A theory of auctions and competitive bidding.*]{} Econometrica (1982): 1089-1122.
E. Capen, R. Clapp, and W. M. Campbell. [*Competitive bidding in high-risk situations.*]{} Journal of petroleum technology 23.06 (1971): 641-653.
R. Cassady, [*Auctions and auctioneering.*]{} University of California Press, 1967.
R. B. Myerson, [*Optimal auction design.*]{} Mathematics of operations research 6.1 (1981): 58-73.
W. Vickrey, [*Counterspeculation, auctions, and competitive sealed tenders.*]{} The Journal of finance 16.1 (1961): 8-37.
R. Wilson, [*A bidding model of perfect competition.*]{} The Review of Economic Studies 44.3 (1977): 511.
V. Krishna and J. Morgan, [*An analysis of the war of attrition and the all-pay auction.*]{} Journal of Economic Theory 72.2 (1997): 343-362.
B. Kang and J. Unwin, [*All-pay auctions with different forfeits,*]{} [arXiv:2002.02599 \[econ.TH\]](https://arxiv.org/abs/2002.02599), to appear in 2019 Yau High School Awards, International Press (Somerville, MA).
G. Harrison and E. Rutstrom. [*Trade wars, trade negotiations and applied game theory.*]{} The Economic Journal 101.406 (1991): 420-435.
R. Hodler and H. Yektaş, [*All-pay war.*]{} Games and Economic Behavior 74 (2012): 526-540.
D. T. Bishop, C. Canning, and J. Maynard Smith, [*The war of attrition with random rewards.*]{} J. Theoretical Biol. 74 (1978): 377-388.
B. O’Neill, [*International escalation and the dollar auction.*]{} J. Conflict Resolution 30 (1986): 33-50.
M. R. Baye, D. Kovenock, and C. G. de Vries, [*Rigging the lobbying process: An application of the all-pay auction.*]{} The American Economic Review. 83 (1993): 289-294.
R. Siegel, [*Asymmetric all-pay auctions with interdependent valuations,*]{} Journal of Economic Theory 153 (2014): 684.
J. M. Snyder, [*Election goals and the allocation of campaign resources.*]{} Econometrica (1989): 637-660.
T. Haavelmo, [*A study in the theory of economic evolution*]{}, Vol. 3. Amsterdam: North-Holland Publishing Company, 1954.
M. Garfinkel, [*Arming as a strategic investment in a cooperative equilibrium*]{}, The American Economic Review (1990): 50-68.
J. Hirshleifer, [*The dark side of the force: Economic foundations of conflict theory*]{}, Cambridge University Press, 2001.
M. Garfinkel and S. Skaperdas, [*Economics of conflict: An overview*]{}, Handbook of defense economics 2 (2007): 649.
H. Grossman, [*A general equilibrium model of insurrections*]{}, The American Economic Review (1991): 912-921.
---
abstract: 'Using a parametric approach, we determine the configuration of super-AGB stars at the explosion as a function of the initial mass and metallicity, in order to verify if the EC-SN scenario involving a super-AGB star is compatible with the observations regarding SN2008ha and SN2008S. The results show that both the SNe can be explained in terms of EC-SNe from super-AGB progenitors having a different configuration at the collapse. The impact of these results on the interpretation of other sub-luminous SNe is also discussed.'
author:
- 'M. L. Pumo, M. Turatto, M. T. Botticella, A. Pastorello, S. Valenti, L. Zampieri, S. Benetti, E. Cappellaro, F. Patat'
date: 'Received ... ; accepted ...'
title: 'EC-SNe from super-AGB progenitors: theoretical models vs. observations'
---
Introduction {#sec:intro}
============
It is widely accepted that there are two main explosion mechanisms leading to supernova (SN) events [e.g. @woosley86; @hille00]: the thermonuclear runaway in white dwarfs approaching the Chandrasekhar mass and the core collapse of stars with initial mass $\gtrsim 8\,M_{\odot}$ (CC-SNe). From an observational point of view, the former mechanism originates the relatively homogeneous type Ia SNe. The latter produces a huge variety of displays (different energetics and amounts of ejected material, reflected in heterogeneous light curves and spectral evolutions), which are thought to be linked to the possible interaction of the CC-SN ejecta with circumstellar material (CSM) and the different configuration of the CC-SN progenitor at the time of the explosion [e.g. @filip97; @hamuy03; @turatto03; @turatto07].
However the exact nature of CC-SN progenitors (initial mass, stellar structure and composition at the explosion) and the type of collapse (iron core collapse or neon-oxygen core collapse triggered by electron captures) are far from being well-established. There are still ambiguities that arise, on the theoretical side, from the uncertainties in modelling stellar evolution and the explosion mechanism [e.g. @woosley02; @heger03] and, on the observational side, from the sparse direct detections of progenitor stars and a controversial classification of some events [e.g. @smartt08; @turatto07].
A number of faint transients have been recently discovered whose nature is still ambiguous and extensively debated. In particular SN 2008S received a SN designation by @stan08, but @steele08 considered it as a SN “impostor”, and @smith08 as the exotic eruption of a luminous blue variable (LBV) object with a relatively low-mass, highly obscured progenitor ($\lesssim 15\,M_{\odot}$). An eruptive origin was invoked also for two other similar transients [M 85 OT2006-1 and NGC 300 OT2008-1; @kulk07; @Berger09; @bond09]. However, work based on multi-wavelength follow-up of the transients and mid-IR image analysis of the pre-explosion environments not only did not rule out a SN origin [@pastorello07; @Prieto08] but even suggested that they may be CC-SNe triggered by electron-capture reactions (so-called EC-SNe) involving a super-asymptotic giant branch (super-AGB) star [@thompson08]. The long-term multiwavelength monitoring of SN2008S and new comparisons with the two aforementioned transients seem to support the EC-SN interpretation [@bot09 B09 hereafter]. In particular B09 favor a scenario where the SN2008S progenitor is a super-AGB star embedded in an optically thick circumstellar shell. This conclusion is based on (1) the fact that the pre-explosion luminosity of the progenitor is in plausible agreement with the super-AGB models, (2) the capability of super-AGB stars to form thick circumstellar shells through mass-loss during the thermal pulses phase, (3) the similarity in the total radiated energy of SN2008S with that of other faint SNe, (4) the moderate velocities ($\sim 3000$ km s$^{-1}$) of the ejecta, and (5) a low but significant mass ($0.0014 \pm 0.0003\,M_{\odot}$) of ejected $^{56}$Ni.
An EC-SN explanation has been suggested also for SN2008ha [@val09 V09 hereafter], although an iron CC-SN involving a massive star (initial mass $\gtrsim 25-30\,M_{\odot}$) plus fallback onto the collapsed remnant cannot be ruled out. At first this object was included among the SN2002cx-like variety of peculiar type Ia SNe [@Li03; @jha06; @phillips07], but V09 revisited the thermonuclear scenario on the basis of the photometric and spectroscopic similarity to low-luminosity CC-SNe, concluding that all SN 2002cx-like objects could indeed be faint, stripped-envelope CC-SNe and that SN2008ha represents the faint tail in the luminosity distribution of this SN family. However, @foley09 did not definitively rule out the thermonuclear origin of the SN2002cx-like objects, and proposed to explain the peculiarity of SN2008ha in terms of an “accretion-induced” collapse [so-called AIC mechanism; see @metzger09 for details].
So far no clear picture has emerged and different scenarios may explain the aforementioned faint transients, especially because a detailed scrutiny of the configuration of super-AGB progenitors at the explosion, which is crucial for a comparison with SN observables, is still missing. In the light of the most recent super-AGB star models (e.g. @sp06, SP06 hereafter; @thesis, P06 hereafter; @s07, S07 hereafter; @Poel08), in this Letter we discuss in detail the possible link between EC-SNe from super-AGB progenitors and these transients using a parametric approach. After a brief sketch of super-AGB stellar evolution, we determine the stars' configuration at the explosion as a function of the initial mass and metallicity from the most recent grids of super-AGB stellar models, and then we investigate whether such a configuration is compatible with the observations.
Evolution and final fate of super-AGB stars {#sec:sagb}
===========================================
The current view of stellar evolution reveals the existence of two critical initial masses: $M_{up}$, defined as the minimum initial mass above which C-burning ignites, and $M_{mas}$, corresponding to the minimum initial mass for the completion of all the nuclear burning phases leading to an iron core collapse [e.g. @woosley02]. So-called super-AGB stars fill the gap between $M_{up}$ and $M_{mas}$. After the central He-burning, these stars develop partially degenerate CO cores, which are massive enough to exceed the threshold value for C-burning ignition, which transforms the core into a degenerate NeO mixture (e.g. P06; S07). The gravothermal energy released by the core contraction after the central He-exhaustion induces the occurrence of the second dredge-up or dredge-out phenomenon [e.g. @ritosa99], which increases the He abundance in the envelope up to $Y \sim 0.38$ (see e.g. Fig. 1 in @pumo08, but also SP06 and S07).
When C-burning stops in the core, the physical conditions are not suitable for Ne-ignition and the structure of super-AGB stars becomes very similar to that of AGB stars, having an inert core surrounded by He- and H-burning shells. Thus super-AGBs can be considered the high-mass equivalents of AGBs and, as such, they may suffer thermal pulses and lose mass as “normal” (but quite massive and luminous) AGB stars [e.g. @pumo08 and references therein]. In this situation the H-free core grows in mass and, if it reaches the Chandrasekhar limit ($M_{CH} \sim 1.37 M_{\odot}$; @n84), EC reactions are activated first on $^{24}$Mg and $^{24}$Na and then on $^{20}$Ne and $^{20}$F. Since these reactions have the effect of releasing entropy and decreasing the electron mole number $Y_e$, O-ignition and core collapse are induced almost simultaneously, and a deflagration front (incinerating the material into a nuclear statistical equilibrium composition) forms when the central density reaches $2.5\cdot10^{10}$ g$\cdot$cm$^{-3}$ [e.g. @Hillebrandt84]. However the O-deflagration is too “weak” to avoid the core collapse, so the collapse proceeds up to neutron star density [see @Miyaji80; @n84 for details]. This mechanism, leaving a neutron star as remnant, presents distinctive features [e.g. @kitaura06; @wanajo09]: low explosion energy ($\sim 10^{50}$ erg), large Ni/Fe ratio ($\simeq1$-$2$) and ejection of a small amount of $^{56}$Ni (between $\sim 0.002$ and $\sim0.004\,M_{\odot}$).
Whether or not the stellar core reaches the threshold value $M_{CH}$ for triggering the EC-SN, depends on the interplay between mass loss and core growth [e.g. @woosley02; @h05]. If mass loss is high enough, the envelope is lost before the core can reach $M_{CH}$, and the endpoint of super-AGB evolution is a NeO white dwarf (WD). On the contrary, if the mass loss is not so efficient, the super-AGB star evolves into an EC-SN. The critical initial mass setting the transition between super-AGBs that evolve into a NeO WD and the ones that undergo EC-SN is referred to as $M_{N}$.
Recent studies (SP06; P06; S07; @Poel08) have shown that the EC-SN channel may exist, but uncertainties in the mass loss and core growth rates hamper any conclusion on the exact fraction of super-AGBs evolving through this channel (see Fig. \[fig1\]). So the actual realization of the EC-SN mechanism in super-AGBs should be taken with caution. Nevertheless, it is fair to consider this scenario and its implications.
Outcome of EC-SNe from super-AGB progenitors. {#sec:outcome}
=============================================
As already mentioned in Sect. \[sec:intro\], for the comparison with SN observations it is crucial to know the configurations of the progenitors at the explosion. In fact, as explained below, the least massive super-AGB progenitors (i.e. super-AGBs with initial mass close to $M_{N}$) may have lost almost all their envelope at the time of the explosion, while the most massive ones (i.e. super-AGBs with initial mass close to $M_{mas}$) may still retain a massive ($\sim 8$-$9\,M_{\odot}$) envelope. Also the CSM can be different, with dense shells in proximity of the most massive super-AGB progenitors and much looser CSM around the lower-mass progenitors.
These diversities imply that EC-SNe may be observed as relatively faint Type II SNe (IIP or IIL, depending on the mass of the H-rich envelope) with a presumably low degree of CSM interaction, as Type IIb SNe with stronger interaction with a dense, structured and possibly He-enhanced (thanks to the second dredge-up or dredge-out) CSM, or even as stripped-envelope SNe.
In Tab. \[tabl1\] we summarise the main parameters describing the structure of super-AGB stars of different initial mass and metallicity at the time of explosion. These were built starting with the calculation of the stellar structure at the beginning of the thermally pulsing super-AGB (TP-SAGB) phase from the grids of super-AGB stellar models reported in P06 and S07. Afterwards, the structure at the explosion was calculated, considering that the envelope mass at the explosion may be estimated as follows: $$M_{env}^{EC-SN}=M_{\star}^{postCB}-M_{CH}-\Delta M_{loss}^{postCB} \, ,$$ where $M_{\star}^{postCB}$ is the total stellar mass at the beginning of the TP-SAGB phase and $\Delta M_{loss}^{postCB}$ is the mass lost during the TP-SAGB evolution. This last term can be estimated from the relation: $$\Delta M_{loss}^{postCB} = \overline{\dot M}_{loss} \cdot \Delta t_{CH} \, ,$$ where $\overline{\dot M}_{loss}$ is the averaged mass loss rate during the TP-SAGB evolution and $\Delta t_{CH}$ is the time interval from the beginning of the TP-SAGB phase until the core mass reaches $M_{CH}$, given by $$\Delta t_{CH}=\frac{M_{CH}-M_{core}^{postCB}}{\overline{\dot M}_{core}} \, .$$ In this expression, $M_{core}^{postCB}$ is equal to the H-free core mass at the beginning of the TP-SAGB phase and $\overline{\dot M}_{core}$ is the averaged core growth rate during the TP-SAGB evolution.
The values reported in Tab. \[tabl1\] are calculated considering a typical core growth rate of $\overline{\dot M}_{core}=5\cdot10^{-7}\,M_{\odot}\,yr^{-1}$ [e.g. @Poel06; @Poel08] and choosing a reasonable value of $\zeta\equiv \frac{\overline{\dot M}_{\rm loss}}{\overline{\dot M}_{\rm core}}=70$ during the TP-SAGB evolution (“realistic” values for this ratio vary from $\sim 35$ to $\sim 100$; see S07 for details). This choice for the $\zeta$ value corresponds to $\overline{\dot M}_{loss}=3.5\cdot10^{-5}\,M_{\odot}\,yr^{-1}$ and is consistent with the value deduced from the observations (@Prieto08 estimated a mass loss rate $\gtrsim10^{-5}\,M_{\odot}\,yr^{-1}$ for the progenitor of SN2008S).
In the last two columns of Tab. \[tabl1\] we report the total ejected mass, evaluated assuming a mass cut of $\sim 1.36\,M_{\odot}$ [e.g. @kitaura06], and the maximum distance travelled by the CSM lost during the TP-SAGB evolution, calculated assuming an average wind velocity of 10 km s$^{-1}$.
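A minimal sketch of the parametric recipe above (our own code; $M_{CH}=1.37\,M_{\odot}$, the quoted $\overline{\dot M}_{core}$, $\zeta=70$, a $1.36\,M_{\odot}$ mass cut and a $10$ km s$^{-1}$ wind are the stated assumptions, and the example progenitor masses are taken from one row of Tab. \[tabl1\]):

```python
# Sketch of the parametric estimates behind Tab. [tabl1] (assumed inputs shown).
M_CH       = 1.37       # Chandrasekhar-like threshold [M_sun]
MDOT_CORE  = 5e-7       # averaged core growth rate [M_sun/yr]
ZETA       = 70.0       # Mdot_loss / Mdot_core
V_WIND_KMS = 10.0       # average wind velocity [km/s]
KM_PER_AU  = 1.496e8
SEC_PER_YR = 3.156e7

def explosion_config(M_star_postCB, M_core_postCB):
    """Return (Delta t_CH [yr], M_ej [M_sun], D_CSM_max [AU])."""
    dt = (M_CH - M_core_postCB) / MDOT_CORE        # time for core to reach M_CH
    dM_loss = ZETA * MDOT_CORE * dt                # mass lost in the meantime
    M_env = M_star_postCB - M_CH - dM_loss         # envelope mass at explosion
    M_ej = M_env + (M_CH - 1.36)                   # everything above the mass cut
    D = V_WIND_KMS * SEC_PER_YR * dt / KM_PER_AU   # wind travel distance
    return dt, M_ej, D

# Example: a model with M_star^postCB = 9.84, M_core^postCB = 1.31
dt, M_ej, D = explosion_config(9.84, 1.31)
```

The resulting $\Delta t_{CH}\sim1.2\cdot10^{5}\,yr$, $M_{ej}\sim4.3\,M_{\odot}$ and $D_{CSM}^{max}\sim2.5\cdot10^{5}\,A.U.$ track the corresponding tabulated row to within the rounding of the input masses.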
Although this parametric approach to determining the structure of super-AGB stars is simplistic, it is completely consistent with the approach used to determine the fraction of super-AGB stars evolving into EC-SNe by P06 and S07, whose models are the basis for our calculation. In addition it should be noted that more sophisticated synthetic models for super-AGB stars cannot presently reach a much higher precision, because no stellar models describing the TP-SAGB evolutionary phase are available at the moment.
------------------- ---------------------- --------------------- ----------------- ------------- ------------------
M$_{\star}^{ini}$ M$_{\star}^{postCB}$ M$_{core}^{postCB}$ $\Delta t_{CH}$ M$_{ej}$ D$_{CSM}^{max}$
$[M_{\odot}]$             $[M_{\odot}]$         $[M_{\odot}]$     $[10^{5}\,yr]$  $[M_{\odot}]$  $[10^{5}\,A.U.]$
9.48 9.36 1.24 3.02 $\sim$ 0.01 6.4
9.55 9.43 1.26 2.24 0.3 6.0
9.75 9.63 1.27 2.04 1.1 4.3
9.95 9.84 1.31 1.25 4.1 2.6
10.15 10.04 1.35 0.46 7.0 1.0
10.25 10.14 1.37 0.07 8.5 0.1
9.99 9.77 1.25 2.39 $\sim$ 0.01 5.0
10.01 9.82 1.26 2.23 0.3 4.7
10.15 9.97 1.28 1.77 2.4 3.7
10.35 10.17 1.31 1.11 4.9 2.4
10.55 10.38 1.35 0.46 7.4 1.0
10.65 10.49 1.36 0.10 8.8 0.2
10.44 10.15 1.24 2.52 $\sim$ 0.01 5.3
10.46 10.16 1.25 2.47 0.2 5.2
10.55 10.25 1.27 2.02 1.8 4.3
10.65 10.35 1.29 1.52 3.7 3.2
10.75 10.45 1.32 1.01 5.5 2.1
10.85 10.55 1.34 0.51 7.4 1.1
10.92 10.63 1.36 0.10 8.9 0.2
------------------- ---------------------- --------------------- ----------------- ------------- ------------------
: \[tabl1\]
Discussion and Conclusions
==========================
The two events SN2008ha and SN2008S find a reasonable interpretation in the aforementioned scenario, and the progenitor mass to be associated with these SNe can be determined by considering the best “global” match between the features of the super-AGB models and the observed properties.
Assuming an initial metallicity $Z$ for the progenitors from $\sim 0.008$ to $\sim 0.02$ (see e.g. V09; B09; @foley09, for details on the metallicity determination), one obtains that a super-AGB star with initial mass slightly above $M_{N}$ has M$_{core}^{postCB} \lesssim 1.25$-$1.26\,M_{\odot}$ (cf. second row in the sets of models in Tab. \[tabl1\]), while a super-AGB star with initial mass $\sim (M_{N}$+$0.5)\,M_{\odot}$ has M$_{core}^{postCB} \sim 1.34$-$1.35\,M_{\odot}$ (cf. the penultimate row in the sets of models in Tab. \[tabl1\]). As a consequence the time $\Delta t_{CH}$ necessary for the H-free core to reach $M_{CH}$ is $\gtrsim 2.2$-$2.5\cdot10^{5}\,yr$ in the former case, and $\sim 5\cdot10^{4}\,yr$ in the latter one. This difference in the time elapsing between the beginning of the TP-SAGB phase and the EC-SN event in the two cases is reflected in the configuration at the collapse. The super-AGB star with initial mass slightly above $M_{N}$ has time to expel almost all the envelope and, consequently, gives rise to a faint stripped-envelope SN characterised by a non H-rich[^1] ejecta of $\lesssim 0.2$-$0.3\,M_{\odot}$ with no signatures of prompt CSM interaction, in agreement with the observations of SN2008ha (M$_{ej}$ in the range $0.1$-$0.5\,M_{\odot}$, e.g. V09; @foley09). Assuming an average wind velocity of $10$ km s$^{-1}$, $90$% of the total expelled mass can be at a radial distance $\gtrsim 5 \cdot 10^{4}\,A.U.$ when the EC-SN event takes place. The mean density of the CSM is expected to be $\lesssim5$ cm$^{-3}$ (this value is likely to be even lower due to a decreased mass loss rate near the end of the TP-SAGB phase, when the mass of the envelope is significantly reduced), which could be sufficiently low not to give rise to significant interaction. The relatively low X-ray emission [L$_{X} < 5 \cdot 10^{39}$ erg s$^{-1}$; @foley09] seems to support this idea, because the CSM can be an efficient X-ray radiator only at much higher densities [$\sim 10^5$-$10^7$ cm$^{-3}$; @Cheva01].
On the contrary, the super-AGB star with initial mass $\sim (M_{N}$+$0.5)\,M_{\odot}$ loses $\sim 1.6$-$1.8\,M_{\odot}$ in $\sim 5\cdot10^{4}\,yr$ — consistent with the inferred duration of the so-called dust-enshrouded phase for SN2008S [upper limit equal to $\sim 6\cdot 10^{4}\,yr$; @thompson08] — and, besides maintaining a massive ($\sim 7.4\,M_{\odot}$) envelope at the collapse, could be embedded within a thicker circumstellar envelope (mean density $\sim90$ cm$^{-3}$). Observations of SN2008S indicate the formation of a detached shell with an inner radius of $\sim 90\, A.U.$ and outer radius of $\sim 450\,A.U.$ containing $\sim 0.08\,M_{\odot}$ of gas (van Loon, private communication). We could reproduce such a shell by increasing the mass loss rate by a factor of $\sim 15$ above the average value for a relatively short period of $\sim 150\, yr$ as a consequence of a He-shell flash episode [see @Matt07 for details]. In addition, we find that $\sim 95$% of the total expelled mass in the CSM is beyond the aforementioned detached shell, and these findings are in agreement with the presence of dust at a radial distance greater than $\sim 2 \cdot 10^{3}\,A.U.$, as inferred from the MIR emission of SN2008S.
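The quoted mean densities can be recovered by spreading the expelled mass over a sphere whose radius is the wind travel distance. A rough sketch (our own; the masses, time-scales and mean molecular weight $\mu\sim1.3$ are illustrative inputs based on the numbers quoted above):

```python
import math

# Rough mean CSM number densities (illustrative): expelled mass spread over a
# sphere of radius v_wind * t, for a 10 km/s wind and mu ~ 1.3 (He-enhanced gas).
M_SUN_G   = 1.989e33
AU_CM     = 1.496e13
M_H_G     = 1.6726e-24
MU        = 1.3
AU_PER_YR = 10.0 * 3.156e7 * 1e5 / AU_CM     # 10 km/s expressed in AU/yr (~2.11)

def mean_density(M_lost_msun, t_yr):
    """Mean number density [cm^-3] of M_lost spread within radius v_wind * t."""
    R_cm = AU_PER_YR * t_yr * AU_CM
    volume = 4.0/3.0 * math.pi * R_cm**3
    return M_lost_msun * M_SUN_G / (volume * MU * M_H_G)

n_2008S  = mean_density(1.7, 5e4)    # ~1.6-1.8 M_sun lost over ~5e4 yr
n_2008ha = mean_density(7.8, 2.4e5)  # nearly the whole envelope over ~2.4e5 yr
```

These inputs give $\sim 95$ cm$^{-3}$ for the SN2008S-like case and a few cm$^{-3}$ for the SN2008ha-like case, in line with the estimates quoted in the text.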
Thus the current understanding of super-AGB stellar evolution is quantitatively consistent with the available data on these two recent faint transients, which may be explained in terms of EC-SNe from super-AGB progenitors having a different configuration at the collapse, without resorting to “exotic” scenarios that are not free from uncertainties. As for the “special” eruption of a relatively low-mass LBV proposed to explain the features of SN2008S [@smith08], in addition to the difficulty of reconciling the ejecta velocity $\lesssim 3000$ km s$^{-1}$ with a stellar eruption (B09), it is difficult to explain the fact that the slope of the late-time light curve of SN2008S [but also that of the similar event NGC300-OT; @bond09] is surprisingly similar to that expected in a SN explosion when the main mechanism powering the SN luminosity is the radioactive decay of $^{56}$Co into $^{56}$Fe. As for the AIC mechanism invoked for SN2008ha, the main problems concern the high velocities ($\sim 0.1$-$0.2\,c$) predicted by the “standard” AIC model (involving a single degenerate binary system) but not observed in the ejecta, and the impossibility within that model of synthesising the observed intermediate-mass nuclei. The so-called “enshrouded” AIC model involving the merging of two WDs in a binary system [@metzger09] might be somewhat less problematic. However the ejecta velocity, the amount of $^{56}$Ni and the production of intermediate-mass elements are still quantitatively poorly defined, and the role of the possible interaction between the disk wind and the outgoing SN shock has to be explored.
The weakness of the explosion and the small amount of $^{56}$Ni synthesized make EC-SNe an obvious explanation for low-luminosity core-collapse events with unusual properties that are related to the pre-explosion mass loss episodes of their super-AGB progenitors and/or to the possible ensuing ejecta/CSM interaction. However, it has been suggested that the EC channel may also account for the properties of some relatively “normal” type II SNe [e.g. @CU00; @kitaura06], characterised by low luminosity, a small amount of ejected $^{56}$Ni, extended plateaus (implying envelope masses of several $M_{\odot}$) and slow expansion velocities [e.g. @pastorello09]. To date, only for two objects of this class [SN2005cs and SN2008bk; @mau06; @li06; @matt08] has clear evidence been found for low mass progenitors on pre-explosion images, but the claim that they are super-AGBs is strongly questioned [e.g. @eldridge07]. Thus, it remains to be seen what fraction (if any) of low luminosity type II SNe are EC-SNe and which others, instead, are more usual iron CC-SNe that experience less energetic explosions than normal [as, for example, if some of them are sufficiently massive to undergo fallback onto the collapsed remnant; see e.g. @Zampieri03].
The wide variety of displays expected for EC-SNe may also be of interest for understanding the two unusual events SN2005E and SN2005cz [@kawa09; @perets09]. Indeed, this scenario can account for many of the observed characteristics of both SNe (namely the low explosion energy, the very low ejected mass and the ejection of a small amount of $^{56}$Ni), but the possibility of reproducing all the observed properties (such as the spectroscopic features and, in particular, the alleged Ca-richness) deserves further investigation.
We are aware that large uncertainties, of both theoretical and observational nature, still affect the EC-SN mechanism in super-AGB stars. Nevertheless, we believe that the scenario proposed herein is promising for understanding an increasing number of underenergetic and unusual SNe. Only a combined effort will settle the issue. On one side, we need more accurate observational constraints on the production of intermediate-mass nuclei (specifically C, O and all the $\alpha$-elements in general) in low-luminosity SN events. On the other, more refined studies of super-AGB stellar evolution fully describing the TP-SAGB phase, together with 3-D simulations examining in detail the nucleosynthesis processes in EC-SNe, are desirable.
M.L.P. acknowledges the support of the [*“Padova Città delle Stelle”*]{} prize by the City of Padua. We also thank the referee for her/his valuable suggestions to improve our manuscript.
Berger, E., et al. 2009, ApJ, 699, 1850
Bond, H. E., Bonanos, A. Z., Humphreys, R. M., Berto Monard, L. A. G., Prieto, J. L., Walter, F. M. 2009, ApJ, 695, 154L
Botticella, M. T., et al. 2009, MNRAS, in press (arXiv:0903.1286v3) (B09)
Chevalier, R. A., & Fransson, C. 2001, in “Supernovae and Gamma-Ray Bursts”, Ed. Weiler (arXiv:astro-ph/01100601v1)
Chugai, N. N., & Utrobin, V. P. 2000, A&A, 354, 557
Eldridge, J. J., Mattila, S., Smartt, S. J. 2007, MNRAS, 376, L25
Filippenko, A. V. 1997, ARA&A, 35, 309
Foley, R. J., et al. 2009, AJ, 138, 376
Hamuy, M. 2003, in Review for “Core Collapse of Massive Stars”, Ed. Fryer, Kluwer, Dordrecht (arXiv:astro-ph/0301006v1)
Heger, A., Fryer, C. L., Woosley, S. E., Langer, N., Hartmann, D. H. 2003, ApJ, 591, 288
Herwig, F. 2005, ARA&A, 43, 435
Hillebrandt, W., Nomoto, K., Wolff, R. G. 1984, A&A, 133, 175
Hillebrandt, W., & Niemeyer, J. 2000, ARA&A, 38, 191
Jha, S., Branch, D., Chornock, R., Foley, R. J., Li, W., Swift, B., Casebeer, D., Filippenko, A. V. 2006, AJ, 132, 189
Kawabata, K. S., Maeda, K., Nomoto, K., Taubenberger, S., Tanaka, M., Hattori, T., & Itagaki, K. 2009, Nature, submitted (arXiv:0906.2811v1)
Kitaura, F. S., Janka, H.-Th., & Hillebrandt, W. 2006, A&A, 450, 345
Kulkarni, S. R., et al. 2007, Nature, 447, 458
Li, W., et al. 2003, PASP, 115, 453
Li, W., et al. 2006, ApJ, 641, 1060
Mattila, S., Smartt, S. J., Eldridge, J. J., Maund, J. R., Crockett, R. M., Danziger, I. J. 2008, ApJ, 688, L91
Mattsson, L., Höfner, S., & Herwig, F. 2007, A&A, 470, 339
Maund, J. R., Smartt, S. J., Danziger, I. J. 2005, MNRAS, 364, L33
Metzger, B. D., Piro, A. L., & Quataert, E. 2009, MNRAS, 396, 1659
Miyaji, S., Nomoto, K., Yokoi, K., Sugimoto, D. 1980, Publ. Astron. Soc. Japan, 32, 303
Nomoto, K. 1984, ApJ, 277, 791
Pastorello, A., et al. 2007, Nature, 449, 1
Pastorello, A., et al. 2009, MNRAS, 394, 2266
Perets, H. B., et al. 2009, Nature, submitted (arXiv:0906.2003v1)
Phillips, M. M., et al. 2007, PASP, 119, 360
Poelarends, A. J. T., Izzard, R. G., Herwig, F., Langer, N., Heger, A. 2006, Mem. S.A.It., 77, 846
Poelarends, A. J. T., Herwig, F., Langer, N., Heger, A. 2008, ApJ, 675, 614
Prieto, J. L., et al. 2008, ApJ, 681, L9
Pumo, M. L. 2006, PhD thesis, University of Catania, Italy (P06)
Pumo, M. L. 2007, GidA, 33, 26
Pumo, M. L., D’Antona, F., Ventura, P. 2008, ApJ, 672, L25
Ritossa, C., Garc[í]{}a-Berro, E., & Iben, I. J. 1999, ApJ, 515, 381
Siess, L. 2007, A&A, 476, 893 (S07)
Siess, L., & Pumo, M. L. 2006, Mem. S.A.It., 77, 822 (SP06)
Smartt, S. J., Eldridge, J. J., Crockett, R. M., Maund, J. R. 2009, MNRAS, 395, 1409
Smith, N., et al. 2009, ApJ, 697, L49
Stanishev, V., Pastorello, A., Pursimo, T. 2008, CBET, 1236, 2
Steele, T. N., Silverman, J. M., Ganeshalingam, M., Lee, N., Li, W., Filippenko, A. V. 2008, CBET, 1275, 1
Thompson, T. A., Prieto, J. L., Stanek, K. Z., Kistler, M. D., Beacom, J. F., Kochanek, C. S. 2008, preprint (arXiv:0809.0510)
Turatto, M. 2003, LNP, 598, 21
Turatto, M., Benetti, S., & Pastorello, A. 2007, AIPC, 937, 187
Valenti, S., et al. 2009, Nature, 459, 674 (V09)
Wanajo, S., Nomoto, K., Janka, H.-Th., Kitaura, F. S., & Müller, B. 2009, ApJ, 695, 208
Woosley, S. E., & Weaver, T. A. 1986, ARA&A, 24, 205
Woosley, S. E., Heger, A., Weaver, T. A. 2002, RvMP, 74, 1015
Zampieri, L., Pastorello, A., Turatto, M., Cappellaro, E., Benetti, S., Altavilla, G., Mazzali, P., & Hamuy, M. 2003, MNRAS, 338, 711
[^1]: We do not have accurate quantitative information about the chemical composition of the ejecta (except for the $^{56}$Ni) to compare with observations of SN2008ha, due to uncertainties of both observational and theoretical nature. However, we speculate that the composition could be non H-rich. In fact, for this model the ejecta are composed of the H-free stellar layer between the mass cut and $M_{CH}$ (representing $\sim$ 5-15% by mass of all the ejected material) and of the remaining envelope mass at the explosion, whose “initial” H-rich composition can be deeply altered by the second dredge-up phenomenon, the so-called Hot Bottom Burning, and the third dredge-up episodes.
---
abstract: 'In this note, we provide critical commentary on two articles that cast doubt on the validity and implications of Birnbaum’s theorem: Evans (2013) and Mayo (2014). In our view, the proof is correct and the consequences of the theorem are alive and well.'
author:
- 'Víctor Peña, James O. Berger'
bibliography:
- 'objectionslp.bib'
title: 'A note on recent criticisms to Birnbaum’s theorem'
---
Introduction {#sec:introduction}
============
Birnbaum’s theorem [@birnbaum1962foundations] states that two statistical principles that are intuitively reasonable, the weak conditionality principle and the sufficiency principle, imply the likelihood principle, which is violated by statistical procedures such as $p$-values or reference priors. Ever since the result was published, there has been a lively discussion on its validity and implications. The monograph [@berger1988likelihood] contains a defense of the likelihood principle and responses to criticisms up to the date it was published, but the flow of articles has not stopped in the fields of statistics and philosophy of science (for example, [@helland1995simple], [@bjornstad1996generalization], [@robins2000conditioning], [@sweeting2001coverage], [@wechsler2008birnbaum], [@grossman2011likelihood], [@gandenberger2014new]). Somewhat recently, the articles [@evans2013does] and [@mayo2014] have cast doubt on the validity and implications of Birnbaum’s theorem, and the goal of this note is to review and discuss their content.\
First, we introduce our basic notation and definitions for statistical experiment, inference bases, and informative inference:
- **Statistical experiment:** A triplet $E = \{\mathcal{X}_E, \Theta_E, p_{\theta,E}\}$, where $\mathcal{X}_E$ is the sample space of the experiment, $\Theta_E$ is the parameter space, and $p_{\theta,E}$ is the sampling distribution of $E$ for $\theta \in \Theta_E$. As is usual in the literature, we avoid measure-theoretic details by considering experiments with discrete support (see Section 3.4 in [@berger1988likelihood] for generalizations).
- **Inference base:** A tuple $(E,x)$ where $E$ is a statistical experiment and $x \in \mathcal{X}_E$ is an outcome from $E$.
- **Informative inference:** $\mathbf{Ev}(E,x)$ is the informative inference (or conclusion) made by an agent given $(E,x)$. If $\mathcal{I}$ is the space of inference bases, one can think of $\mathbf{Ev}$ as a function from $\mathcal{I}$ to a set $\mathcal{D}$ of possible inferences.
- **Inferentially equivalent:** Two inference bases $(E,x)$ and $(E',x')$ are inferentially equivalent if $\mathbf{Ev}(E,x) = \mathbf{Ev}(E',x')$ (the same inferences are made given $(E,x)$ or $(E',x')$).
Given the definitions above, we define the statistical principles at stake: the weak conditionality principle (WCP), ancillarity principle (AP), sufficiency principle (SP), and the likelihood principle (LP):
- **Weak Conditionality Principle (WCP):** Consider the statistical experiments $E_1 = (\mathcal{X}_{E_1}, \Theta, p_{\theta,E_1})$, $E_2 = (\mathcal{X}_{E_2}, \Theta, p_{\theta,E_2})$ and a 50-50 mixture between $E_1$ and $E_2$, which we denote $E_{\mathrm{mix}}$. Conceptually, one can imagine that a fair coin is tossed: if it lands heads, $E_1$ is performed; if it lands tails, $E_2$ is performed. Formally, the outcome of the mixture experiment will be a pair $(j,x),$ where $j$ indicates the experiment that was performed ($j = 1$ if $E_1$ was performed, and $j=2$ if $E_2$ was performed instead), and $x \in \mathcal{X}_{E_1} \cup \mathcal{X}_{E_2}$ is the outcome of the experiment that was performed. WCP states that the informative inference given $(E_{\mathrm{mix}}, (j, x))$ from the mixture experiment should be equal to the informative inference given the inference base of the component experiment $(E_j, x)$; that is, $\mathbf{Ev}(E_{\mathrm{mix}}, (j, x)) = \mathbf{Ev}(E_j, x)$.
- **Ancillarity Principle (AP):** Let $U$ be an ancillary statistic for $\theta$ (the distribution of $U$ does not depend on $\theta$) for which the value $u$ is observed. Then, $\mathbf{Ev}(E, (u,x)) = \mathbf{Ev}(E_{\mid U=u}, x)$, where the sampling distribution associated with $E_{\mid U=u}$ is $p_{\theta, E \mid U=u}(\cdot)$ (the conditional probability mass function of $x$ given $U=u$). In words, the ancillarity principle states that conditioning on an ancillary statistic should not change our informative inference. This principle is also known as the (strong) conditionality principle. Clearly, the selection of the component in WCP is an example of an ancillary statistic, so AP implies WCP.
- **Sufficiency Principle (SP):** If $(E,x)$ and $(E,x')$ are such that $T(x) = T(x')$ for a sufficient statistic $T$ for $\theta$, then $\mathbf{Ev}(E, x) = \mathbf{Ev}(E,x')$.
- **Likelihood Principle (LP):** If $(E,x)$ and $(E',x')$ are such that $p_{\theta,E}(x) = c \, p_{\theta, E'}(x')$ for $c > 0$ that does not depend on $\theta$, then $\mathbf{Ev}(E,x) = \mathbf{Ev}(E',x')$.
In our framework, $\mathbf{Ev}$ can be any function from the space of inference bases to inferences, and the mathematical role of statistical principles is to restrict the set of functions that one is allowed to use. As explained in more detail in Section \[sec:evans\], Evans’ objections arise because a map $\mathbf{Ev}$ is not introduced. In turn, in Section \[sec:mayo\] we show that the definition of the sufficiency principle in [@mayo2014] is different from SP (as defined in the paragraph above) and blocks Birnbaum’s proof.
Evans’ objections {#sec:evans}
=================
Evans defines the statistical principles as the following set relations on $\mathcal{I} \times \mathcal{I}$:[^1]
- $C$: $(E,x) \sim_{C} (E',x')$ if and only if $E = E_{\mathrm{mix}}$, $x = (j,x_j)$, $E' = E_j$, and $x' = x_j$ as in the definition of WCP in Section \[sec:introduction\] (or with roles of $(E,x)$ and $(E',x')$ reversed).
- $A$: $(E,x) \sim_{A} (E',x')$ if and only if $x = (u,x')$ and $E' = E_{|U=u}$, where $U = u$ and $E_{|U=u}$ are as defined for AP in Section \[sec:introduction\] (or with roles of $(E,x)$ and $(E',x')$ reversed).
- $S$: $(E,x) \sim_{S} (E',x')$ if and only if there exists a sufficient statistic $T$ for $\theta$ such that $T(x) = T(x')$.
- $L$: $(E,x) \sim_{L} (E',x')$ if and only if $p_{\theta, E}(x) = c \, p_{\theta, E'} (x')$ for a constant $c > 0$ which does not depend on $\theta$.
This approach is different from the one taken in Section \[sec:introduction\] and the one in [@birnbaum1962foundations] because $\mathbf{Ev}$ is not defined or used at all in the definitions. Nonetheless, the set relations are very similar to the principles defined in Section \[sec:introduction\]: they are of the form $(E,x) \sim_{P} (E',x')$ if and only if $\mathbf{Ev}(E,x) = \mathbf{Ev}(E',x')$ by an application of a statistical principle $P$. According to Evans, “A basic step missing in Birnbaum (1962) was to formulate the principles as relations on the set $\mathcal{I}$ of all model and data combinations.” But the definition of a function $\mathbf{Ev}$ automatically induces an equivalence relation on $\mathcal{I} \times \mathcal{I}$ (the kernel of $\mathbf{Ev}$): $(E,x) \sim (E',x')$ if and only if $\mathbf{Ev}(E,x) = \mathbf{Ev}(E',x')$. If we accept WCP and SP as defined in Section \[sec:introduction\], the equivalence relation on $\mathcal{I} \times \mathcal{I}$ induced by $\mathbf{Ev}$ makes inference bases with proportional likelihoods equivalent, because they map to the same value.\
Evans shows that statistical principles defined as set relations need not be equivalence relations (for instance, $A$ and $C$ as defined above are not). Even if two statistical principles are equivalence relations, their union may not be because it could fail to be transitive: if $P_1$ and $P_2$ are set relations formalizing statistical principles, it is possible that the inference bases $(E_1, x_1)$ and $(E_2, x_2)$ are inferentially equivalent with respect to $P_1$ and $(E_2, x_2)$ and $(E_3, x_3)$ are inferentially equivalent according to $P_2$ but $(E_1, x_1)$ and $(E_3, x_3)$ are not inferentially equivalent according to either $P_1$ or $P_2$ alone.\
Birnbaum’s argument is a neat (and in our view, transparent) illustration of this phenomenon: the inference bases with proportional likelihoods are shown to be equivalent by a chain of applications of WCP and SP, but they are not equivalent according to either WCP or SP individually. This implies that $L \neq S \cup C$. The correct result is that $L$ is equal to the smallest equivalence relation generated by $S \cup C$, and Evans argues that extending statistical principles that are originally defined as set relations to equivalence relations requires further justification.\
Here is a simple one: if we define the sufficiency principle and the weak conditionality principle as the set relations $S$ and $C$ and introduce a function $\mathbf{Ev}$ with the minimal requirement that $\mathbf{Ev}(E,x) = \mathbf{Ev}(E',x')$ whenever $(E,x) \sim_{C} (E',x')$ or $(E,x) \sim_{S} (E',x')$, the equivalence relation on $\mathcal{I} \times \mathcal{I}$ generated by $\mathbf{Ev}$ is precisely the smallest equivalence relation generated by $S \cup C$, which in this case is $L$. In general, if we define statistical principles $P_1, P_2, \, ... \, , P_k$ as set relations on $\mathcal{I} \times \mathcal{I}$ and introduce $\mathbf{Ev}$ with the minimal requirement that $\mathbf{Ev}(E,x) = \mathbf{Ev}(E',x')$ whenever $(E,x) \sim_{P_i} (E',x')$ for some $i \in \{1,2, \, ... \, k\}$, the equivalence relation on $\mathcal{I} \times \mathcal{I}$ induced by $\mathbf{Ev}$ is equal to the smallest equivalence relation generated by $P_1, P_2, \, ... \, , P_k$. Defining statistical principles as set relations on $\mathcal{I} \times \mathcal{I}$ and introducing $\mathbf{Ev}$ as we just did is equivalent to stating the definitions in terms of $\mathbf{Ev}$ in the first place as in Section \[sec:introduction\].\
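The mechanics of the equivalence closure can be made concrete with a small sketch (ours, not from any of the papers discussed): treating two principles as edge sets over hypothetical inference bases labelled 1, 2, and 3, the closure of their union equates bases that neither principle equates on its own.

```python
# Toy illustration (not from the paper): the equivalence closure of a union
# of relations can equate elements that neither relation equates directly.
from itertools import combinations

def equivalence_closure(elements, *relations):
    """Smallest equivalence relation on `elements` containing every pair
    in the given relations, computed with union-find."""
    parent = {e: e for e in elements}

    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]  # path compression
            e = parent[e]
        return e

    for rel in relations:
        for a, b in rel:
            parent[find(a)] = find(b)  # union the two classes

    # Return the closure as a set of unordered equivalent pairs.
    return {(a, b) for a, b in combinations(elements, 2) if find(a) == find(b)}

# Hypothetical inference bases 1, 2, 3: "S" equates (1,2), "C" equates (2,3).
S = {(1, 2)}
C = {(2, 3)}
closure = equivalence_closure([1, 2, 3], S, C)

# The pair (1,3) is in the closure although it is in neither S nor C alone,
# mirroring how chains of SP and WCP applications yield LP equivalences.
assert (1, 3) in closure and (1, 3) not in S | C
```

The union-find structure is exactly the "chain of applications" described above: each union merges two classes of inference bases that some principle declares inferentially equivalent.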
Since the notation $\mathbf{Ev}$ is very explicit in [@birnbaum1962foundations], we believe that the extension of the principles, when defined as set relations, to equivalence relations was already implied by the fact that $\mathbf{Ev}$ is a function. But even within a framework where $\mathbf{Ev}$ is not defined, the smallest equivalence relation generated by a collection of principles has a straightforward interpretation: its elements are (exclusively) the result of a chain of applications of the principles we wish to respect. Rejecting the extension implies rejecting the equivalence of inference bases that can be shown to be equivalent by a number of applications of our principles. We believe, then, that the extension is also justified if $\mathbf{Ev}$ is not introduced.\
Now we turn to an example in [@evans2013does] that shows that $A$ is not transitive and illustrates some of the issues commented on in the paragraphs above.
\[ex:evans\] ([@evans2013does], pg. 2651) Let $\mathcal{X}_E = \{1,2\}\times\{1,2\}$, $\Theta_E = \{1,2\}$, with $p_{\theta,E}$ given in Table \[tab:unconditional\]. Both $U(x_1, x_2)= x_1$ and $V(x_1,x_2)=x_2$ are ancillary, and the conditional models upon observing $U = 1$ and $V = 1$ are given in Tables \[tab:condu\] and \[tab:condv\]. This example shows that $A$ is not transitive: $(E,(x_1,x_2)) \sim_{A} (E_{\mid U}, x_2)$ and $(E,(x_1,x_2)) \sim_{A} (E_{\mid V}, x_1)$, but $(E_{\mid U}, x_2) \not \sim_A (E_{\mid V}, x_1)$ because there is no ancillary statistic linking the two conditional models. However, using the definitions in Section \[sec:introduction\] (or equivalently, using $A$ and introducing $\mathbf{Ev}$ with the property that $\mathbf{Ev}(E,x) = \mathbf{Ev}(E',x')$ whenever $(E,x) \sim_{A} (E',x')$), we have $\mathbf{Ev}(E,(x_1,x_2)) = \mathbf{Ev}(E_{\mid U}, x_2)$ and $\mathbf{Ev}(E,(x_1,x_2)) = \mathbf{Ev}(E_{\mid V}, x_1)$, so $ \mathbf{Ev}(E_{\mid U}, x_2) = \mathbf{Ev}(E_{\mid V}, x_1)$.
$(x_1,x_2)$ $(1,1)$ $(1,2)$ $(2,1)$ $(2,2)$
---------------------------- --------- --------- --------- ---------
$f_{E,\theta=1}(x_1, x_2)$ 1/6 1/6 2/6 2/6
$f_{E,\theta=2}(x_1,x_2)$ 1/12 3/12 5/12 3/12
: Unconditional model (rows: sampling distributions for $\theta \in \{1,2\}$)[]{data-label="tab:unconditional"}
$(x_1,x_2)$ $(1,1)$ $(1,2)$ $(2,1)$ $(2,2)$
------------------------------------- --------- --------- --------- ---------
$f_{E,\theta=1}(x_1, x_2 \mid U=1)$ 1/2 1/2 0 0
$f_{E,\theta=2}(x_1,x_2 \mid U=1)$ 1/4 3/4 0 0
: Conditional model when $U =1$ (rows: sampling distributions for $\theta \in \{1,2\}$)[]{data-label="tab:condu"}
$(x_1,x_2)$ $(1,1)$ $(1,2)$ $(2,1)$ $(2,2)$
------------------------------------- --------- --------- --------- ---------
$f_{E,\theta=1}(x_1, x_2 \mid V=1)$ 1/3 0 2/3 0
$f_{E,\theta=2}(x_1,x_2 \mid V=1)$ 1/6 0 5/6 0
: Conditional model when $V =1$ (rows: sampling distributions for $\theta \in \{1,2\}$)[]{data-label="tab:condv"}
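The arithmetic behind Tables \[tab:unconditional\]-\[tab:condv\] can be verified with a short script (our own check, written with exact rational arithmetic; the helper names are ours):

```python
# Numerical check (ours, not from the paper) of Tables 1-3: U = x1 and
# V = x2 are ancillary, and conditioning on U=1 or V=1 yields the tables.
from fractions import Fraction as F

# Unconditional model p_theta(x1, x2) from the first table.
p = {1: {(1, 1): F(1, 6),  (1, 2): F(1, 6),  (2, 1): F(2, 6),  (2, 2): F(2, 6)},
     2: {(1, 1): F(1, 12), (1, 2): F(3, 12), (2, 1): F(5, 12), (2, 2): F(3, 12)}}

def marginal(theta, coord, value):
    """P_theta(x[coord] = value); coord 0 is U = x1, coord 1 is V = x2."""
    return sum(pr for x, pr in p[theta].items() if x[coord] == value)

# Ancillarity: the marginals of U and V do not depend on theta.
for value in (1, 2):
    assert marginal(1, 0, value) == marginal(2, 0, value)  # U ancillary
    assert marginal(1, 1, value) == marginal(2, 1, value)  # V ancillary

def conditional(theta, coord, value):
    m = marginal(theta, coord, value)
    return {x: pr / m for x, pr in p[theta].items() if x[coord] == value}

# Conditional models match the U=1 and V=1 tables.
assert conditional(1, 0, 1) == {(1, 1): F(1, 2), (1, 2): F(1, 2)}
assert conditional(2, 0, 1) == {(1, 1): F(1, 4), (1, 2): F(3, 4)}
assert conditional(1, 1, 1) == {(1, 1): F(1, 3), (2, 1): F(2, 3)}
assert conditional(2, 1, 1) == {(1, 1): F(1, 6), (2, 1): F(5, 6)}

# At the observed point (1,1), the likelihood ratio of theta=1 to theta=2
# equals 2 whether we condition on U=1 or on V=1.
assert conditional(1, 0, 1)[(1, 1)] / conditional(2, 0, 1)[(1, 1)] == 2
assert conditional(1, 1, 1)[(1, 1)] / conditional(2, 1, 1)[(1, 1)] == 2
```

The final two assertions anticipate the point made below: the likelihoods agree under either conditioning, while frequentist error assessments do not.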
Quoting [@evans2013does]: “Saying that such models \[the conditional models in Tables \[tab:condu\], \[tab:condv\]\] contain an equivalent amount of statistical information is clearly a substantial generalization of \[$A$\]. To measure the accuracy of this estimate we can compute the conditional probabilities based on the two inference bases, namely, $$\mathbb{P}_{\theta=1}(\widehat{\theta} = 1 \mid U = 1) = 1/2, \, \, \, \mathbb{P}_{\theta=2}(\widehat{\theta} = 1 \mid V = 1) = 3/4$$ and so the accuracy of $\widehat{\theta}$ is quite different depending on whether we \[condition on $U$ or $V$\]. It seems unlikely that we would interpret these inference bases as containing an equivalent amount of information in a frequentist formulation of statistics.”\
Concluding that the inference bases are equivalent with respect to $A$ is a consequence of introducing $\mathbf{Ev}$ with the property that $\mathbf{Ev}(E,x) = \mathbf{Ev}(E',x')$ whenever $(E,x) \sim_A (E',x')$. Also, the likelihood ratio of $\theta = 1$ to $\theta = 2$ equals 2 if we condition on either $U$ or $V$, which is unsurprising because, as Evans proves, AP equals LP. We agree with Evans that this example shows that accepting AP can be problematic for frequentist statisticians: there are two ancillary statistics we can condition on, there is no apparent reason to prefer one over the other and, unfortunately, standard errors and $p$-values depend on the choice of ancillary. We return to this point in Section \[sec:ancillaries\].\
After showing that AP is equivalent to LP, Evans concludes that SP is redundant in Birnbaum’s argument. Then, Example \[ex:evans\] leads him to cast doubt on the impact of Birnbaum’s result because he believes that many statisticians would not accept AP (or equivalently, $A$ and the equivalences generated by the principle). But SP is certainly not redundant if only WCP is assumed (recall that WCP only requires equivalence of 50-50 mixtures), and WCP and SP also imply LP. In Example \[ex:evans\], the conditional experiments are not equivalent according to WCP, and the smallest equivalence relation containing $C$ would only add cases where mixture experiments with different components (or different probabilities of performing them) were considered, but the same component experiment was performed and the same result was obtained. Finally, we agree with Evans that accepting statistical principles may induce unexpected equivalences between inference bases, which is precisely what makes Birnbaum’s result surprising and relevant.
Mayo’s objections {#sec:mayo}
=================
In our view, the objections to Birnbaum’s proof in [@mayo2014] stem from using a definition of the sufficiency principle that is different from the one in Section \[sec:introduction\]. We believe that introducing notation that makes an explicit distinction between the output of a method and the inference made by the agent using it is helpful for understanding the arguments:
- $\mathbf{M}(E, x)$: Result of applying a method $\mathbf{M}$ to the inference base $(E,x)$.
- $\mathbf{Ev}(E,x)$: Inference made by an agent given $(E,x)$ (as in Section \[sec:introduction\]).
Given $(E,x)$, the agent makes informative inferences $\mathbf{Ev}(E,x)$ by means of $\mathbf{M}(E',x')$ for some $(E',x')$, which may not be equal to $(E,x)$. The interpretation of $\mathbf{M}(E,x) = \mathbf{M}(E',x')$ is that the “output” of applying a method $\mathbf{M}$ to $(E,x)$ and $(E',x')$ is the same (one can imagine that $\mathbf{M}$ is a function in some programming language that takes $E$ and $x$ as inputs), whereas $\mathbf{Ev}(E,x) = \mathbf{Ev}(E',x')$ means that an agent makes the same informative inferences given $(E,x)$ or $(E',x')$. This distinction is somewhat obscured in [@mayo2014], as she defines
- $\mathrm{Infr}_E[x]$: The parametric statistical inference from a given or known $(E,x)$.
- $(E',x') \Rightarrow \mathrm{Infr}_E[x]$: An informative parametric inference about $\theta$ from given $(E,x)$ is to be computed by means of $\mathrm{Infr}_E[x]$.
The definition of $\mathrm{Infr}_E[x]$ and the name “Infr” suggest that $\mathrm{Infr}_E[x] = \mathbf{Ev}(E,x)$. However, the second definition implies that $\mathrm{Infr}_E[x]$ need not be equal to the final inference $\mathbf{Ev}(E,x)$. This is explicit in her definition of the weak conditionality principle (WCP):
- **WCP:** Given $(E_\mathrm{mix}, (j,x_j))$, condition on the $E_j$ producing the result: $(E_\mathrm{mix}, (j,x_j)) \Rightarrow \mathrm{Infr}_{E_j}[x_j]$. Do not use the unconditional formulation: $(E_\mathrm{mix}, (j,x_j)) \not \Rightarrow \mathrm{Infr}_{E_\mathrm{mix}}[(j,x_j)]$.
Using our notation, this definition is equivalent to the WCP in Section \[sec:introduction\]. However, Mayo defines the sufficiency principle as follows
- **SP2**: If there exists a sufficient statistic $T$ for $\theta$ and $T(x) = T(x')$, then $\mathrm{Infr}_E[x] = \mathrm{Infr}_E[x']$,
which is different from SP, and can be recast as
- **SP2:** If there exists a sufficient statistic $T$ for $\theta$ and $T(x) = T(x')$, then $\mathbf{M}(E, x) = \mathbf{M}(E, x')$.
The key point is that WCP is a property of $\mathbf{Ev}$ and SP2 is a property of $\mathbf{M}$. If this distinction is made, LP does not follow. The distinction between $\mathbf{Ev}$ and $\mathbf{M}$ is not made in [@birnbaum1962foundations] or Section \[sec:introduction\]. The following example, which is a slight modification of the example presented in Section 4 of [@mayo2010iii], puts the notation in context and makes clear why WCP and SP2 do not imply LP.
\[ex:mayo\] Consider binomial and negative binomial experiments $$E_1 = \{ \{0,1,2, \, ... \, , n\}, \, \Theta, \, \mathrm{Binomial}(n, \theta) \}, \qquad E_2 = \{ \{0,1,2, \, ... \, \}, \, \Theta, \, \mathrm{NegBinomial}(k, \theta) \}.$$ Suppose that a fair coin is flipped and $E_1$ is performed if the coin lands heads and $E_2$ is performed if it lands tails. Let $E_{\text{mix}}$ denote the “mixture” experiment. The outcome of $E_{\text{mix}}$ is $(j, x)$, with $j \in \{1,2\}$ ($j=1$ if $E_1$ is performed and $j=2$ if $E_2$ is performed) and $x = (k, n-k)$, where $k$ and $n-k$ are the number of successes and failures observed after performing $E_j$. The statistical method $\mathbf{M}(E,x)$ is the one-sided $p$-value for testing $\theta = \theta_0$ against $\theta > \theta_0$: $$\begin{aligned}
\mathbf{M}(E_1, x) &= \mathbb{P}( \mathrm{Binomial}(n,\theta_0) \ge x) \\
\mathbf{M}(E_2, x) &= \mathbb{P}( \mathrm{NegBinomial}(k,\theta_0) \ge x) \\
\mathbf{M}(E_{\mathrm{mix}}, x) &= 0.5 \, \mathbb{P}( \mathrm{Binomial}(n,\theta_0) \ge x) + 0.5 \, \mathbb{P}( \mathrm{NegBinomial}(k,\theta_0) \ge x).\end{aligned}$$ We assume that the agent makes inference using the rule $\mathbf{Ev}(E_{\mathrm{mix}}, (j,x)) = \mathbf{M}(E_{j}, x)$. The statistic $T(j,x) = (1,x)$ is sufficient for $\theta$ with respect to $E_{\text{mix}}$, and it satisfies both $T(1,x) = T(2,x)$ and $\mathbf{M}(E_{\text{mix}},(1,x)) = \mathbf{M}(E_{\text{mix}},(2,x))$, so SP2 is respected. WCP is automatically satisfied because the inference rule is $\mathbf{Ev}(E_{\text{mix}}, (j,x)) = \mathbf{M}(E_{j}, x) = \mathbf{Ev}(E_j, x)$ (the inference rule is chosen so that WCP is respected). It follows that WCP and SP2 do not imply LP.
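The tension between $p$-values and LP that underlies this example is easy to exhibit numerically. The following sketch is our own illustration, not a computation from [@mayo2014]: the choice $\theta_0 = 1/2$ with 9 successes and 3 failures is the classic binomial/negative binomial pairing with proportional likelihoods but different one-sided $p$-values.

```python
# Illustration (ours): with 9 successes and 3 failures, the binomial and
# negative binomial likelihoods are proportional to theta^9 (1-theta)^3,
# yet the one-sided p-values at theta_0 = 1/2 differ -- p-values violate LP.
from fractions import Fraction as F
from math import comb

theta0, s, f = F(1, 2), 9, 3          # successes s, failures f
n = s + f

# Binomial experiment: n = 12 trials fixed, observe s = 9 successes.
p_binom = sum(comb(n, x) * theta0**x * (1 - theta0)**(n - x)
              for x in range(s, n + 1))

# Negative binomial experiment: sample until f = 3 failures, observe s = 9
# successes; the p-value sums over s' >= s, i.e. 1 minus the lower tail.
p_negbin = 1 - sum(comb(x + f - 1, f - 1) * theta0**x * (1 - theta0)**f
                   for x in range(s))

assert p_binom == F(299, 4096)        # ~0.073
assert p_negbin == F(67, 2048)        # ~0.033
assert p_binom != p_negbin            # same likelihood, different p-values
```

Exact rational arithmetic avoids any floating-point caveats: the two tail probabilities are unequal as fractions, even though LP would demand identical inferences from the two inference bases.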
According to the definitions in [@mayo2014], WCP and SP2 do not imply LP, as seen in the example above. Where does Birnbaum’s proof go wrong? With WCP as stated, the mixture experiments are inferentially equivalent to the performed components: $\mathbf{Ev}(E_{\mathrm{mix}}, (1,x_1)) = \mathbf{Ev}(E_1, x_1)$ and $\mathbf{Ev}(E_{\mathrm{mix}}, (2,x_2)) = \mathbf{Ev}(E_2, x_2)$. However, SP2 does not imply $\mathbf{Ev}(E_{\mathrm{mix}}, (1,x_1)) = \mathbf{Ev}(E_{\mathrm{mix}}, (2,x_2))$: instead, it only requires $\mathbf{M}(E_{\mathrm{mix}}, (1,x_1)) = \mathbf{M}(E_{\mathrm{mix}}, (2,x_2))$, and $\mathbf{Ev}(E_{\mathrm{mix}}, (1,x_1))$ need not be equal to $\mathbf{Ev}(E_{\mathrm{mix}}, (2,x_2))$. These definitions allow Mayo to claim that, in Example \[ex:mayo\], reporting the conditional $p$-value according to the sampling distribution of the component experiment that was performed does not violate the sufficiency principle. In contrast, reporting the conditional $p$-value is a violation of SP as defined in Section \[sec:introduction\] (and the proof that WCP and SP imply LP goes through as usual). Critically, note that SP states that if there exists a sufficient statistic, the inference bases are inferentially equivalent, but there is no requirement that said sufficient statistic be used for our final inferences. If that were the case, it would imply that SP instructs the use of the unconditional $p$-value. The reason that Birnbaum’s proof does not go through in this framework hinges on the distinction between $\mathbf{Ev}$ and $\mathbf{M}$: if we defined a new WCP2 as a property of $\mathbf{M}$ (so that both SP2 and WCP2 were properties of $\mathbf{M}$), reporting the conditional $p$-value in Example \[ex:mayo\] would violate WCP2, as $\mathbf{M}(E_j, x_j) \neq \mathbf{M}(E_{\mathrm{mix}},(j,x_j))$ (and WCP2 and SP2 would, of course, imply a version of LP written in terms of $\mathbf{M}$).
Can AP be applied in frequentist statistics? {#sec:ancillaries}
============================================
We briefly discuss the applicability of the ancillarity principle in frequentist inference, motivated by comments in [@coxmayo2010], [@evans2013does], and [@mayo2014]. Since AP is equivalent to LP, frequentist statisticians who want to make conditional frequentist statements have to propose restricted versions of AP. Additionally, it is of utmost importance to find well-defined criteria for choosing among ancillaries because, as we have seen in Example \[ex:evans\], there are instances where there are multiple ancillaries one can condition on that give rise to different conditional $p$-values or standard errors. Some authors have proposed restricting the set of ancillaries to condition on ([@durbin1970birnbaum], [@kalbfleisch1975]), but this approach is problematic because there are examples where several ancillaries satisfy the restrictions (see [@basu1964recovery] for examples and [@dawid2011basu] for a concise and lucid review on the ancillarity principle and the issues that have been mentioned in this paragraph). To the best of our knowledge, there is no (restricted) formulation of AP that instructs which ancillary one should use for any given problem, and as a result there is no adequate definition of a restricted ancillarity principle that is not equivalent to the likelihood principle ([@cox1971choice] provides a heuristic that works when applied to an example in [@basu1964recovery], but it does not give a definite answer in other problems and it is not regarded as a general solution to this problem). Another issue is that there are examples where a conditional analysis is clearly desirable, but useful ancillary statistics are not available. We present two examples below.
\[ex:example8\] (Example 8 in [@berger1988likelihood]) Let $\Theta = [0,1)$, $P(X = \theta) = 1-\theta$, and $P(X = 0) = \theta$. Consider the confidence set $C = \{X\}$. Unconditionally, $P(\theta \in C) = 1-\theta$. However, if $X > 0$, we know that $C = \{X\}$ contains $\theta$ with probability 1, but $X$ is not ancillary, so the ancillarity principle would not allow conditioning on its value.
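A quick Monte Carlo check of Example \[ex:example8\] (our own sketch; the value $\theta = 0.3$ and the simulation size are arbitrary choices):

```python
# Monte Carlo check (ours) of Example 8: the set C = {X} has unconditional
# coverage 1 - theta, but covers theta with certainty whenever X > 0.
import random

random.seed(0)
theta, n = 0.3, 100_000
# X = theta with probability 1 - theta, and X = 0 with probability theta.
draws = [theta if random.random() < 1 - theta else 0.0 for _ in range(n)]

covered = [x == theta for x in draws]                 # is theta in C = {x}?
unconditional = sum(covered) / n
conditional = (sum(c for c, x in zip(covered, draws) if x > 0)
               / sum(1 for x in draws if x > 0))

assert abs(unconditional - (1 - theta)) < 0.01        # close to 1 - theta
assert conditional == 1.0                             # certain coverage
```

As the example notes, the desirable conditioning event $\{X > 0\}$ is informative about $\theta$, so it is not available to a strict follower of AP.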
\[ex:noancillary\] Let $X_1,X_2$ be independent and identically distributed random variables with $P(X_i = \theta-1) = P(X_i = \theta+1) = 1/2$ for $i \in \{1,2\}$. Let $D = |X_1-X_2|/2$, which is ancillary with $P(D=1) = P(D=0) = 1/2$. Suppose we want to evaluate the quality of the estimator $T = X_{(1)}+1$ ($X_{(1)}$ is the minimum of $X_1$ and $X_2$). Conditioning on $D$, we know that $P(T = \theta \mid D = 1) = 1$ and $P(T = \theta \mid D = 0) = 1/2$, and [@coxmayo2010] would propose reporting inferences conditional on $D$ because it is more informative than an unconditional analysis. But now consider the following modification: $P(X_i = \theta+1) = 1/2+\theta \epsilon$ and $P(X_i = \theta -1) = 1/2-\theta \epsilon$ for a known $\epsilon \in [0, 1]$ and $\theta \in [-1/(2\epsilon),1/(2\epsilon)]$. The original example is a particular case with $\epsilon = 0$. If $\epsilon \neq 0$, $D$ is not ancillary anymore, despite the fact that if $\epsilon$ is small (say $\epsilon = 10^{-100}$) we are essentially in the same situation as if $\epsilon = 0$. In addition, if $\epsilon \neq 0$, there are (even more) cases where we can retrieve $\theta$ with probability 1 given the data. Indeed, if $X_1 \neq X_2$ we still have that $\theta = X_{(1)}+1$, but now there are cases where we know the value of $\theta$ exactly even if $X_{(1)} = X_{(2)}$. Let $A_{\theta-1} = [-1/(2\epsilon)-1, 1/(2\epsilon)-1]$ and $A_{\theta+1} = [-1/(2\epsilon)+1,1/(2\epsilon)+1]$. If $X_{(1)} \in A_{\theta-1} \setminus A_{\theta+1}$, then $\theta = X_{(1)}+1$; analogously, $\theta = X_{(1)}-1$ whenever $X_{(1)} \in A_{\theta+1} \setminus A_{\theta-1}$. Note that if $\epsilon > 1/2$, $A_{\theta-1} \cap A_{\theta+1} = \emptyset$ and we can always retrieve the value of $\theta$. If we want to assess the performance of $T$ conditionally, we know that $$\begin{aligned}
P(T &= \theta \mid X_{(1)} \neq X_{(2)}) = 1 \\
P(T &= \theta \mid X_{(1)} = X_{(2)}, X_{(1)} \in (A_{\theta-1}\setminus A_{\theta+1})) = 1 \\
P(T &= \theta \mid X_{(1)} = X_{(2)}, X_{(1)} \in (A_{\theta+1}\setminus A_{\theta-1})) = 0 \\
P(T &= \theta \mid X_{(1)} = X_{(2)}, X_{(1)} \in A_{\theta-1} \cap A_{\theta+1} )= \frac{(1/2-\theta \epsilon)^2}{(1/2-\theta \epsilon)^2+(1/2+\theta \epsilon)^2},\end{aligned}$$ but unconditionally $P(T = \theta) = 1 - (1/2+\theta \epsilon)^2$, which depends on $\theta$ and ranges from 1 to 0 for $\theta \in [-1/(2\epsilon),1/(2\epsilon)]$. Therefore, the confidence level of the set $C = \{T\}$ is $\inf P_\theta(\theta \in C) = 0$, which is clearly undesirable and misleading (especially in cases where $\epsilon > 1/2$, where a conditional analysis reveals if $T = \theta$ with probability 0 or 1 depending on the data). As an aside, a modified estimator that takes on the value $X_{(1)}-1$ whenever $X_{(1)} = X_{(2)}$ and $X_{(1)} > 0$ has better performance, but we used $T$ for illustrative purposes.
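The claims for the $\epsilon = 0$ case of Example \[ex:noancillary\] can be verified by exhaustive enumeration (our own sketch; fixing $\theta = 0$ is harmless here because none of the probabilities involved depend on $\theta$ when $\epsilon = 0$):

```python
# Exact check (ours) of the epsilon = 0 case: conditioning on the ancillary
# D = |X1 - X2| / 2 splits the performance of T = min(X1, X2) + 1.
from itertools import product
from fractions import Fraction as F

theta = 0  # probabilities below do not depend on theta when epsilon = 0
# The four equally likely outcomes (x1, x2), each with probability 1/4.
outcomes = [(x1, x2, F(1, 4)) for x1, x2 in
            product([theta - 1, theta + 1], repeat=2)]

def prob(event):
    return sum(pr for x1, x2, pr in outcomes if event(x1, x2))

hit = lambda x1, x2: min(x1, x2) + 1 == theta         # T recovers theta
d1 = lambda x1, x2: abs(x1 - x2) == 2                 # ancillary D = 1
d0 = lambda x1, x2: x1 == x2                          # ancillary D = 0

assert prob(hit) == F(3, 4)                           # unconditional: 3/4
assert prob(lambda a, b: hit(a, b) and d1(a, b)) / prob(d1) == 1
assert prob(lambda a, b: hit(a, b) and d0(a, b)) / prob(d0) == F(1, 2)
```

The same enumeration strategy extends to $\epsilon \neq 0$ by reweighting the four outcomes, which is how one can confirm that $D$ then fails to be ancillary.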
Finally, we note that applying the ancillarity principle can be suboptimal according to strictly frequentist criteria: in practice, there are cases where an unconditional test is preferable to a conditional test, as in the following example, inspired by an example in [@cox1958some].
Suppose a production line is periodically tested to see if it is operating correctly. If correct, it produces a part of diameter 1; periodically it goes out of line and then produces parts with diameter 1.1. In the testing, the parts are measured with one of two measuring instruments: an old one, which produces a normal observation with mean equal to the true diameter of the part and standard deviation 0.1, and a new one, which produces a normal observation with mean equal to the true diameter and standard deviation 0.05. The old and new measuring instruments are each available with probability 1/2 (as there is another production line for which they are also used). If the production line is deemed to be out of line, it must be shut down and reset, at considerable expense. The company does a cost-benefit analysis and determines that it is optimal to control the overall Type I error of the testing at the 0.05 level. This is a scenario in which frequentist analysis is absolutely appropriate, in that there is true long-term repetition of the test. Also, the cost-benefit analysis is presumably carried out in a Bayes-frequentist sense, since historical rates of in-line and out-of-line operation must be taken into account. If the company followed WCP, it would run the 0.05 level test conditional on which measuring instrument is being used at each test. But this will lose the company money, as the power of this test for detecting an out-of-line process (which is 0.646) is 9% less than that of the most powerful test (which is 0.694). This most powerful test corresponds to using Type I error probabilities of 0.099 and 0.001 for the old and new measuring instruments, respectively.
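The gap between the conditional test and the best unconditional test can be explored numerically. The sketch below assumes a single measurement per test with the stated diameters and standard deviations (the powers quoted in the text come from the example's own cost-benefit setup, so the exact numbers need not coincide); it scans over unconditional tests whose average Type I error is 0.05:

```python
from statistics import NormalDist

N = NormalDist()  # standard normal

def power(alpha, delta):
    # One-sided z-test at level alpha against a mean shift of delta sds.
    return 1 - N.cdf(N.inv_cdf(1 - alpha) - delta)

# Shift 0.1 seen with sd 0.1 (old) or sd 0.05 (new): delta = 1 and 2.
d_old, d_new = 1.0, 2.0

# WCP test: level 0.05 conditional on the instrument actually used.
conditional = 0.5 * (power(0.05, d_old) + power(0.05, d_new))

# Unconditional tests: any split (a, 0.1 - a) averaging to level 0.05;
# the equal split a = 0.05 is itself one of the candidates scanned.
best = max(0.5 * (power(a / 1e4, d_old) + power(0.1 - a / 1e4, d_new))
           for a in range(1, 1000))

print(conditional, best)
```

By construction `best` can never fall below `conditional`; how much it exceeds it depends on the effect sizes, which is exactly the room the cost-benefit analysis exploits.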
The example above is interesting in that it suggests that, for frequentists, the only way to implement the conditionality principle is to use a method that is compatible with Bayesian reasoning (as the unconditional test would be equivalent to the Bayes rule with respect to the loss function implied by the cost-benefit analysis). This is not surprising, given the complete class theorems that show that optimal frequentist decision procedures are necessarily Bayesian.
Conclusions
===========
The articles [@evans2013does] and [@mayo2014] contain thought-provoking discussions about the conditions under which the result in [@birnbaum1962foundations] is valid, but neither of them shows that WCP and SP do not imply LP according to the definitions in Section \[sec:introduction\], which, in our view, are equivalent to the definitions in [@birnbaum1962foundations].\
Evans avoids introducing $\mathbf{Ev}$, which is central in Birnbaum’s argument, and defines statistical principles as set relations on the (product) space of inferences. If $\mathbf{Ev}$ is introduced with the property that $\mathbf{Ev}(E,x) = \mathbf{Ev}(E',x')$ if and only if $(E,x) \sim_C (E',x')$ or $(E,x) \sim_S (E',x')$, then Birnbaum’s result follows. If we stick to Evans’ framework, the union of the set relation defined by the sufficiency principle ($S$) and the conditionality principle ($C$) does not equal the set relation defined by the likelihood principle ($L$). This might seem surprising at first glance but, if the union did equal $L$, two inference bases with proportional likelihoods would be equivalent according to either the sufficiency principle or the conditionality principle individually, which is clearly false. What is true is that the smallest equivalence relation generated by $S \cup C$ equals $L$. As explained in Section \[ex:evans\], the equivalence relation generated by $S \cup C$ only contains inference bases that are equivalent through a chain of applications of the principles.\
Mayo defines statistical principles by making a distinction between the output of methods ($\mathbf{M}$) and the inferences that are made by an agent using them ($\mathbf{Ev}$): the weak conditionality principle is defined as a property of $\mathbf{Ev}$, whereas the sufficiency principle is defined as a property of $\mathbf{M}$. For example, this distinction allows Mayo to claim that in a mixture experiment where a Negative Binomial or Binomial experiment is selected with equal probability, reporting the conditional $p$-value does not result in a violation of the sufficiency principle (see Example \[ex:mayo\]). In the framework of [@mayo2014], the weak conditionality principle and the sufficiency principle do not imply the likelihood principle, but the definition of the sufficiency principle differs from that in [@birnbaum1962foundations], where the distinction between the output of methods and informative inference is not made.
[^1]: We use a slightly different notation than in [@evans2013does]. We define $C$ as a formalization of WCP and $A$ as a formalization of AP. However, [@evans2013does] does not consider WCP at all and defines a set relation (which is denoted $C$ in Evans’ article) which is equivalent to our $A$. We apologize for the possible confusion that this change might cause.
---
abstract: 'The pepper-pot method is a popular emittance measurement technique for high intensity beams at low energy such as those generated by photo-injectors. In this paper, the beam dynamics in the space charge dominated regime and analytical design criteria for a mask-based emittance measurement (pepper-pot method) are revisited. A tracking code developed to test the performance of a pepper-pot setup is introduced. Examples of such testing are presented with particle distributions that were generated using PARMELA under different focusing conditions. These distributions were numerically tested against a series of mask geometries suggested by analytical criteria. The resulting fine-tuned geometries and beam dynamics features observed are presented.'
address: |
The University of Manchester, Manchester M13 9PL, United Kingdom\
also The Cockcroft Institute of Accelerator Science and Technology, Warrington WA4 4AD, United Kingdom
author:
- 'O. Apsimon[^1], B. Williamson and G. Xia'
title: 'A Numerical Approach to Designing a Versatile Pepper-pot Mask for Emittance Measurement[^2]'
---
Introduction
============
The pepper-pot method is a well-known technique for phase space characterisation at low energies, before energy boosting, where the space charge force is significant for high brightness beams [@Zhang; @Anderson]. It is widely used in radio-frequency photo-injectors, which produce high brightness electron beams for light sources [@PSI; @SPARC] and test facilities for other large scale applications [@UCLA; @FNAL; @CLIC]. Today, this method remains popular for measuring the phase space of electron beams generated by both conventional and advanced accelerators [@AWAKE; @VELA; @AlphaX; @AWA].
In this paper, we recapitulate the beam dynamics in the operating regime of the pepper-pot emittance measurement by studying the envelope equation for a space charge dominated beam, and we numerically demonstrate the interplay between the defocusing due to space charge and that due to beam emittance. For these simulations, a typical photo-injector model was implemented in PARMELA [@PARMELA], as sketched in Fig. \[fig:layout\]. It includes an RF gun at 3$\,$GHz working at an on-axis field of 100$\,$MV/m [@PHIN], followed by a travelling wave booster structure at the same frequency working at 15$\,$MV/m [@Booster]. A pepper-pot setup is envisaged to be located after the RF gun, prior to the booster structure (132$\,$cm after the cathode), and an analytical approach to its design is presented. It is found that a design based purely on analytical criteria does not always give reliable results and hence has to be validated and fine-tuned numerically. Consequently, a tracking algorithm is introduced to model the transport through the mask and perform these validations.
![The layout of a generic RF injector.[]{data-label="fig:layout"}](layout.pdf){width="45.00000%"}
In terms of beam dynamics, the working point of a photo-injector is generally where the beam is quasi-laminar at the start of the booster linac (in the vicinity of the mask) under appropriate focusing (provided by solenoids), which mainly performs the matching of the beam envelope. This ensures emittance compensation under space charge by aligning the phase space angle of each slice and minimising the projected emittance [@Serafini; @Floettmann]. Both the analytical and the numerical design of a pepper-pot system are based on given beam parameters such as the rms beam size, divergence, emittance and energy. Therefore, naturally, the performance of pepper-pot measurements is optimised for this quasi-laminar regime. Under focusing conditions that do not fulfil emittance compensation, such as when the focal point of the beam envelope occurs before or after the mask (hence the incoming beam is non-laminar at the mask location), emittance measurements will include errors due to an underestimated beam divergence and beam size. Furthermore, the pepper-pot analysis algorithm given in Section \[algorithm\] assumes a Gaussian incident beam profile, limiting the performance of the method to Gaussian, ideally quasi-laminar, beams. We tested various analytically suggested pepper-pot geometries using the above-mentioned numerical tool and compared the reconstructed emittance under different solenoidal focusing with the values retrieved from PARMELA simulations. As a result of these studies, we demonstrated that non-laminar beam envelopes, or more specifically beams with uncompensated projected emittances, might not be suitable for pepper-pot measurements.
Beam Characteristics {#characteristics}
====================
In general, a photo-injector operates in a space charge dominated regime, generating a high intensity beam which is still at low energy (in the order of a few MeVs) before acceleration in a booster linac. For such a beam, the significance of space charge defocusing was semi-analytically studied by comparing it to the outward pressure of the beam associated with the normalised beam emittance. The envelope equation in paraxial limit is given as [@Serafini], $$\sigma^{\prime\prime} + \sigma^{\prime}\frac{\gamma^{\prime}}{\beta^2\gamma}+K_r\sigma - \frac{\kappa_s}{\sigma\beta^3\gamma^3}-\frac{\varepsilon^2_n}{\sigma^3\beta^2\gamma^2} = 0,
\label{eqn:envelope}$$ where $\sigma$ is the cylindrically symmetric rms beam size under the effect of an external linear focusing channel with strength $K_r$, $\beta$ is the normalised mean beam velocity, $\gamma$ is the beam energy in units of the rest mass energy, $\kappa_s$ is the beam perveance and $\varepsilon_n$ is the normalised transverse emittance of a beam slice. In this equation, the last two terms represent the defocusing due to space charge and beam emittance, respectively; the ratio of these two terms determines the dominant defocusing factor. The second and third terms represent focusing due to adiabatic damping and the external channel, respectively. Here, the beam perveance is given by $\kappa_s = I/2I_0$, where $I=Q/(2\sigma_z \sqrt{2\ln(2)})$ is the peak beam current for a Gaussian beam and $I_0$ is a constant known as the Alfvén current (17$\,$kA).
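As a quick numerical illustration, the ratio of the emittance term to the space-charge term of Eq. \[eqn:envelope\] can be evaluated directly. The beam parameters below are indicative of the 1$\,$nC, few-MeV regime discussed later, not exact simulation output:

```python
import math

ALFVEN_CURRENT = 17e3  # I0 [A]

def emittance_to_space_charge(eps_n, sigma, beta, gamma, Q, sigma_z):
    """Ratio of the emittance term to the space-charge term of the rms
    envelope equation, eps_n^2*beta*gamma/(sigma^2*kappa_s); values
    below 1 indicate a space charge dominated beam.
    Units: eps_n [m rad], sigma [m], Q [C], sigma_z [s]."""
    I = Q / (2 * sigma_z * math.sqrt(2 * math.log(2)))  # peak current, as in the text
    kappa_s = I / (2 * ALFVEN_CURRENT)                  # beam perveance
    return eps_n ** 2 * beta * gamma / (sigma ** 2 * kappa_s)

# Indicative numbers: 1 nC, 4 ps bunch, ~6.6 MeV (gamma ~ 13.9), mm-scale spot.
r = emittance_to_space_charge(eps_n=4e-6, sigma=1e-3, beta=1.0,
                              gamma=13.9, Q=1e-9, sigma_z=4e-12)
print(r)  # well below 1: space charge dominated
```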
The PARMELA-generated electromagnetic fields (from the RF gun and booster linac) and magnetic fields (from the solenoids) acting on the beam, for an envelope matched to the linac, are presented in Fig. \[fig:fields\]-a. The corresponding magnetic field profile has a peak value of approximately 2914$\,$G, and this profile was scaled to achieve different amounts of focusing around the RF gun. All results in this paper are presented for a 1$\,$nC beam reaching an energy of 6.6$\,$MeV after the gun (and 17$\,$MeV after the linac, which is not relevant at the mask location). Figure \[fig:fields\]-b shows how the beam emittance and rms size at the location of the mask evolve as a function of the solenoid field. According to this, the beam emittance is minimised at 3029$\,$G, slightly after the beam focus at 2971$\,$G. Four working points were chosen across this focusing range: one that represents an under-focused envelope (2857$\,$G), a point with an envelope optimised to minimise the emittance at the location of the mask (3029$\,$G), another that ensures the matching to the booster linac (2914$\,$G) and finally a point with an over-focused envelope (3086$\,$G). These beam conditions are used to study the performance of different pepper-pot mask designs for different envelope characteristics, which are summarised in Table \[tbl:characteristics\].
\[tbl:characteristics\]
For a relativistic beam ($\beta\approx1$), apart from the constant normalised beam energy $\gamma$ and beam current $I$, the ratio of emittance to space charge defocusing scales proportionally to the square of the beam emittance and inversely proportionally to the square of the rms beam size. Hence the evolution of this ratio is mainly determined by the beam envelope. Figure \[fig:spacecharge\]-(a) shows how the ratio $\varepsilon_n^2\beta\gamma / \sigma^2 \kappa_s$ evolves through the beamline for varying $\varepsilon_n$ and $\sigma(s)$ (given in Fig. \[fig:spacecharge\]-(b)) as well as $\beta(s)$ and $\gamma(s)$ as the beam undergoes focusing and acceleration. Here, $s$ is the curvilinear coordinate following the beam trajectory. In Fig. \[fig:spacecharge\]-(a), the vertical dashed lines delimit the regions of acceleration due to the RF gun and the booster linac. In these regions, emittance oscillations are visible, as expected from the time-varying nature of the RF fields and the envelope mismatch [@Serafini]; hence, the amplitude of these oscillations depends on the solenoidal focusing as well as on the bunch charge (i.e., the strength of the space charge force, not studied here). The horizontal red dashed line indicates the value where the space charge defocusing and the intrinsic beam emittance are equal. Consequently, beam configurations below this point lead to space charge dominated beams. The solid black line marks the location of the emittance measurement (the mask at 132$\,$cm).
Figure \[fig:spacecharge\]-(a) shows that for all the studied focusing conditions the ratio $\varepsilon^2_n \beta \gamma / \sigma^2 \kappa_s$ remains less than one throughout the majority of the beamline (below the red line marking the space charge limit). The evolution of a beam envelope accelerated under such conditions therefore remains dominated by space charge forces. Consequently, to measure only the intrinsic beam emittance, a method to remove the space charge contribution to the evolving beam envelope is required. A common technique is the pepper-pot method, which uses a mask to isolate the intrinsic beam emittance and captures the 4D transverse phase space ($x$, $x^{\prime}$, $y$, $y^{\prime}$) of a beam in a single shot [@Zhang; @Anderson]. The beam envelopes and projected emittance evolution are presented in Fig. \[fig:spacecharge\]-(b) with the corresponding solenoid field values. Emittance compensation occurs at 2914$\,$G, minimising the emittance delivered after the linac [@Serafini; @Floettmann].
4D Phase Space Sampling {#sampling}
=======================
In the previous section we showed that for sufficiently intense beams at low energy, emittance should be measured under conditions where the defocusing due to space charge is eliminated, so that only the defocusing due to beam emittance is detected. This is facilitated with a mask, comprising either slits or holes in a high-Z material, designed to ensure that each electron ensemble (beam-let) that passes through undergoes negligible space charge defocusing. In the case of a mask with slits, the measurement is only in one plane (x-x$^{\prime}$ or y-y$^{\prime}$, depending on the orientation of the slits); utilising a mask with holes, called a “pepper-pot” mask, allows a four-dimensional (x-x$^{\prime}$ and y-y$^{\prime}$ simultaneously) single-shot measurement of the transverse emittance.
Emittance Analysis Algorithm for Pepper-pot Method {#algorithm}
--------------------------------------------------
This method splits a space charge dominated beam into beam-lets by using a mask with holes arranged in a rectangular matrix, so that each beam-let carries an amount of charge small enough to cause no significant space charge defocusing [@Zhang; @Anderson]. After propagating through a drift section, these beam-lets are observed on a fluorescent or optical transition radiation screen located downstream of the mask, as sketched in Fig. \[fig:ppot\]. Intensity projections of the beam-lets on either axis are used to calculate each term in the rms emittance equation as a weighted sum over the relative intensities of the individual beam-lets, as shown in Eq. \[eqn:emitt\_formulae\] to Eq. \[eqn:xxprime\] [@Anderson].
![The working principle of the pepper-pot emittance measurement.[]{data-label="fig:ppot"}](ppt_concept.png){width="40.00000%"}
$$\varepsilon_x = \sqrt{\langle x^2 \rangle \langle x^{\prime 2} \rangle - \langle xx^{\prime} \rangle ^2}
\label{eqn:emitt_formulae}$$
$$\langle x^2 \rangle = \frac{\sum_{i=1}^N \rho_i x_{i,c}^2}{\sum_{i=1}^N \rho_i }
\label{eqn:xsquare}$$
$$\langle x^{\prime 2} \rangle = \frac{ \sum_{i=1}^N \rho_i (x^{\prime 2}_{i,c} + \sigma_i^{\prime 2}) }{\sum_{i=1}^N \rho_i }
\label{eqn:x2prime}$$
$$\langle xx^{\prime} \rangle = \frac{ \sum_{i=1}^N \rho_i x_{i,c} x_{i,c}^{\prime}}{\sum_{i=1}^N \rho_i }
\label{eqn:xxprime}$$
In the above equations, the index $c$ denotes that the values are taken with respect to the centroid of the beam-lets (in the analysis code, the centroid is defined as the beam-let with the largest intensity); $\rho_i$ is the measured intensity of the $i^{th}$ beam-let; $\sigma^{\prime}_i = \sigma_i/L$ is the spread in the divergence of the $i^{th}$ beam-let due to its finite width ($\sigma_i$); and $x^{\prime}_i$ is the divergence of the $i^{th}$ beam-let, calculated by correlating the hole locations and the beam-let locations on the screen as $\langle x_i - id \rangle / L$, where $d$ is the distance between the holes.
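Eqs. \[eqn:emitt\_formulae\]-\[eqn:xxprime\] translate into a few lines of code. The sketch below assumes the beam-let intensities, centroid-relative positions, divergences and rms widths have already been extracted from the screen image:

```python
import math

def rms_emittance(rho, x, xp, widths, L):
    """rho: beam-let intensities; x: centroid-relative beam-let positions;
    xp: beam-let divergences; widths: rms beam-let widths sigma_i;
    L: mask-to-screen distance (so sigma_i' = sigma_i / L)."""
    w = sum(rho)
    x2 = sum(r * xi ** 2 for r, xi in zip(rho, x)) / w
    xp2 = sum(r * (xpi ** 2 + (si / L) ** 2)
              for r, xpi, si in zip(rho, xp, widths)) / w
    xxp = sum(r * xi * xpi for r, xi, xpi in zip(rho, x, xp)) / w
    return math.sqrt(x2 * xp2 - xxp ** 2)
```

For a perfectly correlated phase space ($x^{\prime}_i \propto x_i$) the residual emittance comes only from the finite beam-let widths, which provides a convenient sanity check of an implementation.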
Mask Design {#design}
-----------
The basic design criteria for a pepper-pot system are formed considering the strength of the space charge force present, the divergence and rms size of the beam, and the angular resolution of the system [@Piot]. After the mask, the beam-lets should carry only a small fraction of the initial charge to prevent them from exerting any significant space charge force. The ratio between the space charge and the beam emittance, $R^{\prime}$, can be written in terms of the mask geometry, i.e., $\omega$ (hole diameter), $d$ (centre-to-centre distance between the holes) and $L$ (mask-screen distance), as in Eq. \[eqn:rprime\] [@Anderson]. $$R^{\prime} = \frac{2I}{\gamma^2 I_0 \varepsilon_n} \frac{\omega L}{d}
\label{eqn:rprime}$$
The thickness of the mask is chosen so that either all particles outside the hole apertures are completely absorbed or they are scattered to form a signal distinctly separate from that originating from the beam-lets. To eliminate electrons diffusing through the mask, one should choose a mask thickness of at least one radiation length. The radiation length, $L_s$, of the material for a given incoming beam energy, $E$, is a guideline for determining the mask thickness. It is calculated from the stopping power, $dE/dx$, of the target material, given by the Bethe-Bloch equation, and is expressed in practical units in Eq. \[eqn:Ls\], where $\rho$ is the density of the mask material (19.25$\,g/cm^3$ for tungsten). $$L_s = \frac{E}{\frac{dE}{dx}} \approx \frac{E(MeV)}{1.5(MeV\,cm^2\,g^{-1})\rho(g\,cm^{-3})}
\label{eqn:Ls}$$
When the expected operational intensity is high enough that collecting sufficient light for imaging is not a concern, the mask can be made one radiation length thick so that no background due to diffusing electrons is generated. On the other hand, the aspect ratio of the mask, i.e., the ratio between the mask thickness and the hole diameter, can be challenging to achieve for a very fine matrix of holes in a thick mask. The most common machining technique is laser drilling, which is limited to about 200$\,\mu$m for the achievable distance between holes and to aspect ratios of generally up to 110$\%$. Electrical discharge machining is another technique with finer machining capabilities at a higher cost, and it can easily overcome the machining limitations mentioned above. When there are machining limitations, a pepper-pot mask thinner than the radiation length can provide more flexibility. In this case, electrons stopped by the tungsten slab will be scattered rather than absorbed. Such scattered electrons produce a broadly Gaussian background on top of the electrons propagating through the holes, which can be separated from the beam-let distribution by offline background subtraction.
Once the beam-lets propagate through the mask, they travel through a certain drift length before reaching the screen. Depending on the beam divergence, $\sigma^{\prime}$, the observation screen should be distanced from the mask to prevent beam-lets from overlapping on the screen, namely, fulfilling the condition $4\sigma^{\prime}L < d$.
Finally, one can ensure that the position and angle resolutions are comparable, $\sigma/d = L\sigma^{\prime}/r_d$, where $\sigma$ is the rms beam size and $r_d$ is the resolution of the detector (the pixel size of the CCD camera), taken as 10$\,\mu$m for this study.
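These criteria can be bundled into a quick feasibility check for a candidate geometry. The function below simply mirrors Eq. \[eqn:rprime\] together with the overlap and resolution conditions; the example numbers are illustrative, not the design values of Table \[tbl:ana\_mask\_design\]:

```python
def check_geometry(omega, d, L, I, gamma, eps_n, sigma, sigma_p, r_d, I0=17e3):
    """omega: hole diameter [m]; d: hole spacing [m]; L: mask-to-screen
    distance [m]; sigma, sigma_p: rms beam size [m] and divergence [rad]
    at the mask; r_d: detector resolution [m]; I0: Alfven current [A]."""
    r_prime = 2 * I * omega * L / (gamma ** 2 * I0 * eps_n * d)  # want ~1
    no_overlap = 4 * sigma_p * L < d            # beam-lets stay separated
    res_ratio = (sigma / d) / (L * sigma_p / r_d)  # position/angle balance, want ~1
    return r_prime, no_overlap, res_ratio

# Illustrative check for a 100 um hole, 500 um pitch, 5 cm drift geometry.
rp, ok, rr = check_geometry(omega=100e-6, d=500e-6, L=0.05, I=100.0,
                            gamma=13.9, eps_n=4e-6, sigma=1e-3,
                            sigma_p=0.5e-3, r_d=10e-6)
```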
An initial analytical mask design was performed under the focusing condition ensuring that the beam envelope is matched to the linac entrance. Mask geometries spanning 100-200$\,\mu$m in hole diameter, 100-1250$\,\mu$m in centre-to-centre hole distance and 30-270$\,$mm in mask-to-screen distance were explored in the light of the design criteria above. The results fulfilling $R^{\prime} \approx 1$ are summarised in Table \[tbl:ana\_mask\_design\]. Apart from the criteria summarised in the table, one should make sure that the angular aperture of a single hole on the mask, $\omega/4L_s$, is larger than the expected beam angle, $\varepsilon_n/\gamma \sigma$. For the parameters considered in this study, the average beam angle ranges from 0.5 to $<$1$\,$mrad, while the angular aperture of the mask ranges from 10 to 22$\,$mrad, depending on the hole diameter and assuming that the mask thickness is equal to the radiation length (2.3$\,$mm) of tungsten for 6.6$\,$MeV electrons. Therefore, the angular beam clearance is more than sufficient for the geometries considered.
\[tbl:ana\_mask\_design\]
In the table, the values for $4\sigma^{\prime}L$ suggest no beam-let overlap on the screen. However, the position resolution $\sigma/d$ becomes an order of magnitude smaller than the angular resolution, $L\sigma^{\prime}/r_d$, towards the bottom of the table. This implies that one might not achieve a sufficient number of beam-lets on the screen. These design points were tested for the beam distributions created by PARMELA under the focusing conditions given in Section \[characteristics\], using a custom tracking code introduced in the next section.
Tracking Simulations {#tracking}
--------------------
An initial Gaussian beam with 0.5$\,$mm radius and 4$\,$ps bunch length was tracked with PARMELA in the presence of the previously mentioned RF and magnetic fields. The distributions at the location of the mask are retrieved from PARMELA and further tracked through a mask with a given geometry up to a screen located downstream, using a MATLAB [@MATLAB] script. Once the distributions on the screen are extracted, and after a polynomial background subtraction, each beam-let is processed to calculate the normalised emittance of the initial beam entering the mask using the pepper-pot algorithm introduced in Section \[algorithm\]. One should note that in a real-life measurement the background can be more complicated due to external effects such as ambient light, heating of the screen and x-rays from the interaction of the electrons with the tungsten mask.
PARMELA simulates macro particles, which are ensembles representing many real particles. A macro particle unfolding algorithm is included to provide a more realistic signal intensity for tracking. This algorithm creates a number of new particles, Gaussian-distributed within a certain radius around the mean position of each macro particle, with the same divergence as the mother particle. In this study, each macro particle is unfolded into 100 new particles within a radius of 10$\,\mu$m. The unfolding smooths out the distributions to help with the tracking and analysis. Subsequent tests showed that the final normalised emittance changes by only 1-2$\%$ between unfolding into 100 new particles and no unfolding.
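The unfolding step can be sketched as follows; the choice of one standard deviation per half-radius is an assumption made for illustration, as the text only specifies a Gaussian distribution within a given radius:

```python
import random

def unfold(macros, n_new=100, radius=10e-6, seed=1):
    """Replace each macro particle (x, y, xp, yp) by n_new particles,
    Gaussian-distributed within `radius` of its position and carrying
    the mother particle's divergence."""
    rng = random.Random(seed)
    out = []
    for x, y, xp, yp in macros:
        produced = 0
        while produced < n_new:
            dx = rng.gauss(0.0, radius / 2)  # sigma = radius/2 is an assumption
            dy = rng.gauss(0.0, radius / 2)
            if dx * dx + dy * dy <= radius * radius:  # keep within the radius
                out.append((x + dx, y + dy, xp, yp))
                produced += 1
    return out
```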
Unfolded particles coinciding with the holes on the front face of the mask are considered survived particles and are kept for further analysis. The divergence of the beam is taken into account during the propagation through the mask thickness and the drift section up to the screen; a particle whose trajectory exceeds a hole aperture is considered absorbed and is removed from the rest of the calculations.
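The survival test and the drift to the screen can be sketched as below, for straight cylindrical holes with absorbing walls; the function name and the assumption of non-overlapping holes are illustrative:

```python
def track_through_mask(particles, holes, omega, t, L):
    """particles: (x, y, xp, yp) at the mask front face; holes: list of
    hole-centre (x, y) positions; omega: hole diameter; t: mask
    thickness; L: drift from the mask back face to the screen.
    Returns (x, y) screen positions of surviving particles."""
    r2 = (omega / 2) ** 2
    screen = []
    for x, y, xp, yp in particles:
        for hx, hy in holes:
            if (x - hx) ** 2 + (y - hy) ** 2 <= r2:
                # propagate through the thickness; absorb wall hits
                xe, ye = x + xp * t, y + yp * t
                if (xe - hx) ** 2 + (ye - hy) ** 2 <= r2:
                    screen.append((xe + xp * L, ye + yp * L))
                break  # holes assumed not to overlap: at most one match
    return screen
```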
Results of beam tracking for the four different focusing conditions are presented in Fig. \[fig:results\]. These include the transverse projections of the initial distributions arriving at the mask location, the distributions at the front and back faces of the mask, and the distributions at the screen. From Fig. \[fig:results\]-(a) to (d), the distributions represent an under-focused envelope, an envelope satisfying the matching to the linac, an envelope minimising the emittance at the mask, and an over-focused envelope.
The transmission to the screen is found to be 10 to 50$\%$, depending on the number and size, namely the density, of the holes on the mask. The tracking model assumes a mask thickness greater than or equal to the radiation length of the material; hence, particles hitting the mask are considered absorbed rather than propagating through, scattering off the mask material and hitting the screen. Under this assumption, 1$\%$ of the particles travelling through the holes are absorbed. In reference [@Cheymol], the scattering of particles at the edges of a slit is studied via Monte Carlo simulations. Those results are used to investigate the effects of the slit geometry, as well as of the mask material, on the amount of scattered background particles reaching the observation screen. The results reported in this paper are based on a mask design with straight cylindrical holes.
Systematic Error Analysis
-------------------------
The position ($\sigma_{x_i}$) and intensity ($\sigma_{\rho_i}$) stability of the beam-lets can be approximated by the pointing and intensity stability of the laser, respectively, ignoring any field errors or misalignment. The error on the angles is calculated using the beam-let widths and the distance between the mask and the observation screen, as in Eq. \[eqn:x2prime\]: $\sigma^{\prime} = \sigma_i / L$.
When the average systematic error is much smaller than the statistical error of the measurements, the diagnostic is deemed accurate within the given beam stability.
The systematic error on a function $f(x,y,z)$ is derived using the general expression given in Eq. \[eqn:sys\_err0\], $$\sigma^2_{f(x,y,z)} = \sum\bigg(\frac{\partial f(x,y,z)}{\partial i}\bigg)^2\sigma_i^2
\label{eqn:sys_err0}$$ where $\sigma_i$, with $i=x,y,z$, are the uncertainties on the observables $x$, $y$ and $z$.
The systematic errors, such as the position and angle resolution of the mask ($\sigma_x$, $\sigma_{x^{\prime}}$) and the intensity fluctuations of the beam ($\sigma_{\rho}$), were propagated through the variables of each term in the rms emittance calculation of the previous section and are presented in Eqs. \[eqn:sys\_err1\]-\[eqn:sys\_err2\].
$$\sigma^2_{\sum\rho x^2} = \sum^N_{i=1}x^4_i\sigma^2_{\rho_i} + 4x^2_i\rho^2_i\sigma^2_{x_i}
\label{eqn:sys_err1}$$
$$\sigma^2_{\sum \rho x^{\prime2}} = \sum^N_{i=1}x_i^{\prime4} \sigma^2_{\rho_i}+4x^{\prime2}_i\rho^2_i\sigma^2_{x^{\prime}_i}$$
$$\sigma^2_{\sum\rho x x^{\prime}} = \sum^N_{i=1}\rho^2_ix_i^{\prime}\sigma^2_{x_i} + \rho^2_ix^2_i\sigma^2_{x_i^{\prime}} + x_i^2 x_i^{\prime 2}\sigma_{\rho_i}^2$$
$$\sigma^2_{(\sum\rho)^2} = 4\bigg( \sum_{i=1}^N \sigma_{\rho_i}^2 \bigg) \bigg( \sum_{i=1}^N \rho_i \bigg)^2
\label{eqn:sys_err2}$$
Finally, errors associated with each term were combined to provide the systematic error on the emittance measurement. This is shown in Eq. \[eqn:sys\_error\] in terms of the moments and intensities of the beam-lets as well as the individual errors on these observables.
$$\sigma^2_{\varepsilon} = \sigma^2_{(\sum\rho x^2 \sum\rho x^{\prime 2}-(\sum\rho x x^{\prime})^2)/\sum \rho}$$
$$= \frac{(\rho_ix_i^{\prime 2})^2(4\rho_i^2x_i^2\sigma^2_{x_i}+x_i^4\sigma^2_{\rho_i})}{4\varepsilon^2(\sum^N_{i=1}\rho_i)^4}$$
$$+\frac{(\rho_i^2 x_i^2)^2 (4\rho_i^2 x_i^{\prime 2} \sigma^2_{x_i^{\prime}} + x_i^{\prime 4} \sigma^2_{\rho_i})}{4\varepsilon^2(\sum^N_{i=1}\rho_i)^4}$$
$$+\frac{(\rho_i^2 x_i^{\prime 2}\sigma^2_{x_i} + \rho_i^2 x_i^2 \sigma^2_{x_i^{\prime}} + x_i^2 x_i^{\prime 2} \sigma^2_{\rho_i})(\rho_i x_i x_i^{\prime})^2}{\varepsilon^2(\sum^N_{i=1}\rho_i)^4}$$
$$-\frac{2(\sum^N_{i=1}\rho_i x_i^2)(\sum^N_{i=1}\rho_i x_i^{\prime 2})(\sum^N_{i=1}\rho_i x_i x_i^{\prime})^2(\sum^N_{i=1} \sigma^2_{\rho_i})}{\varepsilon^2(\sum^N_{i=1}\rho_i)^6}$$
$$+\frac{\varepsilon^2 (\sum^N_{i=1} \sigma^2_{\rho_i})}{(\sum^N_{i=1}\rho_i)^6}
\label{eqn:sys_error}$$
One should also note that a correction can be introduced on the angular spread of the beam-lets for beams incident on the mask with large divergences. This is generally the case for electron beams generated through laser-plasma interaction; the large divergence distorts the beam-let profile during the propagation between the mask and the screen, introducing a correlated divergence. The correction used to remove this correlated divergence is given in Eq. \[eqn:correction\], and the concept is explained and sketched in detail in [@ppt_correction]. Note that the notation in the reference differs from that in this paper regarding the rms angular spread due to emittance ($\sigma^{\prime}_i$, where $i$ refers to the $i^{th}$ beam-let), the measured rms beam-let width ($\sigma_i$), the hole width ($\omega$) and the distance between the mask and the screen ($L$).
$$\sigma_i^{\prime2} = \frac{\sigma_i^2 - (M\omega/\sqrt{12})^2}{L^2}
\label{eqn:correction}$$
where $M$ is the magnification ratio defined as $M=(L_g+L)/L_g$, with $L_g$ the distance from the electron source to the mask. The factor $1/\sqrt{12}$ is introduced to provide the rms value of the flat-top distribution created by the holes.
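Under the stated assumptions, Eq. \[eqn:correction\] translates directly into code (the variable names and numerical values are illustrative):

```python
import math

def corrected_divergence(sigma_meas, omega, L, L_g):
    """Uncorrelated rms divergence of a beam-let, per Eq. [eqn:correction]:
    sigma_meas is the measured rms beam-let width on the screen, omega
    the hole diameter, L the mask-to-screen distance and L_g the
    source-to-mask distance."""
    M = (L_g + L) / L_g                     # magnification ratio
    hole_rms = M * omega / math.sqrt(12.0)  # rms of the flat-top hole image
    return math.sqrt(sigma_meas ** 2 - hole_rms ** 2) / L
```

Since the hole-image contribution is subtracted in quadrature, the corrected divergence is always smaller than the naive estimate $\sigma_i/L$.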
Performance and Limitations of Emittance Measurement System
===========================================================
The mask geometries suggested by the analytical calculations were tested with the tracking algorithm through a rigorous iterative process, as a function of the hole diameter and centre-to-centre hole distance on the mask, as well as the distance between the mask and the screen. The mask geometries producing the closest results to those retrieved from the PARMELA simulations are shown in bold in Table \[tbl:mask\_design\].
In the weakly focused region (under-focused envelope), the reconstructed emittance values are overestimated by 40$\%$ on average. A typical example of an under-focused distribution can be seen in Fig. \[fig:results\]-(a). When under-focused, the projected emittance under the space charge force is not compensated (the phase space angles of the slices are not aligned). This leads to emittance growth and halo formation, creating high intensity tails in the beam projection on the screen [@Wangler]. Furthermore, in the algorithm given through Eq. \[eqn:xsquare\] to Eq. \[eqn:xxprime\], rms values are calculated as a weighted sum over the relative intensities of the beam-lets. This is appropriate for the general case where the beam is Gaussian and the outer particles contribute less to the rms emittance than the particles occupying the core of the beam. For non-Gaussian beams, however, the tail particles contribute to the rms emittance as much as the core particles, causing an overestimation of the reconstructed emittance with this method. Although the over-focused distributions considered in this study preserve their Gaussian form, by the same token they are prone to the same problem, depending on the level of over-focusing.
This further investigation shows that the pepper-pot method is strongly dependent on the incident beam distribution, and it works best for beams focused on or in the vicinity of the mask, i.e., beams incident on the mask with a quasi-laminar envelope. Indications of a similar behaviour were observed during the commissioning of the PHIN photo-injector, which utilised a multi-slit set-up [@CLIC]; however, the behaviour was not clearly noticeable as the measurements were not sufficiently extended outside the matched region.
\[tbl:mask\_design\]
\[tbl:mask\_design2\]
Among all the geometries considered, a particular geometry was found to work for a particular envelope; no single design could measure over a large range of focusing conditions.
Examples of the resulting distributions on the screen are presented in Fig. \[fig:tracking\]-(a), (b) and (c). The corresponding projections are shown in Fig. \[fig:tracking\]-(d), (e) and (f) with a Gaussian curve fit to each beam-let in order to determine their mean position and widths. Emittance values are calculated using these moments and relative intensities of the beam-lets. The phase space distributions are reconstructed using divergences calculated from beam-let position offsets from the hole positions and intensity distributions of the beam-lets. These are presented in Fig. \[fig:tracking\]-(g), (h) and (i).
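The moment-based reconstruction described above can be condensed into a short script. The following is a minimal sketch of the standard weighted-sum rms emittance formulas for pepper-pot data (in the spirit of Zhang's treatment); the function name and argument layout are ours, and the numbers in the example are illustrative, not taken from the measurements in this paper.

```python
import math

def pepperpot_emittance(x_hole, x_cent, sigma, n, L):
    """rms emittance from pepper-pot beam-let moments.

    x_hole -- hole positions on the mask [m]
    x_cent -- beam-let centroid positions on the screen [m]
    sigma  -- beam-let rms widths on the screen [m]
    n      -- relative beam-let intensities
    L      -- mask-to-screen drift distance [m]
    """
    N = sum(n)
    xp = [(X - x) / L for X, x in zip(x_cent, x_hole)]  # beam-let mean divergences
    sp = [s / L for s in sigma]                         # beam-let divergence spreads
    xm = sum(w * x for w, x in zip(n, x_hole)) / N      # weighted mean position
    xpm = sum(w * v for w, v in zip(n, xp)) / N         # weighted mean divergence
    x2 = sum(w * (x - xm) ** 2 for w, x in zip(n, x_hole)) / N
    xp2 = sum(w * (s * s + (v - xpm) ** 2) for w, s, v in zip(n, sp, xp)) / N
    xxp = sum(w * x * v for w, x, v in zip(n, x_hole, xp)) / N - xm * xpm
    # guard against tiny negative round-off before the square root
    return math.sqrt(max(x2 * xp2 - xxp * xxp, 0.0))

# example: three beam-lets, magnification 2, equal intensities
eps = pepperpot_emittance([-1e-3, 0.0, 1e-3], [-2e-3, 0.0, 2e-3],
                          [1e-4, 1e-4, 1e-4], [1.0, 1.0, 1.0], 1.0)
print(eps)  # ~8.2e-8 m rad
```

For a perfectly laminar pattern (zero beam-let width, centroids at the magnified hole positions) the correlation term cancels the second moments exactly and the reconstructed emittance vanishes, as expected; finite beam-let widths contribute through the $\sigma/L$ divergence-spread term.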
The analytical criteria were reapplied for these optimised geometries, as summarised in Table \[tbl:mask\_design2\]. Although hole diameters as small as 100$\,\mu$m were considered in the initial geometries, the transmission was only a few percent, making the analysis difficult; the final solutions therefore converged towards larger hole diameters. The choice of larger holes led to an increase in $R^{\prime}$ values. This can be compensated by a trade-off between $d$ and $L$. Increasing the centre-to-centre distance between the beam-lets is limited by the number of beam-lets generated, especially for the focused distributions, where the beam size is smaller. On the other hand, the distance between the mask and the screen should be chosen carefully to minimise overlapping.
Conclusions
===========
Apart from analytical guidelines, a pepper-pot set-up requires extensive iterative numerical optimisation. A MATLAB script was developed to track the particle distributions, retrieved from PARMELA, through a mask with a given geometry and up to a downstream screen. The script then uses the projections of the distributions at the plane where the screen is located to calculate the transverse emittance and reconstruct the phase space.
During the tests on beam distributions under different solenoid fields, it was observed that for under-focused (and potentially over-focused) envelopes, the pepper-pot algorithm is unable to reproduce the results retrieved from PARMELA. This is due to the contribution from the tails of the projections when the reconstruction algorithm is applied to non-Gaussian beams. This is the case when the beam emittance is not compensated, allowing emittance growth due to space charge and associated halo formation, which then creates high intensity tails in the projection of the transverse plane. Therefore, the method was found to be effective for Gaussian beams under solenoid focusing close to the conditions allowing emittance compensation or matching to the linac. Furthermore, a design allowing for more than two beam-lets (a restriction especially at the focal point) is essential for statistical significance.
After demonstrating the sensitivity of the pepper-pot measurement system design to the incoming beam parameters, we suggest, for versatile operation, the concept of a multi-region mask housing different geometries for different operational conditions.
As a figure of merit for the pepper-pot technique, an expression for the systematic error is presented using the rms emittance formula and terms resulting from the pepper-pot algorithm. Once this value is smaller than the statistical deviations in the measurements, the mask design is deemed reliable for measurement precision within the given beam stability.
Acknowledgment
==============
This work was supported by the Cockcroft Institute Core Grant and the STFC under the project reference number ST/N00163X/1. The authors would like to thank A. Bechold and A. Bien (NTG); D. Potkins, M. Dehnel and M. Melanson (D-PACE); and A. Palmer and A. Woolten (STFC-UKRI) for discussions on pepper-pot mask machining.
References
==========
[99]{}
M. Zhang, FERMILAB-TM-1988 (1996).
S. G. Anderson, J. B. Rosenzweig, G. P. LeSage, and J. K. Crane, Phys. Rev. ST Accel. Beams 5, 014201 (2002).
R. Ganter, B. Beutner, S. Binder, H. H. Braun, T. Garvey, C. Gough, C. Hauri, R. Ischebeck, S. Ivkovic, F. Le Pimpec, K. Li, M. L. Paraliev, M. Pedrozzi, T. Schietinger, B. Steffen, A. Trisorio, and A. Wrulich, Phys. Rev. ST Accel. Beams 13, 093502 (2010).
A. Cianchi, L. Catani, M. Boscolo, M. Castellano, A. Clozza, G. Di Pirro, M. Ferrario, D. Filippetto, and V. Fusco, Proceedings of EPAC 2004, p. 2622 (2004).
J. Rosenzweig, N. Barov, S. Hartman, M. Hogan, S. Park, C. Pellegrini, G. Travish, R. Zhang, P. Davis, G. Hairapetian, and C. Joshi, Nucl. Instrum. Methods A 341, 379-385 (1994).
R. Thurman-Keup, A. S. Johnson, A. H. Lumpkin, and J. Ruan, FERMILAB-CONF-11-137-AD (2011).
O. Mete, E. Chevallay, M. Csatari, A. Dabrowski, S. Doebert, D. Egger, V. Fedosseev, M. Olvegaard, and M. Petrarca, Phys. Rev. ST Accel. Beams 15, 022803 (2012).
K. Pepitone, S. Doebert, R. Apsimon, J. Bauche, M. Bernardini, C. Bracco, G. Burt, A. Chauchet, E. Chevallay, N. Chritin, S. Curt, H. Damerau, M. Dayyani Kelisani, C. Delory, V. Fedosseev, F. Friebel, F. Galleazzi, I. Gorgisyan, E. Gschwendtner, J. Hansen, L. Jensen, F. Keeble, L. Maricalva, S. Mazzoni, G. McMonagle, O. Mete, A. Pardons, C. Pasquino, V. Verzilov, J. S. Schmidt, L. Soby, B. Williamson, E. Yamakawa, S. Pitman, and J. Mitchell, Nucl. Instrum. Methods A 909, 102-106 (2018).
P. A. McIntosh, D. Angal-Kalinin, N. Bliss, R. Buckley, S. Buckley, J. A. Clarke, P. Corlett, G. Cox, G. P. Diakun, B. Fell, A. Gleeson, A. Goulden, C. Hill, F. Jackson, S. P. Jamison, J. Jones, T. Jones, A. Kalinin, B. P. M. Liggins, L. Ma, B. Martlew, J. McKenzie, K. Middleman, B. Militsyn, A. Moss, T. C. Q. Noakes, K. Robertson, M. Roper, Y. M. Saveliev, B. Shepherd, R. J. Smith, S. L. Smith, T. Thakker, and A. Wheelhouse, Proceedings of IPAC 2013, THPWA036 (2013).
S. M. Wiggins, M. P. Anania, G. H. Welsh, E. Brunetti, S. Cipiccia, P. A. Grant, D. Reboredo, G. G. Manahan, D. W. Grant, and D. A. Jaroszynski, Proc. SPIE 9509, 95090K-1 (2019).
J. G. Power, M. E. Conde, W. Gai, F. Gao, R. Konecny, W. Liu, and Z. Yusof, Proceedings of PAC 2007, FRPMN117 (2007).
L. Young and J. Billen, Proceedings of PAC 2003, p. 3521 (2003).
R. Roux, G. Bienvenu, C. Prevost, and B. Mercier, CARE Note-2004-034-PHIN (2004).
O. Apsimon, R. Apsimon, G. Burt, G. Xia, and S. Doebert, Proceedings of LINAC 2016, TUPRC016 (2016).
L. Serafini and J. B. Rosenzweig, Phys. Rev. E 55, 7565 (1997).
K. Floettmann, Phys. Rev. Accel. Beams 20, 013401 (2017).
P. Piot, J. Song, R. Li, G. A. Krafft, D. Kehne, K. Jorden, E. Feldl, and J.-C. Denard, Proceedings of PAC 1997, p. 2204 (1997).
https://ch.mathworks.com/products/matlab.html.
B. Cheymol, E. Bravin, C. Dutriat, and T. Lefevre, Proceedings of DIPAC 2009, TUPD42 (2009).
C. M. Sears et al., Phys. Rev. ST Accel. Beams 13, 092803 (2010).
T. P. Wangler, AIP Conference Proceedings 253, 21 (1992).
[^1]: Corresponding author, email: oznur.mete@manchester.ac.uk
[^2]: Accepted by Nuclear Inst. and Methods in Physics Research, A for publication. DOI:10.1016/j.nima.2019.162485
---
abstract: 'Coarsening kinetics is usually described using a linear gradient approximation for the underlying interface migration (IM) rates, wherein the migration fluxes at the interfaces vary linearly with the driving force. Recent experimental studies have shown that nanocrystalline interface microstructures are unexpectedly stable, coarsening more slowly than conventional parabolic kinetics would predict. Here, we show that during early stage coarsening of these microstructures, IM rates can develop a non-linear dependence on the driving force, the mean interface curvature. We derive the modified mean field law for coarsening kinetics. Molecular dynamics simulations of individual grain boundaries reveal a sub-linear curvature dependence of IM rates, suggesting an intrinsic origin for the slow coarsening kinetics observed in polycrystalline metals.'
author:
- 'M. Upmanyu'
- 'P. A. Martin'
- 'A. D. Rollett'
title: 'Effect of non-linear interface kinetics on coarsening phenomena'
---
Material properties of most inorganic polycrystals depend on the underlying interfacial microstructure, in particular the grain size. The positive free energy of the interfaces provides a universal driving force for grain coarsening such that a curved interface moves towards its center of curvature in order to decrease the total system energy. A quantitative understanding of the curvature dependence of the interface migration (IM) rate is central to much of material processing as it determines the overall microstructure coarsening kinetics.
In systems where interface motion is activated, the atomic-exchange across the interface determines the IM rate, $v$. Absolute reaction rate theory based on such atomic hopping events yields the dependence of the IM rate on the driving force $p$ [@gbm:Turnbull:1951], $$\label{eq:subLinearComplete}
v = bN\nu_o e^{-\beta \Delta G_A} \left(1 - e^{-\beta\Delta g}\right),$$ where $b$ is the interface displacement per event, $N$ is the number of event sites, $\nu_o$ is a jump frequency characteristic of the underlying lattice and $\Delta G_A = \Delta Q_A - T\Delta S_A$ is the activation barrier for each hopping event. System free energy change per event $\Delta g$ is the energy provided by the driving pressure $p$ to effect the atom-exchange, i.e. $\Delta g = p \omega_m$, where $\omega_m$ is the activation volume associated with each event. At high temperatures, $\beta p\omega_m \ll 1$ and Eq. \[eq:subLinearComplete\] can be linearized to express the IM rate in terms of the driving force, $v \approx M p$. The constant of proportionality $M$ is the interface mobility, which is predominantly Arrhenius with temperature $$\label{eq:linearRelations}
M = M_o e^{-\beta\Delta Q_A}\;\text{and}\;M_o = \beta bN\nu_o\omega_m\;e^{\beta\Delta S_A}.$$ During coarsening, the capillary driving force on each interface segment is the product of the interface stiffness $\Gamma$ and its mean curvature $\kappa$, or the weighted mean curvature $\kappa_\gamma = \Gamma\kappa$ [@cg:Taylor:1992b]. The IM rate now increases with the mean interface curvature [@cg:Taylor:1992; @book:SuttonBalluffi:1995; @gbm:Upmanyu:1998a]. The curvature dependence implies that material systems with ultrafine/nanocrystalline (nc) grain sizes possess an inherently high driving force for coarsening. This is often an unwanted outcome during thermal annealing, as the qualitatively superior thermomechanical and transport properties of these microstructures are offset by their instability with respect to coarsening.
However, a poorly understood feature of nanocrystalline microstructures is that the coarsening is anomalously slower than expected, almost linear in time, before it transitions to the conventional parabolic growth [@nano:Bonetti:1999; @nano:Krill:2001]. This behavior has been attributed to grain size dependent extrinsic effects on interface motion, such as enhanced vacancy and triple junction drag [@nano:Krill:2001; @gbm:Estrin:2000; @gbm:Upmanyu:1998b; @tjm:Upmanyu:1999], solute segregation [@imdrag:Michels:1999] and particle incidence. In this article, we present an alternative framework for coarsening behavior based on intrinsically non-linear IM rates, and discuss the implications of this behavior for the stability of nanocrystalline microstructures.
The rationale is the observation that capillary driving pressures during early stages of coarsening in nanocrystalline microstructures are large enough such that the linearized rate theory begins to break down. This is evident from Table \[tab:criticalGrainSizes\], a list of the critical driving forces $p_{cr}$ in various polycrystalline metals at which $\beta p_{cr}\omega_m = 0.1$. The definition corresponds to a processing temperature $T=0.7\,T_m$ and assumes single-atom hops across the interface, $\omega_m=\Omega$ ($T_m$ is the bulk melting point and $\Omega$ is the atomic volume). Since the atomic-scale mechanism can in general involve more than one atom, as in correlated or military atom transfers across the interface [@book:SuttonBalluffi:1995; @gbm:Upmanyu:2004], $\omega_m\ge\Omega$ and the reported values of $p_{cr}$ are upper bounds.
\[thdp\]
  [**Metal system**]{}   [**Volume**]{} $\omega_m$ $\rm {(\AA)}^3$        $p_{cr}$ [**(MPa)**]{}   $\bar{R}_{cr}$ [**(nm)**]{}
  ---------------------- ------------------------------------------- ------------------------ -----------------------------
Aluminum 16.6 54.4 25.1
Copper 11.8 87.7 51.5
Nickel 10.9 153.2 18.3
Lead 30.3 19.1 146.4
Gold 17.1 76.1 36.8
Silver 16.9 70.8 39.6
: Critical driving forces for interface migration and critical grain radii in polycrystalline metals at which linearized reaction rate theory breaks down. The activation volume for IM is assumed to be the atomic volume, $\omega_m=\Omega$. \[tab:criticalGrainSizes\]
The critical driving force can also be used to define a critical grain size, $\bar{R}_{cr}$, as the average grain size in polycrystals is a measure of the capillary driving force per interface segment. The mean field relation takes the form $\kappa_\gamma=\Gamma B/\bar{R}$, where the topological parameter $B$ captures the effect of the interface network [@gg:Mullins:1956]. Table 1 also lists the lower bounds for the associated critical grain sizes below which the non-linearities become important. The interface stiffness is assumed to be $\Gamma=0.5\,$J/m$^2$, while the topological parameter is based on a Potts model grain growth simulation-based comparison between the shrinking of an embedded sphere ($B=4\pi$) and a 3D polycrystal, $B\sim2.8$ \[ADR, to be published\]. These critical grain sizes lie well within the range of those found in nanocrystalline microstructures in these metals, emphasizing the role of non-linear interface migration kinetics in determining the overall coarsening kinetics and therefore their thermo-mechanical stability.
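As a quick consistency check on the definitions above, the aluminum row of Table 1 can be reproduced in a few lines. The melting point and atomic volume below are standard handbook values (assumed here, and possibly rounded differently in the original table):

```python
k_B = 1.380649e-23          # Boltzmann constant [J/K]
T_m = 933.5                 # melting point of Al [K] (handbook value)
T = 0.7 * T_m               # processing temperature used in the text
omega_m = 16.6e-30          # activation volume = atomic volume of Al [m^3]

# critical driving force defined by beta * p_cr * omega_m = 0.1
p_cr = 0.1 * k_B * T / omega_m     # -> ~54 MPa, cf. 54.4 MPa in Table 1

# critical grain size from the mean field relation p = Gamma * B / R
Gamma, B = 0.5, 2.8                # stiffness [J/m^2] and topological factor
R_cr = Gamma * B / p_cr            # -> ~26 nm, cf. ~25 nm in Table 1

print(p_cr / 1e6, R_cr * 1e9)
```

The same two expressions give the order of magnitude for the other rows, given each metal's melting point and atomic volume.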
The extension of the reaction rate framework to non-linear IM is not straightforward. The activation barrier is now comparable to the driving force, the bias in the energy landscape (inset, Fig. \[fig:subsuper-linearMigrationRateVsRbya\]). As a result, the nature of the bias becomes important. While several minimum energy paths (MEP) are possible depending on the effect of the driving force on the initial, activated and final states, we limit ourselves to two extreme scenarios: (a) the forward activation barrier is unaltered while the final state is lowered by the applied bias, or (b) the bias is distributed equally between the initial and final states such that the forward barrier is altered. In general, the energy of the activated state in the resultant MEP is also changed by the driving force. Making the simplifying assumption that this energy change is small, Eq. \[eq:subLinearComplete\] can be generalized as $$\label{eq:subSuperLinearComplete}
v= A \left[e^{\xi \beta p\omega_m} - e^{(\xi-1)\beta p\omega_m}\right],$$ where $A=\beta M/\omega_m$ [^1]. Equation \[eq:subSuperLinearComplete\] is similar in form to the Butler-Volmer equation in electrokinetics for transfer of charged species [@book:BardFaulkner:1980].
The parameter $\xi$ captures the combined effect of the degree of symmetry in the energy landscape imposed by the driving force, and the change in energy of the activated state. The two extreme cases correspond to $\xi=0$ and $\xi=0.5$. The $\xi=0$ scenario can arise if the driving force is distributed asymmetrically such that its effect is restricted to the final state. The scenarios $0<\xi\le0.5$ result in a decrease in effective forward barrier with increasing driving force. For $\xi=0.5$, the bias is distributed symmetrically about initial and final states such that the forward activation barrier is $\Delta \tilde{G}_A=\Delta G_A - (\Delta g)/2$ (see inset Fig. \[fig:subsuper-linearMigrationRateVsRbya\]). Changes in the activated state can be readily absorbed into an equation of this form. For example, if the MEP for the $\xi=0.5$ scenario is modified such that the activated state also increases by an equal amount $(\Delta g/2)$, we recover the $\xi=0$ scenario.
\[htb\] ![\[fig:subsuper-linearMigrationRateVsRbya\] (color online). Normalized migration rate as a function of reduced grain size $\bar{r}=\bar{R}/a$, for linear (circles), sub-linear (Eq. \[eq:GGEquationComplete\] with $\xi=0$, squares) and super-linear (Eq. \[eq:GGEquationComplete\] with $\xi=0.5$, diamonds) driving force dependence of IM rates. (inset) Schematic of the energy landscapes in the absence and presence of a driving force, for the two values of $\xi$.](fig1.eps "fig:"){width="\columnwidth"}
Limiting our analysis to the two scenarios, the IM rates $v(\xi)$ can be expressed as $$%\label{eq:subSuperLinearExpSinh}
v(0) = A (1 - e^{-\beta p\omega_m})\;\textrm{and}\; v(0.5) = 2A\sinh(\beta p\omega_m/2).$$ In the limit $\beta p\omega_m\ll1$, $v$ becomes independent of $\xi$, justifying the linearization (Eq. \[eq:subLinearComplete\]) at small driving forces. In the mean field limit, the IM rate depends on the grain size, $v=d\bar{R}/dt$, and we arrive at the relation between the form of the energy landscape and the grain size evolution, $$\begin{aligned}
\label{eq:GGEquationComplete}
a\frac{d\bar{r}}{dt} &= A \left[e^{\xi \beta p\omega_m}- e^{(\xi-1)\beta p\omega_m}\right].
%&= A\left [ \frac{1}{\bar{r}} + \sum_{k=2}^\infty\frac{1}{k!}\frac{\xi^k-(\xi-1)^k}{\bar{r}^k} \right ].
% & = A \left [ \exp\left(\frac{\xi_p}{\bar{r}}\right) - \exp\left(-\frac{(1-\xi_p)}{\bar{r}}\right) \right ]\nonumber\\ \end{aligned}$$ Here $\bar{r}=\bar{R}/a$ is the dimensionless grain size. The normalization factor $a=B\Gamma\beta\omega_m$ is a fundamental microstructural length scale that sets the critical grain size, $\bar{R}_{cr}=10\,a$ (since $\beta p\omega_m=a/\bar{R}$). Figure \[fig:subsuper-linearMigrationRateVsRbya\] shows the IM rates predicted by Eq. \[eq:GGEquationComplete\] as a function of reduced grain size for the two scenarios. The linear approximation is also plotted for comparison. At small grain sizes, $\bar{r}\rightarrow1$, the variation of the IM rate with the driving force is significantly non-linear. The linearized relation is an overestimate (sub-linear IM rate) or an underestimate (super-linear IM rate) depending on the value of $\xi$, underscoring the role of the nature of the bias in the energy landscape.
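The ordering of the three curves in Fig. \[fig:subsuper-linearMigrationRateVsRbya\] follows directly from the rate expressions; the short check below (in units where $A=1$ and $x=\beta p\omega_m$) confirms that the sub-linear rate stays below, and the super-linear rate above, the linearized rate for any finite driving force, with all three coinciding as $x\rightarrow0$.

```python
import math

def v_lin(x): return x                        # linearized rate, v = M p
def v_sub(x): return 1.0 - math.exp(-x)       # xi = 0, sub-linear
def v_sup(x): return 2.0 * math.sinh(x / 2)   # xi = 0.5, super-linear

for x in (0.01, 0.1, 1.0, 3.0):
    assert v_sub(x) < v_lin(x) < v_sup(x)     # ordering at finite driving force

# all three agree to first order at small driving force
x = 1e-4
assert abs(v_sub(x) - x) < 1e-7 and abs(v_sup(x) - x) < 1e-7
```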
The non-linearities in IM rates will modify the overall coarsening kinetics. For an isotropic interfacial microstructure with an initial grain size $\bar{r}_i$, the mean field governing equation for the final grain size $r_{f(\xi)}$ is $$\frac{At}{a} = \int_{\bar{r}_i}^{\bar{r}_{f(\xi)}} \frac{d\bar{r}} {\left(e\,^{\xi/\bar{r}}- e^{(\xi-1)/\bar{r}}\right)}.$$ Putting $y=1/\bar{r}$, with $Y_{f(\xi)}=(1/\bar{r}_{f(\xi)})$ and $Y_i=(1/\bar{r}_i)$, $$\begin{aligned}
\label{eq:GGIntegralChangeOfVariable}
\frac{At}{a}=-\int_{Y_i}^{Y_{f(\xi)}} \frac{y\,e^{(1-\xi) y}}{e^{y}-1}\,\frac{dy}{y^3}.
%=-\int_{Y_i}^{Y_{f(\xi_p)}} \frac{dy}{y^2\{e^{\xi_p y}-e^{(\xi_p-1)y}\}}
%=-\int_{Y_i}^{Y_{f(\xi_p)}} \frac{e^{-\xi_p y}\,dy}{y^2(1-e^{-y})}\nonumber\\
% Only last equation shown, intermediate steps commented out.\end{aligned}$$ This form is convenient for the following standard Maclaurin expansion (Ref. [@book:AbramowitzStegun:1965], p. 804): $$\frac{y\,e^{\lambda y}}{e^y-1}=\sum_{m=0}^\infty B_m(\lambda)\,\frac{y^m}{m!}\;.$$ Here, $B_m(\lambda)$ are the *Bernoulli polynomials*. They are tabulated (Ref. [@book:AbramowitzStegun:1965], p. 809); for example, $$\begin{aligned}
B_0(\lambda)=1,\; B_1(\lambda)=-\tfrac{1}{2}+\lambda\;\mbox{and}\;B_2(\lambda)=\tfrac{1}{6}-\lambda+\lambda^2\;.\nonumber\end{aligned}$$ Also, $B_m(1-\lambda)=(-1)^mB_m(\lambda)$. Thus, the integral in Eq. \[eq:GGIntegralChangeOfVariable\] can be written as $$\sum_{m=0}^\infty \frac{(-1)^mB_m(\xi)}{m!}\int_{Y_{f(\xi)}}^{Y_i} y^{m-3}\,dy\;,$$ and the grain growth equation becomes
$$\frac{At}{a}=\left [\frac{1}{2y^2} - \frac{B_1(\xi)}{y} - \frac{B_2(\xi)}{2}\,\ln{y}-\sum_{m=3}^\infty C_m(\xi)\,y^{m-2}\right ]_{Y_i}^{Y_{f(\xi)}},\;\textrm{with}\;C_m(\xi)=\frac{(-1)^m B_m(\xi)}{(m-2)m!}\;.$$
Remembering $\bar{r}$=$(1/y)$, we arrive at the coarsening law $$\begin{aligned}
\label{eq:GGEquationExactSolution}
\frac{2At}{a} = & \,h\left[\bar{r}_{f(\xi)}\right] - h\left[\bar{r}_i\right]\;,\textrm{with}\\
h(\bar{r}) = & \,\bar{r}^2 - 2B_1(\xi)\bar{r} + B_2(\xi)\,\ln{\bar{r}}-2 \sum_{m=3}^\infty \frac{C_m(\xi)}{\bar{r}^{m-2}}.\nonumber\end{aligned}$$
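The closed-form law in Eq. \[eq:GGEquationExactSolution\] can be checked numerically: building the Bernoulli polynomials from the standard recurrence and truncating the tail sum at modest order reproduces direct quadrature of the governing integral. The truncation order and quadrature grid below are our own choices; the tail converges rapidly for $\bar{r}\ge1$.

```python
import math
from fractions import Fraction

def bernoulli_numbers(nmax):
    # B_0..B_nmax from sum_{k=0}^{m-1} C(m+1,k) B_k = -(m+1) B_m  (convention B_1 = -1/2)
    B = [Fraction(1)]
    for m in range(1, nmax + 1):
        B.append(-sum(Fraction(math.comb(m + 1, k)) * B[k] for k in range(m)) / (m + 1))
    return B

BNUM = bernoulli_numbers(14)

def bernoulli_poly(m, x):
    # Bernoulli polynomial B_m(x) = sum_k C(m,k) B_k x^(m-k)
    return sum(math.comb(m, k) * float(BNUM[k]) * x ** (m - k) for k in range(m + 1))

def h(r, xi, mmax=14):
    # h(r) = r^2 - 2 B_1(xi) r + B_2(xi) ln r - 2 sum_{m>=3} C_m(xi) / r^(m-2)
    tail = sum((-1) ** m * bernoulli_poly(m, xi) / ((m - 2) * math.factorial(m) * r ** (m - 2))
               for m in range(3, mmax + 1))
    return r * r - 2 * bernoulli_poly(1, xi) * r + bernoulli_poly(2, xi) * math.log(r) - 2 * tail

def elapsed_time(ri, rf, xi, npts=4000):
    # trapezoidal quadrature of  A t / a = int dr / (e^(xi/r) - e^((xi-1)/r))
    step = (rf - ri) / npts
    f = lambda r: 1.0 / (math.exp(xi / r) - math.exp((xi - 1) / r))
    return step * (0.5 * (f(ri) + f(rf)) + sum(f(ri + i * step) for i in range(1, npts)))

# closed form vs. direct quadrature, for both energy-landscape scenarios
for xi in (0.0, 0.5):
    assert abs(2 * elapsed_time(1.0, 3.0, xi) - (h(3.0, xi) - h(1.0, xi))) < 1e-4
```

With $A/a=1$, $\bar{r}_i=1$ and $\bar{r}_f=3$, the two sides agree to better than $10^{-4}$ for both $\xi=0$ and $\xi=0.5$.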
\[h!tb\] ![\[fig:subsuper-linearGGKinetics\] (color online). Reduced grain size $\bar{r}_f$ vs. normalized time $2A(t/a)$ predicted by Eq. \[eq:GGEquationExactSolution\] for initial grain size $\bar{r}_i=1$. Circles, squares and diamonds represent linear, sub-linear ($\xi = 0$) and super-linear ($\xi = 0.5$) IM rates-based coarsening, respectively. Inset $A$ shows an enlarged view for $1 < \bar{r}_f < 2$. Inset $B$ shows the variation for $\bar{r}_i = 0.1$ and $0.1 < \bar{r}_f < 1$.](fig2.eps "fig:"){width="\columnwidth"}
At high temperatures and large grain sizes, $\bar{r}\gg1$, Eq. \[eq:GGEquationComplete\] can be linearized and we recover a linear relation between IM rates and the driving force, $d\bar{r}/dt \approx A/(a\bar{r})$. The resultant governing equation for coarsening kinetics is, $$\begin{aligned}
\label{eq:parabolicGrowthLaw}
\frac{2At}{a} = 2\int_{\bar{r}_i}^{\bar{r}_{f(p)}}\bar{r}\, d\bar{r} = \bar{r}_{f(p)}^2 - \bar{r}_i^2,\end{aligned}$$ where $\bar{r}_{f(p)}$ is the final grain size. As expected, when $\bar{r}_{f(p)}\gg\bar{r}_i$, $\bar{r}_{f(p)} \propto \sqrt{t}$. Combining Eqs. \[eq:GGEquationExactSolution\] and \[eq:parabolicGrowthLaw\] yields the difference in grain size due to non-linear IM rates, $$\begin{aligned}
&\bar{r}_{f(0)}^2 - \bar{r}_{f(p)}^2 \approx - \left\{ \left[\bar{r}_{f(0)}-\bar{r}_i \right] + \frac{1}{6}\ln\left[\frac{\bar{r}_{f(0)}}{\bar{r}_i}\right]\right\},\;\mbox{and}\nonumber\\
&\bar{r}_{f(0.5)}^2 - \bar{r}_{f(p)}^2 \approx \frac{1}{12}\ln\left[\frac{\bar{r}_{f(0.5)}}{\bar{r}_i}\right]\nonumber.\end{aligned}$$
Coarsening kinetics described by Eq. \[eq:GGEquationExactSolution\] is shown in Fig. \[fig:subsuper-linearGGKinetics\], a plot of $\bar{r}_{f(0)}$ and $\bar{r}_{f(0.5)}$ against $2A(t/a) = h[\bar{r}_{f(\xi)}] - h[1]$, for $\bar{r}_i = 1$. The contribution of $O(\bar{r}^{-2})$ and higher terms is negligible and is ignored. The grain size predicted by parabolic coarsening kinetics due to linear IM rates, $\bar{r}_{f(p)}$, is also shown for comparison. For sub-linear IM rates ($\xi = 0$), the linear term dominates during early stage coarsening. The negative deviation from classical parabolic coarsening kinetics increases linearly with grain size. We approach parabolic coarsening kinetics for $\bar{r}_{f(0)}\gg1$, yet the final grain size is smaller compared to classical parabolic coarsening: the coarsening is suppressed. The effect is exaggerated at smaller initial grain sizes (inset $B$ in Fig. \[fig:subsuper-linearGGKinetics\]); the coarsening kinetics is increasingly linear ($\bar{r}_{f(0)} \le 1$). For super-linear IM rates ($\xi = 0.5$), the positive deviation from parabolic coarsening increases logarithmically and is much slower; the increase is substantial only over decades of increase in grain size. The deviation is enhanced for $\bar{r}_i<1$, as the IM rates are super-linear over a larger range of grain sizes. This can be seen in inset $B$ of Fig. \[fig:subsuper-linearGGKinetics\], for $\bar{r}_i=0.1$.
Our analysis shows that a fundamental understanding of overdriven interface motion is critical for predicting coarsening kinetics in nanocrystalline interfacial microstructures. Recent molecular dynamics (MD) simulations of flat bicrystals in pure Al have been performed at small and large driving forces, offering a basis for understanding the coarsening kinetics of grain boundary microstructures. The mobility of a flat $\theta=38.2^\circ$ $<$$111$$>$ tilt misorientation grain boundary has been extracted in the zero driving force limit, and also under the influence of a bulk body force. Both studies were performed using an embedded-atom-method (EAM) framework for the inter-atomic potentials, justifying the comparison. The former is based on the random walk of the mean grain boundary position due to the uncorrelated thermal noise in the system [@gbm:TrauttUpmanyuKarma:2006] and was extracted at a temperature $T=750\,$K; the latter is based on a synthetic driving force due to an orientation dependent bulk energy term at $T=800\,$K [@gbm:JanssensHolm:2006]. The mobility of the synthetically driven grain boundary was extracted using a driving force of $0.025\,$eV/atom. Using a conservative estimate of the activation volume, $\omega_m=\Omega_{Al}$, the driving force is still well past our critical value, $\beta p\omega_m=0.4$. Furthermore, the driving force for this boundary is of the order of the activation energy for migration of this boundary ($\sim0.02\,$eV), extracted in the zero driving force limit \[ZTT and MU, to be published\].
Comparison of the absolute grain boundary mobilities extracted using these two techniques reveals that the zero driving force limit mobility is faster by almost an order of magnitude ($4.4\times10^{-7}$ vs $6.0\times10^{-8}$ m$^4$J$^{-1}$s$^{-1}$), even though it was extracted at a slightly lower temperature. While systematic studies are necessary to understand the rather large difference (changes in energy of the activated state or a large activation volume for migration), the decrease in mobility, and therefore in the IM rates, due to non-linearities suggests that coarsening of nanocrystalline microstructures is intrinsically suppressed, as also observed in experiments. This intrinsic effect must be factored in before ascribing the slow coarsening rates of these microstructures to extrinsic effects on interface motion. Meso-scale grain growth simulations are necessary to understand the combined effect.
MU and ADR acknowledge support from DOE-sponsored Computational Materials Science Network (CMSN) on “[*Dynamics and Cohesion of Materials Interfaces and Confined Phases Under Stress*]{}”, and Office of Naval Research, Award N00014-06-1-0207 titled “[*Particle Strengthened Interfaces*]{}". ADR also acknowledges support under the MRSEC program of the National Science Foundation under Award Number DMR-0079996.
[^1]: Change in the energy of the activated state can also be absorbed in a similar form. If this increase is $\zeta p\omega_m$, the pre-factor $A$ for the IM rate is scaled by $e^{-\zeta \beta p\omega_m}$.
ICRR-report-605-2011-22\
IPMU11-0016\
[**Masahiro Ibe**]{}$^{(a,b)}$, [**Shigeki Matsumoto**]{}$^{(b)}$, and [**Tsutomu T. Yanagida**]{}$^{(b)}$
Introduction
============
The pure gravity mediation model investigated in Ref.[@Ibe:2011aa] is a surprisingly simple model of the supersymmetric Standard Model (SSM). There, the scalar bosons obtain supersymmetry (SUSY) breaking masses from a SUSY breaking sector via tree-level interactions in supergravity[@Nilles:1983ge]. The Higgs mixing mass parameters, $\mu$-term and $B$-term, are also generated via tree-level interactions of supergravity[@Inoue:1991rk]. Due to the tree-level mediation, the scalar boson masses and the Higgs mixing mass parameters are expected to be of the order of the gravitino mass, $m_{3/2}$. The gaugino masses are, on the other hand, generated at the one-loop level[@Giudice:1998xp; @Randall:1998uk; @hep-ph/9205227]. Thus, the pure gravity mediation model predicts a hierarchical spectrum. The greatest benefit of the pure gravity mediation is that the model requires no additional fields to realize the above spectrum. Therefore, the pure gravity mediation model is the bare-bones model of the supersymmetric Standard Model.
The pure gravity mediation model is particularly successful when the gravitino mass is in the range $m_{3/2} =10$–$100\,$TeV. The first advantage is the alleviation of the cosmological gravitino problem[@Pagels:1981ke; @kkm]. In particular, the model does not suffer from the gravitino problem even for a very high reheating temperature after inflation, $T_R \gtrsim \sqrt{3}\times 10^9\,$GeV, which is essential for successful thermal leptogenesis[@leptogenesis]. The second advantage is that the model has a good candidate for dark matter. For the above gravitino mass range, the lightest superparticle (LSP), which is the neutral wino in pure gravity mediation, obtains a mass in the hundreds of GeV to TeV range. The neutral wino in this mass range is a good candidate for weakly interacting particle dark matter[@Gherghetta:1999sw; @hep-ph/9906527]. Moreover, as emphasized in Refs.[@Ibe:2004tg; @Ibe:2011aa], the relic density of the neutral wino can be consistent with the observed value when we assume thermal leptogenesis. Therefore, the pure gravity mediation model goes quite well with thermal leptogenesis. Another important advantage in cosmology is that the model does not suffer from the cosmological Polonyi problem[@Polonyi], since no singlet SUSY breaking fields are required in the model.[^1] In addition to these advantages in cosmology, the problems of flavor-changing neutral currents and CP violation in the SSM are highly ameliorated thanks to the large masses of the squarks and sleptons. The unification of the gauge coupling constants at a very high energy scale also provides a strong motivation for the model.[^2]
In Ref.[@Ibe:2011aa], two of the authors (M.I. and T.T.Y.) discussed the lightest Higgs boson mass of the minimal SSM (MSSM) based on the pure gravity mediation model. There, we showed that the lightest Higgs boson mass is required to be below about $128\,$GeV if we assume thermal leptogenesis. This requirement has been shown to be consistent with the most recent experimental constraints on the Higgs boson mass, $m_h>115.5\,$GeV and $m_h <127\,$GeV at 95% C.L., reported by the ATLAS[@ATLAS] and CMS[@CMS] collaborations. Furthermore, as shown in Ref.[@Ibe:2011aa], the pure gravity mediation model can easily explain the rather heavy Higgs boson mass around $125\,$GeV which is tantalizingly hinted at by the ATLAS and CMS collaborations.
In this letter, we discuss phenomenological, cosmological and astrophysical aspects of the pure gravity mediation model. In particular, we concentrate on the parameter space of the model which is consistent with thermal leptogenesis. As we will show, such a parameter space can be fully tested by observations of cosmic rays, especially by the observation of the anti-proton flux, in the foreseeable future. We also discuss strategies for the discovery and measurement of the gauginos at the Large Hadron Collider (LHC) experiments. There, the distinctive gaugino mass spectrum of the pure gravity mediation model plays an important role.
The organization of the paper is as follows. In section \[sec:pSUGRA\], we review the pure gravity mediation model and discuss the details of the gaugino spectrum. In section \[sec:signals\], we discuss the phenomenological, cosmological and astrophysical aspects of the model. The final section is devoted to our conclusions.
Pure gravity mediation model {#sec:pSUGRA}
============================
Mass spectrum
-------------
In the pure gravity mediation model, the only new ingredient beyond the MSSM fields is a (dynamical) SUSY breaking sector. The scalar bosons obtain soft SUSY breaking squared masses mediated by tree-level interactions in supergravity. With a generic Kähler potential, all the soft squared masses of the scalar bosons are expected to be of the order of the gravitino mass[@Nilles:1983ge]. The soft SUSY breaking scalar trilinear couplings, the $A$-terms, are, on the other hand, expected to be suppressed at tree level in supergravity.
In the pure gravity mediation model, the Higgs mixing $\mu$ and $B$ parameters can also be generated via tree-level interactions in supergravity. In fact, if the Higgs doublets are not charged under any special symmetries, we expect the following Kähler potential, $$\begin{aligned}
K \ni c H_{u}H_{d} +
\frac{c'}{M_{PL}^{2}} X^{\dagger} X H_{u}H_{d} + h.c..\end{aligned}$$ Here, $X$ denotes a chiral SUSY breaking field in a (dynamical) SUSY breaking sector, $M_{PL}$ is the reduced Planck scale, and $c$ and $c'$ are coefficients of ${ O}(1)$. Through the above Kähler potential, the $\mu$- and $B$-parameters are generated as[@Inoue:1991rk; @Ibe:2006de] $$\begin{aligned}
\label{eq:Muterm}
\mu_H &=& c m_{3/2},\\
\label{eq:Bterm}
B \mu_H &=& c m_{3/2}^{2} + c'\frac{|F_X|^2}{M_{PL}^{2}},\end{aligned}$$ where $F_X$ is the vacuum expectation value of the $F$-component of $X$. Therefore, the $\mu$- and $B$ Higgs mixing parameters are also expected to be of ${ O}(m_{3/2})$.[^3]
For the gaugino masses, on the other hand, the tree-level contributions in supergravity are extremely suppressed, since there are no SUSY breaking fields which are singlets under all symmetries. At the one-loop level, however, gaugino masses are generated even without singlet SUSY breaking fields; these are the so-called anomaly mediated contributions[@Giudice:1998xp; @Randall:1998uk]. Besides, the gauginos also obtain contributions from heavy Higgsino threshold effects at the one-loop level. Putting these one-loop contributions together, the gaugino masses at the energy scale of the scalar boson masses, $M_{\rm SUSY} = O(m_{3/2})$, are given by[@Giudice:1998xp; @Gherghetta:1999sw] $$\begin{aligned}
M_{1} &=&
\frac{33}{5} \frac{g_{1}^{2}}{16 \pi^{2}}\left(m_{3/2}+ \frac{1}{11} L\right)\ ,
\label{eq:M1} \\
M_{2} &=&
\frac{g_{2}^{2}}{16 \pi^{2}} \left(m_{3/2} + L\right)\ ,
\label{eq:M2} \\
M_{3} &=& -3 \frac{g_3^2}{16\pi^2} m_{3/2}\ .
\label{eq:M3}\end{aligned}$$ Here, $M_a$ $(a=1,2,3)$ correspond to the gauge groups of the Standard Model, $U(1)_Y$, $SU(2)_L$ and $SU(3)$, respectively. In the above expressions, the terms proportional to $m_{3/2}$ are the anomaly mediated contributions and the terms proportional to $L$ are the Higgsino threshold contributions. The parameter $L$ is given by $$\begin{aligned}
L \equiv \mu_H \sin2\beta
\frac{m_{A}^{2}}{|\mu_H|^{2}-m_{A}^{2}} \ln \frac{|\mu_H|^{2}}{m_{A}^{2}}
\label{eq:L}\ ,\end{aligned}$$ where $m_A$ denotes the mass of the heavy Higgs bosons, and $\tan \beta$ is the ratio of the vacuum expectation values of the up-type Higgs boson $H_u$ and the down-type Higgs boson $H_d$. As we will see in the next subsection, the size of $L$ is expected to be of the order of the gravitino mass in the pure gravity mediation model[@Ibe:2011aa]. Therefore, the wino mass obtains comparable contributions from the anomaly mediated effects and the Higgsino threshold effects. This fact has a great impact on the testability of the pure gravity mediation model at the LHC experiments.
Before closing this section, we should emphasize the differences between the pure gravity mediation model and Split Supersymmetry[@hep-th/0405159; @hep-ph/0406088; @hep-ph/0409232]. In the first place, Split Supersymmetry mainly considers a scalar mass scale much higher than that in the pure gravity mediation model, i.e. $M_{\rm SUSY} \gg 10^{4-6}$GeV. Thus, the anomaly-mediated gaugino masses should be suppressed in Split Supersymmetry, while we rely on the anomaly-mediated gaugino masses in the pure gravity mediation model.[^4] In this respect, the pure gravity mediation model is closer to PeV-scale Supersymmetry[@hep-ph/0411041] and Spread Supersymmetry[@arXiv:1111.4519]. Another important and more practical difference is the size of the $\mu$-term. In Split Supersymmetry, it is assumed that the Higgsinos are also in the TeV range. Thus, the absence of Higgsinos in the TeV range will be a crucial observation to distinguish the pure gravity mediation model from Split Supersymmetry. Furthermore, as we will see below, the large $\mu$-term leads to a peculiar gaugino spectrum in the pure gravity mediation model. Thus, we can also distinguish these models by carefully examining the gaugino mass spectrum.
Details on gaugino masses
-------------------------
![*The anomaly mediated contributions to the gaugino masses (denoted by AMSB). Each line corresponds to a heavy scalar threshold scale of $M_{\rm SUSY} = 10,100$ and $1000$TeV, from bottom to top. In the figure, we have taken $\mu_H = O(m_{3/2})$ and $\tan\beta = O(1)$, although the results are not sensitive to these parameters.* []{data-label="fig:gaugino"}](bino.pdf){width="\linewidth"}
![*The anomaly mediated contributions to the gaugino masses (denoted by AMSB). Each line corresponds to a heavy scalar threshold scale of $M_{\rm SUSY} = 10,100$ and $1000$TeV, from bottom to top. In the figure, we have taken $\mu_H = O(m_{3/2})$ and $\tan\beta = O(1)$, although the results are not sensitive to these parameters.* []{data-label="fig:gaugino"}](wino.pdf){width="1\linewidth"}
![*The anomaly mediated contributions to the gaugino masses (denoted by AMSB). Each line corresponds to a heavy scalar threshold scale of $M_{\rm SUSY} = 10,100$ and $1000$TeV, from bottom to top. In the figure, we have taken $\mu_H = O(m_{3/2})$ and $\tan\beta = O(1)$, although the results are not sensitive to these parameters.* []{data-label="fig:gaugino"}](gluino.pdf){width=".93\linewidth"}
As discussed above, the pure gravity mediation model predicts that the sfermions, Higgsinos and the heavier Higgs bosons of the MSSM have masses of the order of the gravitino mass, $m_{3/2}=10$–$100$TeV. Therefore, the only particles accessible at collider experiments in the foreseeable future are the gauginos. In this subsection, we give a detailed analysis of the gaugino mass spectrum in the pure gravity mediation model.
First, let us consider the anomaly mediated contributions to the gaugino masses. As we see from Eqs.(\[eq:M1\])-(\[eq:M3\]), the wino is the lightest gaugino for $L = 0$. This feature is related to the fact that the $SU(2)_L$ gauge coupling constant is the least scale dependent of the three gauge coupling constants. In Fig.\[fig:gaugino\], we show the anomaly mediated gaugino masses as a function of the gravitino mass. The figure shows that the gaugino masses are roughly given by $$\begin{aligned}
m_{\rm bino} &\simeq& 10^{-2}m_{3/2}\ ,\\
m_{\rm wino} &\simeq& 3\times 10^{-3}m_{3/2}\ ,\\
m_{\rm gluino} &\simeq& (2-3)\times 10^{-2} \, m_{3/2} \ ,\end{aligned}$$ with a small dependence on the heavy scalar threshold scale, $M_{\rm SUSY}$. Thus, for a wino mass of $m_{\rm wino} = 300\,$GeV, for example, the gluino mass is heavier than $2$TeV if the anomaly mediated contributions dominate the gaugino masses.
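The scaling relations above can be cross-checked numerically. The following is a minimal sketch using only the approximate coefficients quoted in the text (not the full one-loop formulas of Eqs.(\[eq:M1\])-(\[eq:M3\])), with the gluino coefficient taken at the midpoint of the quoted range:

```python
# Rough gaugino masses from the approximate anomaly-mediated scaling
# relations quoted above (L = 0); all masses in GeV.
def gaugino_masses(m32):
    """m32: gravitino mass in GeV; returns (bino, wino, gluino) masses."""
    m_bino = 1.0e-2 * m32
    m_wino = 3.0e-3 * m32
    m_gluino = 2.5e-2 * m32   # midpoint of the quoted (2-3)e-2 range
    return m_bino, m_wino, m_gluino

# Example from the text: a 300 GeV wino corresponds to
# m_{3/2} = 300 / 3e-3 = 1e5 GeV = 100 TeV, giving a gluino above 2 TeV.
m32 = 300.0 / 3.0e-3
m_bino, m_wino, m_gluino = gaugino_masses(m32)
```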
Now, let us estimate the typical size of $L$, which parametrizes the Higgsino threshold contributions to the gaugino masses, in the pure gravity mediation model. Recall that successful electroweak symmetry breaking requires that one linear combination of the two Higgs doublets, $ h = \sin\beta H_{u} - \cos\beta H_{d}^{*}$, remains very light. In terms of the Higgs mass parameters, this fine-tuning condition requires $$\begin{aligned}
\label{eq:tuning}
(|\mu_H|^2+m_{H_u}^2) (|\mu_H|^2+m_{H_d}^2) - (B\mu_H)^2 \simeq 0\ ,\end{aligned}$$ while the Higgs mixing angle is related to the Higgs mass parameters by, $$\begin{aligned}
\label{eq:angle}
\sin2\beta = \frac{2 B\mu_H}{m_A^2} \ , \quad (m_A^2 = m_{H_u}^2 + m_{H_d}^2 + 2 |\mu_H|^2)\ .\end{aligned}$$ Here, $m_{H_{u,d}}^2$ denote the soft SUSY breaking squared masses of the two Higgs doublets, $H_u$ and $H_d$. These conditions show that the mixing angle $\beta$ is expected to be of ${O}(1)$, since all the mass parameters of the Higgs sector (apart from one fine-tuned combination) are of the order of the gravitino mass in the pure gravity mediation model.[^5]
![*Typical values of $|L/m_{3/2}|$ for $\tan \beta = 1,3, 10$ and $30$. The unit of the vertical axis is arbitrary. We have distributed $\mu_H$ and $B$ from $m_{3/2}/3$ to $3m_{3/2}$ and required $|m_{H_{u,d}}^2/m_{3/2}^2| < 5$, where $m_{H_{u,d}}^2$ are determined by the electroweak symmetry breaking conditions in Eqs.(\[eq:tuning\]) and (\[eq:angle\]). The ratios of the areas of the histograms roughly represent the relative consistency of each value of $\tan\beta$ in pure gravity mediation.* []{data-label="fig:L"}](HistL.pdf){width=".5\linewidth"}
By putting the typical values $\tan\beta = O(1)$ and Higgs mass parameters of the gravitino mass scale into the definition of $L$ in Eq.(\[eq:L\]), we find that the typical value of $L$ is also of the gravitino mass scale. To see this clearly, we show the typical size of $L$ for $\tan \beta = 1, 3, 10$ and $30$ in Fig.\[fig:L\]. Here, we have assumed that $\mu_H$ and $B$ each range from $m_{3/2}/3$ to $3m_{3/2}$.[^6] The figure shows that $|L/m_{3/2}| \simeq 0.5-2$ for $\tan\beta = O(1)$. Therefore, in the pure gravity mediation model, we expect $L/m_{3/2} = O(1)$, which leads to a comparable contribution to the wino mass from the Higgsino threshold effects (see Eq.(\[eq:M2\])).
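As an illustration, Eq.(\[eq:L\]) can be evaluated directly. The sample values of $\mu_H$, $m_A$ and $\tan\beta$ below are illustrative assumptions, not fits:

```python
import math

def L_param(mu_H, m_A, tan_beta):
    """Higgsino threshold parameter L of Eq. (eq:L), for real positive
    mu_H and m_A (here measured in units of m_{3/2})."""
    sin2b = 2.0 * tan_beta / (1.0 + tan_beta**2)
    return (mu_H * sin2b * m_A**2 / (mu_H**2 - m_A**2)
            * math.log(mu_H**2 / m_A**2))

# Example: mu_H = m_{3/2}, m_A = 2 m_{3/2}, tan(beta) = 2 gives
# |L/m_{3/2}| of order one, within the 0.5-2 range quoted above.
L_over_m32 = L_param(1.0, 2.0, 2.0)
```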
![*(Left) The ratios of the wino and bino masses with and without the Higgsino contributions for given values of $L$. We have used a phase convention in which $m_{3/2}$ is real and positive. The red lines show the $|L|$ dependence for given phases of $L$, while the blue lines show the $\arg[L]$ dependence for given values of $|L|$. (The dashed blue lines show values of $|L|$ in between those of the two solid lines.) In the gray shaded region, $|L/m_{3/2}|\gtrsim 3$, the wino is no longer the LSP. (Right) The $L$ dependence of the gaugino masses for $m_{3/2}=M_{\rm SUSY}= 50$TeV for $L>0\,(\arg[L] = 0)$ and $L<0\,(\arg[L] = \pi)$.* []{data-label="fig:L-dependence"}](M1M2.pdf){width=".8\linewidth"}
![*(Left) The ratios of the wino and bino masses with and without the Higgsino contributions for given values of $L$. We have used a phase convention in which $m_{3/2}$ is real and positive. The red lines show the $|L|$ dependence for given phases of $L$, while the blue lines show the $\arg[L]$ dependence for given values of $|L|$. (The dashed blue lines show values of $|L|$ in between those of the two solid lines.) In the gray shaded region, $|L/m_{3/2}|\gtrsim 3$, the wino is no longer the LSP. (Right) The $L$ dependence of the gaugino masses for $m_{3/2}=M_{\rm SUSY}= 50$TeV for $L>0\,(\arg[L] = 0)$ and $L<0\,(\arg[L] = \pi)$.* []{data-label="fig:L-dependence"}](gaugino.pdf){width=".8\linewidth"}
In Fig.\[fig:L-dependence\] (left panel), we show the ratio of the wino and bino masses with and without the Higgsino contributions for given values of $L$. The figure shows that the wino mass can be about twice as heavy as the anomaly mediated contribution alone for $|L/m_{3/2}|\simeq 1$, which is expected in the pure gravity mediation model. It should be noted that the wino is no longer the LSP where the Higgsino threshold contribution dominates. In such cases, the relic density of dark matter easily exceeds the observed one due to the highly suppressed annihilation cross section of an ${O}(100)\,$GeV bino. Fortunately, however, the figure shows that the bino becomes the LSP only for $|L/m_{3/2}|>3$, which is less likely in the pure gravity mediation model. Therefore, in the pure gravity mediation model, the LSP is mostly wino-like, although the wino mass obtains a comparable contribution from the Higgsino threshold effects. [^7]
![*Contour plot of the wino mass for $L>0$ and $L<0$. Here, we have taken $M_{\rm SUSY} = m_{3/2}$ (blue lines). (The dashed lines correspond to $m_{\rm wino} = 150\,{\rm GeV},250\,{\rm GeV},\cdots$.) The blue shaded region denotes the experimental constraint $m_{\rm wino}\geq 88$GeV for degenerate neutralino-chargino masses obtained by the LEPII experiment[@hep-ex/0203020]. The orange shaded region denotes the experimental constraint on the gauginos, $m_{\rm gluino}\gtrsim 750$GeV for $m_{\rm LSP}<200$GeV, reported by the ATLAS collaboration[@ATLAS2].* []{data-label="fig:wino"}](WinoContour.pdf){width=".4\linewidth"}
In Fig.\[fig:wino\], we show the contour plot of the wino mass. In the figure, the blue shaded region shows the current experimental constraint on the wino mass, $m_{\rm wino}\geq 88$GeV for degenerate neutralino-chargino masses, obtained by the LEPII experiment[@hep-ex/0203020]. The orange shaded region shows the experimental constraint on the gauginos, $m_{\rm gluino}\gtrsim 750$GeV for $m_{\rm LSP}\lesssim 200$GeV, reported by the ATLAS collaboration[@ATLAS2]. Recalling that $L/m_{3/2}\gtrsim 2.5$ is less likely in pure gravity mediation, the figure shows that the gluino mass bound requires $m_{3/2}\gtrsim 30$TeV.[^8]
![*The contour plot of the lightest Higgs boson mass. (The dashed contours are for the intermediate values between the two solid contours.) Here, we have fixed $m_{3/2}=50$TeV and taken $\mu_H=M_{\rm SUSY}$. The gray shaded regions correspond to $m_h<115.5$GeV and $m_h >127$GeV which are excluded by the ATLAS and CMS collaborations at 95%C.L. for the central value of the top quark mass, $m_{\rm top} = 173.2\pm 0.9$GeV. The light gray shaded region denotes the Higgs mass constraints including the $1\sigma$ error of the top quark mass. The orange band shows the Higgs boson mass $124\,{\rm GeV} < m_{h}< 126\,{\rm GeV}$ hinted by the ATLAS and CMS collaborations for the central value of the top quark mass. The light orange band is the one including the $1\sigma$ error of the top quark mass.* []{data-label="fig:Higgs"}](HiggsContour.pdf){width=".4\linewidth"}
Finally, we discuss the lightest Higgs boson mass in the pure gravity mediation model, where it is expected to be heavier than in conventional MSSM models due to the heavy scalar bosons[@TU-363]. In Fig.\[fig:Higgs\], we show the Higgs boson mass obtained by solving the full one-loop renormalization-group equations of the Higgs quartic coupling and the other coupling constants given in Ref.[@hep-ph/0406088], with the boundary condition $$\begin{aligned}
\label{eq:SUSY}
\lambda = \frac{1}{4} \left(\frac{3}{5}g_1^2+ g_2^2 \right) \cos^22\beta\ ,\end{aligned}$$ at the heavy scalar scale. The threshold corrections at the heavy scalar scale are also taken into account. We also take into account the weak scale threshold corrections to these parameters, following Refs.[@arXiv:0705.1496; @arXiv:1108.6077]. It should be noted that the predicted Higgs boson mass is slightly lighter than the one in Ref.[@arXiv:1108.6077] for a given $(M_{\rm SUSY},\tan\beta)$, since the Higgsino contributions decouple at a very high scale in the pure gravity mediation model (see Ref.[@Ibe:2011aa]).
In the figure, the gray shaded regions correspond to $m_h<115.5$GeV and $m_h >127$GeV which are excluded by the ATLAS and CMS collaborations[@ATLAS; @CMS] at 95%C.L. for the central value of the top quark mass, $m_{\rm top} = 173.2\pm 0.9$GeV[@arXiv:1107.5255]. The light gray shaded region denotes the Higgs mass constraints including the $1\sigma$ error of the top quark mass. The orange band shows the Higgs boson mass $124\,{\rm GeV}<m_{h}< 126\,{\rm GeV}$ hinted by the ATLAS and CMS collaborations[@ATLAS; @CMS] for the central value of the top quark mass. The light orange band is the one including the $1\sigma$ error of the top quark mass.
Combined with the bound $m_{3/2}\gtrsim 30$TeV required by the experimental gluino mass limit, the hinted Higgs boson mass in Fig.\[fig:Higgs\] ($124\,{\rm GeV}<m_{h}<126\,{\rm GeV}$) constrains $\tan\beta$ to $\tan\beta \lesssim 7$. This shows that the pure gravity mediation model is quite consistent, since $\tan\beta = O(1)$ is expected in the model.
Signals of the pure gravity mediation model {#sec:signals}
===========================================
In this section, we consider several signals predicted in the pure gravity mediation model. Before doing so, we summarize the current cosmological constraints on the model. We then consider signals related to dark matter detection, discussing the current astrophysical constraints on the dark matter mass and the near-future prospects for detecting the dark matter. We finally consider collider signals, focusing in particular on pair production of gluinos at the LHC experiments with a center of mass energy of 14TeV.
Cosmological constraints
------------------------
We first consider the thermal history of the dark matter, which is the neutral wino in the pure gravity mediation model. Its $SU(2)_L$ partner, the charged wino, is slightly heavier than the neutral one by $155$–$170\,$MeV because of contributions from one-loop gauge boson diagrams[@Cheng:1998hc]. The charged wino decays into a neutral wino and a pion with a lifetime of ${\cal O}(10^{-10})$sec. It is known that the thermal relic density of the wino, obtained by considering not only the self-annihilation processes of the neutral wino but also the co-annihilation processes between the neutral and/or charged winos, can be consistent with the observed dark matter density when its mass is $m_{\rm wino} \simeq 2.7$TeV. This is because the annihilation cross section of the wino is highly boosted by a non-perturbative effect, the Sommerfeld enhancement[@Hisano:2006nn].
On the other hand, the wino dark matter is also produced non-thermally through the late time decay of the gravitino, which also contributes to the relic abundance of the dark matter. If this contribution is significant, the neutral wino mass consistent with the observed dark matter density is much lighter than $2.7\,$TeV[@Gherghetta:1999sw; @hep-ph/9906527]. In particular, in order to have a reheating temperature appropriate for successful thermal leptogenesis, there is an upper bound on the wino mass, $m_{\rm wino} \lesssim 1$TeV[@Ibe:2011aa]. This means that most of the dark matter observed today is not a thermal relic but is produced non-thermally by the late time decay of the gravitino.
Since the neutral wino has a large annihilation cross section into a $W$-boson pair, of the order of $10^{-24}$–$10^{-25}$ cm$^3$/s for $m_{\rm wino} \lesssim 1$TeV, it may affect several phenomena in the early universe [@Moroi:2011ab]. For instance, the annihilation may affect the abundances of light elements, and, in fact, observations of the elements put a bound on the mass of the neutral wino of $m_{\rm wino} \gtrsim 200$GeV in order not to destroy the elements during Big-Bang Nucleosynthesis (BBN)[@DM_BBN]. The annihilation also affects the recombination history of the universe: if the annihilation cross section is significantly large, it modifies the spectrum of the cosmic microwave background[@DM_CMB]. This leads to the constraint $m_{\rm wino} \gtrsim 200$GeV, which is comparable to that from BBN.
Dark matter detections
----------------------
Since the $\mu$-parameter is of the order of 10–100TeV in the pure gravity mediation model, the effect of the mixing between wino and Higgsino components on the lightest supersymmetric particle (dark matter) is negligibly small. The scattering cross section between the dark matter and a nucleon is then estimated to be $10^{-47}$ cm$^2$[@Hisano:2010fy], which makes discovering the dark matter in on-going direct detection experiments very challenging. This is in sharp contrast to the cases of the Split Supersymmetry model and conventional anomaly mediation models. Since the $\mu$-parameter does not have to be huge in those models, the tree-level diagram in which the Higgs boson is exchanged in the $t$-channel contributes significantly to the scattering cross section, which makes it possible to detect the dark matter in the near future[@Moroi:2011ab]. Direct detection experiments can therefore be used as a test of the pure gravity mediation model.
![*Constraints and future prospects of indirect detection experiments of dark matter. Theoretical prediction of the neutral wino dark matter is also shown.*[]{data-label="fig: DM"}](DM.pdf){width="0.5\linewidth"}
In contrast to the direct detection of dark matter, we can expect rich signals at indirect detection experiments, because the dark matter is almost purely wino in the pure gravity mediation model and its annihilation cross section is boosted by the Sommerfeld effect[@Sommerfeld]. Among several on-going experiments, the most stringent constraint on the dark matter is obtained by the Fermi-LAT experiment observing gamma-rays from Milky Way satellites[@Ackermann:2011wa]. This constraint is depicted in Fig.\[fig: DM\] as a solid (green) line; no astrophysical boost factor is assumed. The theoretical prediction for the neutral wino is also shown in the figure, obtained by calculating its annihilation cross section including the Sommerfeld effect at the one-loop level[@Hryczuk:2011vi]. Notice, however, that there may be some uncertainties in the constraint, since it is based on several assumptions such as the use of a fixed dark matter profile. Following Ref.[@Charbonnier:2011ft], in which those uncertainties (involving dark matter profiles) in the gamma-ray experiment are discussed, we also show the region (green-shaded) above the constraint in order to take the uncertainties into account. It can be seen that the neutral wino should be, at least, heavier than $300$GeV.
Another interesting indirect detection channel is the PAMELA experiment observing the cosmic-ray $\bar{p}$ (anti-proton) flux[@Adriani:2010rc]. The current constraint on the dark matter from this experiment is also shown in Fig.\[fig: DM\] as a blue-shaded region. Since the $\bar{p}$ flux depends on how $\bar{p}$ propagates in the complicated magnetic field of our galaxy and on which dark matter profile we adopt[@Evoli:2011id], the constraint has large uncertainties, as can be seen in the figure. The mass of the dark matter is, however, constrained to be $m_{\rm wino} \gtrsim 230$GeV in spite of the uncertainties. On the other hand, the observation of the cosmic-ray $\bar{p}$ flux in the near future is very promising. This is because the AMS-02 experiment, which has already started[@AMS-02], has a better sensitivity than the PAMELA experiment, and astrophysical uncertainties related to the $\bar{p}$ propagation are also expected to be reduced. The future sensitivity of this experiment to the dark matter is also depicted in the figure as a red-shaded region, assuming an appropriate propagation model[@Evoli:2011id]. It can be seen that the sensitivity is well below the prediction for the dark matter. It is also worth noting that the whole mass range of the dark matter consistent with thermal leptogenesis will be fully tested by the future observation of the cosmic-ray $\bar{p}$ flux, because the annihilation cross section of the dark matter is not suppressed, thanks to the Sommerfeld effect. It may even be possible to determine $m_{\rm wino}$ by observing the $\bar{p}$ spectrum.
Finally, we comment on other indirect detection channels. It is well known that there is an anomaly in the cosmic-ray $e^+$ flux[@Adriani:2008zr]. Since it is difficult to account for the anomaly by neutral wino dark matter with a mass of 300–1000 GeV [@PAMELA; @WINO], it should be explained by some astrophysical sources. The observation of the $e^+$ flux is therefore less suitable than that of the $\bar{p}$ flux for testing the pure gravity mediation model. The observation of the $\nu$ flux from the galactic center may give a good opportunity to test the neutral wino dark matter[@Moroi:2011ab], though the signal strength depends on the dark matter profile at the center. On the other hand, the observation of the $\nu$ flux from the sun seems to be challenging, because the flux is proportional to the spin-dependent scattering cross section of the dark matter, which is estimated to be as small as $10^{-48}$cm$^2$ in the pure gravity mediation model[@Hisano:2010fy].
Collider signals
----------------
![*(Left panel) Cross section of gluino pair production at the LHC experiment with a center of mass energy of 14TeV. (Right panel) Gluino and wino masses within the parameter region of $m_{\rm gluino} \lesssim 3$TeV. Shaded regions are disfavored because of the gluino LSP ($m_{\rm gluino} > m_{\rm wino}$), too large $L$ ($L/m_{3/2} > 2$), or dark matter constraints ($m_{\rm wino} < 300$GeV). The current bound of the LHC experiment (7TeV) is also shown.*[]{data-label="fig: LHC"}](CS.pdf "fig:"){width="0.45\linewidth"} ![*(Left panel) Cross section of gluino pair production at the LHC experiment with a center of mass energy of 14TeV. (Right panel) Gluino and wino masses within the parameter region of $m_{\rm gluino} \lesssim 3$TeV. Shaded regions are disfavored because of the gluino LSP ($m_{\rm gluino} > m_{\rm wino}$), too large $L$ ($L/m_{3/2} > 2$), or dark matter constraints ($m_{\rm wino} < 300$GeV). The current bound of the LHC experiment (7TeV) is also shown.*[]{data-label="fig: LHC"}](gluino_wino.pdf "fig:"){width="0.435\linewidth"}
In the pure gravity mediation model, the ratio between the gluino and wino masses can be smaller than in the conventional anomaly mediation model. The gluino may therefore be produced at the LHC experiment even though the wino mass is constrained to be $m_{\rm wino} \gtrsim 300$GeV. On the other hand, all the sfermions as well as the Higgsinos have masses of the order of $10$–$100$TeV in the model and are never produced at the LHC. As a result, the dominant collider signal of the model is the pair production of gluinos, whose production cross section is shown in Fig.\[fig: LHC\] (left panel). Once the gluino is produced, it eventually decays into a neutral wino by emitting Standard Model particles. It is known that, when the sfermions are much heavier than the gluino, the radiative decay of the gluino into a gluon and a neutralino ($\tilde{g} \to g\tilde{\chi}^0$) can have a sizable branching fraction[@Radiative; @gluino; @decay]. In the pure gravity mediation model, however, the $\mu$-parameter is also as large as the sfermion masses, and the branching fraction is strongly suppressed. The non-observation of the radiative decay therefore enables us to distinguish the pure gravity mediation model from other models predicting heavy sfermions without a large $\mu$-parameter.
Gluinos in the pure gravity mediation model therefore decay into two quarks and a neutralino/chargino ($\tilde{g} \to q\bar{q}^\prime + \tilde{\chi}^0/\tilde{\chi}^\pm_i$). The chargino, which is nothing but the charged wino, decays into a neutral wino (dark matter) by emitting a soft pion. On the other hand, when the neutralino is the bino, it decays through several modes: a charged wino $+$ a $W$-boson ($\tilde{B} \to \tilde{W}^\pm W^\mp$), a neutral wino $+$ a Higgs boson ($\tilde{B} \to \tilde{W}^0 h$), or a charged/neutral wino $+$ two leptons ($\tilde{B} \rightarrow \tilde{W} l \bar{l}^\prime$), whose branching fractions depend strongly on the model parameters. In Fig.\[fig: LHC\] (right panel), we show the range of gluino and wino masses within the parameter region of our interest for the LHC experiment. It is also worth noting that, as shown in the previous section, the bino mass is roughly given by $m_{\rm bino} \simeq m_{\rm gluino}/3$ in most of the parameter region. Thus, the mass degeneracy between the gluino and the neutralino/chargino is not severe, which is very attractive from the viewpoint of discovering the signal.
The most efficient mode to discover the signal of the pure gravity mediation model is the pair production of gluinos followed by the decay $\tilde{g} \to q\bar{q}^\prime \tilde{\chi}$ with $q (q^\prime)$ being a quark other than the top quark and $\tilde{\chi}$ being a neutralino/chargino, so that the signal event is composed of four jets + missing energy. The branching fraction of the decay is about 73.4% when all squark masses are degenerate. In Fig.\[fig: LHC\] (right panel), the current bound on the $(m_{\rm gluino}, m_{\rm wino})$-plane, obtained by the LHC experiment with a center of mass energy of $7$TeV and 1.04fb$^{-1}$ of data, is depicted assuming that $m_{\rm wino} \sim m_{\rm bino}$ and a 100% branching fraction for the decay $\tilde{g} \to q\bar{q}^\prime \tilde{\chi}$[@ATLAS-CONF]. It can be seen that the region constrained by the current LHC data has already been excluded by dark matter experiments. It has also been shown that a gluino mass up to 1.2TeV can be discovered at the LHC experiment with a center of mass energy of 14TeV when 10fb$^{-1}$ of data is accumulated[@Asai:2007sw].
Once the signal of the pure gravity mediation model is discovered, the next important task will be the mass determination of the gauginos. When the gluino mass is about 1TeV and 100fb$^{-1}$ of data is accumulated at the LHC experiment with a center of mass energy of 14TeV, the mass difference between the gluino and the wino can be determined with an accuracy of 5%, which is obtained by observing the endpoint of the two-jet invariant mass distribution in “four jets + missing energy” events[@Asai:2007sw]. The mass difference will be determined more accurately with the use of a novel method recently proposed in Ref.[@ISR], where the endpoint of the so-called $M_{T2}$ distribution[@Barr:2003rg] is shown to be stable against the contamination of initial state radiation. On the other hand, the gluino mass may be determined by observing the cross section of the gluino pair production if the acceptance of the LHC experiment for this mode is well understood. The wino mass is expected to be determined by observing the $M_{T2}$ endpoint, because the endpoint has a kink structure at the wino mass as a function of the test mass defining $M_{T2}$[@ISR]. It has also been shown that the wino mass can be determined by using the charged track of $\tilde{W}^{\pm}$, because its decay length is estimated to be ${\cal O}(10)$cm[@Asai:2008sk]. It may even be possible to measure the lifetime of $\tilde{W}^{\pm}$ using this method. The mass difference between the bino and the wino can be determined only when the branching fraction ${\rm Br}(\tilde{g} \to q\bar{q}\tilde{B}) \times {\rm Br}(\tilde{B} \rightarrow l\bar{l}\tilde{W}^0)$ is large enough[@Asai:2007sw].
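The quoted ${\cal O}(10)$cm decay length is consistent with the ${\cal O}(10^{-10})$sec charged-wino lifetime mentioned earlier; the following back-of-the-envelope sketch makes the connection (the boost factor of a few is an illustrative assumption):

```python
# Proper decay length of the charged wino from its O(1e-10) s lifetime.
c_cm_per_s = 2.998e10          # speed of light in cm/s
tau_s = 1.0e-10                # charged-wino lifetime (order of magnitude)
ctau_cm = c_cm_per_s * tau_s   # proper decay length: about 3 cm

# Winos from gluino decays carry a Lorentz boost gamma*beta of a few,
# stretching the lab-frame decay length to O(10) cm.
boost = 3.0                    # illustrative gamma*beta
lab_decay_length_cm = boost * ctau_cm
```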
Finally, we comment on collider signals of the pure gravity mediation model when the gluino is heavier than a few TeV and is not accessible at the current and near-future LHC experiments. In such cases, we have to rely on the direct production (Drell-Yan process) of charged winos associated with a quark (gluon); the associated quark (gluon) is necessary as a trigger for recording data[@Ibe:2006de; @Moroi:2011ab]. The cross section decreases rapidly with increasing wino mass. Since the mass of the wino is at most 1TeV for successful leptogenesis, the high luminosity LHC experiment (HL-LHC)[@HL-LHC] may help us to discover the signal. On the other hand, if multi-TeV linear colliders such as the ILC [@ILC] or CLIC [@CLIC] become available, we can investigate the properties of the neutral and charged winos in detail. Since the analysis strategy for the mode $e^+e^- \to \tilde{W}^+\tilde{W}^- \to \tilde{W}^0\tilde{W}^0\pi^+\pi^-$ is very similar to that for the golden mode of dark matter detection, $e^+e^- \to \tilde{\chi}^+\tilde{\chi}^- \to \tilde{\chi}^0\tilde{\chi}^0W^+W^-$ with $\chi^0$ and $\chi^\pm$ being the dark matter and its charged partner[@Asano:2011aj], we can easily find the signal of the pure gravity mediation model if the $\pi$-mesons are efficiently detected.
Conclusion {#sec:conclusion}
==========
The pure gravity mediation model is the bare bones model of the supersymmetric Standard Model. Despite its simplicity, the model is quite successful for $m_{3/2} = O(10-10^2)$ TeV: the model has a good dark matter candidate, the gauge coupling constants unify at the GUT scale very precisely, and the Higgs boson mass around $125$ GeV can be easily accounted for. In this sense, the model is superior even to the Standard Model. The consistency with thermal leptogenesis is also a significant support of the model.
In this paper, we discussed details of the gaugino mass spectrum in the pure gravity mediation model. We showed that the wino mass obtains comparable contributions from both the anomaly mediation and the Higgsino threshold effects. As a result, the ratio between the wino LSP and gluino masses can be as large as about one third, which enhances the detectability of the model at the LHC experiments. In fact, we showed that the gluino can be within the reach of the LHC experiments even for a wino mass which satisfies the cosmological and astrophysical constraints, $m_{\rm wino}\gtrsim 300$ GeV. This is in sharp contrast to the case of the anomaly-mediated gaugino spectrum, where the gluino mass is about eight to nine times larger than the wino mass. Utilizing this property, we discussed strategies for the discovery and the measurement of the model at the LHC experiments via gluino production.
In this paper, we also discussed the prospects of wino dark matter detection via cosmic ray observations. We found that the wino dark matter scenario which is consistent with thermal leptogenesis can be fully surveyed by observing the cosmic ray anti-proton flux at the AMS-02 experiment. Therefore, the most motivated parameter region of the pure gravity mediation model, the one consistent with thermal leptogenesis, can be tested over the next ten years or so.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work is supported by Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports, and Culture (MEXT), Japan, No. 22244021 (S.M. and T.T.Y.) and No. 23740169 (S.M.), and also by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan.
[99]{}
M. Ibe and T. T. Yanagida, arXiv:1112.2462 \[hep-ph\].
For a review, H. P. Nilles, Phys. Rept. [**110**]{} (1984) 1. K. Inoue, M. Kawasaki, M. Yamaguchi and T. Yanagida, Phys. Rev. D [**45**]{}, 328 (1992).
G. F. Giudice, M. A. Luty, H. Murayama and R. Rattazzi, JHEP [**9812**]{}, 027 (1998). L. Randall and R. Sundrum, Nucl. Phys. B [**557**]{}, 79 (1999). M. Dine and D. MacIntire, Phys. Rev. D [**46**]{}, 2594 (1992) \[hep-ph/9205227\].
H. Pagels and J. R. Primack, Phys. Rev. Lett. [**48**]{}, 223 (1982); S. Weinberg, Phys. Rev. Lett. [**48**]{}, 1303 (1982); M. Y. .Khlopov and A. D. Linde, Phys. Lett. B [**138**]{}, 265 (1984). M. Kawasaki, K. Kohri and T. Moroi, Phys. Rev. D [**71**]{} (2005) 083502 \[arXiv:astro-ph/0408426\]; K. Jedamzik, Phys. Rev. D [**74**]{}, 103509 (2006) \[arXiv:hep-ph/0604251\]; M. Kawasaki, K. Kohri, T. Moroi and A. Yotsuyanagi, Phys. Rev. D [**78**]{}, 065011 (2008) \[arXiv:0804.3745 \[hep-ph\]\], and references therein.
M. Fukugita and T. Yanagida, Phys. Lett. [**B174**]{} (1986) 45; For reviews, W. Buchmuller, P. Di Bari and M. Plumacher, Annals Phys. [**315**]{}, 305 (2005) \[hep-ph/0401240\]; W. Buchmuller, R. D. Peccei and T. Yanagida, Ann. Rev. Nucl. Part. Sci. [**55**]{}, 311 (2005) \[arXiv:hep-ph/0502169\]; S. Davidson, E. Nardi and Y. Nir, Phys. Rept. [**466**]{}, 105 (2008) \[arXiv:0802.2962 \[hep-ph\]\].
T. Gherghetta, G. F. Giudice and J. D. Wells, Nucl. Phys. B [**559**]{}, 27 (1999) \[arXiv:hep-ph/9904378\]. T. Moroi and L. Randall, Nucl. Phys. B [**570**]{}, 455 (2000) \[hep-ph/9906527\]. M. Ibe, R. Kitano, H. Murayama and T. Yanagida, Phys. Rev. D [**70**]{}, 075012 (2004) \[arXiv:hep-ph/0403198\]; M. Ibe, R. Kitano and H. Murayama, Phys. Rev. D [**71**]{}, 075003 (2005) \[arXiv:hep-ph/0412200\].
G. D. Coughlan, W. Fischler, E. W. Kolb, S. Raby and G. G. Ross, Phys. Lett. B [**131**]{}, 59 (1983). M. Ibe, Y. Shinbara and T. T. Yanagida, Phys. Lett. B [**639**]{}, 534 (2006) \[hep-ph/0605252\]. K. Abe, T. Abe, H. Aihara, Y. Fukuda, Y. Hayato, K. Huang, A. K. Ichikawa and M. Ikeda [*et al.*]{}, arXiv:1109.3262 \[hep-ex\]. ATLAS report, ATLAS-CONF-2011-163. CMS Collaboration, arXiv:1202.1487 \[hep-ex\]. M. Ibe, T. Moroi and T. T. Yanagida, Phys. Lett. B [**644**]{}, 355 (2007) \[arXiv:hep-ph/0610277\]. G. F. Giudice and A. Masiero, Phys. Lett. B [**206**]{}, 480 (1988). N. Arkani-Hamed and S. Dimopoulos, JHEP [**0506**]{}, 073 (2005) \[hep-th/0405159\]. G. F. Giudice and A. Romanino, Nucl. Phys. B [**699**]{}, 65 (2004) \[Erratum-ibid. B [**706**]{}, 65 (2005)\] \[hep-ph/0406088\]. N. Arkani-Hamed, S. Dimopoulos, G. F. Giudice and A. Romanino, Nucl. Phys. B [**709**]{}, 3 (2005) \[hep-ph/0409232\]. K. -I. Izawa, T. Kugo and T. T. Yanagida, Prog. Theor. Phys. [**125**]{}, 261 (2011) \[arXiv:1008.4641 \[hep-ph\]\].
J. D. Wells, Phys. Rev. D [**71**]{}, 015013 (2005) \[hep-ph/0411041\]. L. J. Hall and Y. Nomura, arXiv:1111.4519 \[hep-ph\]. See e.g. A. C. Vutha [*et al.*]{}, arXiv:0908.2412 \[physics\]. A. Heister [*et al.*]{} \[ALEPH Collaboration\], Phys. Lett. B [**533**]{}, 223 (2002) \[hep-ex/0203020\]. ATLAS report, ATLAS-CONF-2011-155.
Y. Okada, M. Yamaguchi and T. Yanagida, Phys. Lett. B [**262**]{}, 54 (1991).
N. Bernal, A. Djouadi and P. Slavich, JHEP [**0707**]{}, 016 (2007) \[arXiv:0705.1496 \[hep-ph\]\]. G. F. Giudice and A. Strumia, Nucl. Phys. B [**858**]{}, 63 (2012) \[arXiv:1108.6077 \[hep-ph\]\]. M. Lancaster \[Tevatron Electroweak Working Group and for the CDF and D0 Collaborations\], arXiv:1107.5255 \[hep-ex\].
H. C. Cheng, B. A. Dobrescu and K. T. Matchev, Nucl. Phys. B [**543**]{}, 47 (1999) \[arXiv:hep-ph/9811316\].
J. Hisano, S. Matsumoto, M. Nagai, O. Saito and M. Senami, Phys. Lett. B [**646**]{}, 34 (2007) \[arXiv:hep-ph/0610249\].
T. Moroi and K. Nakayama, arXiv:1112.3123 \[hep-ph\].
K. Jedamzik, Phys. Rev. D [**70**]{}, 083510 (2004) \[arXiv:astro-ph/0405583\]; J. Hisano, M. Kawasaki, K. Kohri and K. Nakayama, Phys. Rev. D [**79**]{}, 063514 (2009) \[Erratum-ibid. D [**80**]{}, 029907 (2009)\] \[arXiv:0810.1892 \[hep-ph\]\]; J. Hisano, M. Kawasaki, K. Kohri, T. Moroi and K. Nakayama, Phys. Rev. D [**79**]{}, 083522 (2009) \[arXiv:0901.3582 \[hep-ph\]\].
B. Ezhuthachan, S. Mukhi and C. Papageorgakis, JHEP [**0904**]{}, 101 (2009) \[arXiv:0903.0003 \[hep-th\]\]; T. R. Slatyer, N. Padmanabhan and D. P. Finkbeiner, Phys. Rev. D [**80**]{}, 043526 (2009) \[arXiv:0906.1197 \[astro-ph.CO\]\]; T. Kanzaki, M. Kawasaki and K. Nakayama, Prog. Theor. Phys. [**123**]{}, 853 (2010) \[arXiv:0907.3985 \[astro-ph.CO\]\]; J. Hisano, M. Kawasaki, K. Kohri, T. Moroi, K. Nakayama and T. Sekiguchi, Phys. Rev. D [**83**]{}, 123511 (2011) \[arXiv:1102.4658 \[hep-ph\]\]; S. Galli, F. Iocco, G. Bertone and A. Melchiorri, Phys. Rev. D [**84**]{}, 027302 (2011) \[arXiv:1106.1528 \[astro-ph.CO\]\].
J. Hisano, K. Ishiwata and N. Nagata, Phys. Lett. B [**690**]{}, 311 (2010) \[arXiv:1004.4090 \[hep-ph\]\].
J. Hisano, S. Matsumoto and M. M. Nojiri, Phys. Rev. Lett. [**92**]{}, 031303 (2004) \[arXiv:hep-ph/0307216\]; J. Hisano, S. Matsumoto, M. M. Nojiri and O. Saito, Phys. Rev. D [**71**]{}, 063528 (2005) \[arXiv:hep-ph/0412403\]; J. Hisano, S. Matsumoto, O. Saito and M. Senami, Phys. Rev. D [**73**]{}, 055004 (2006) \[arXiv:hep-ph/0511118\].
M. Ackermann [*et al.*]{} \[Fermi-LAT collaboration\], Phys. Rev. Lett. [**107**]{}, 241302 (2011) \[arXiv:1108.3546 \[astro-ph.HE\]\].
A. Hryczuk and R. Iengo, arXiv:1111.2916 \[hep-ph\].
A. Charbonnier [*et al.*]{}, Mon. Not. Roy. Astron. Soc. [**418**]{}, 1526 (2011) \[arXiv:1104.0412 \[astro-ph.HE\]\].
O. Adriani [*et al.*]{} \[PAMELA Collaboration\], Phys. Rev. Lett. [**105**]{}, 121101 (2010) \[arXiv:1007.0821 \[astro-ph.HE\]\].
C. Evoli, I. Cholis, D. Grasso, L. Maccione and P. Ullio, arXiv:1108.0664 \[astro-ph.HE\].
`http://www.ams02.org/`.
O. Adriani [*et al.*]{} \[PAMELA Collaboration\], Nature [**458**]{}, 607 (2009) \[arXiv:0810.4995 \[astro-ph\]\].
P. Grajek, G. Kane, D. Phalen, A. Pierce and S. Watson, Phys. Rev. D [**79**]{}, 043506 (2009) \[arXiv:0812.4555 \[hep-ph\]\]; G. Kane, R. Lu and S. Watson, Phys. Lett. B [**681**]{}, 151 (2009) \[arXiv:0906.4765 \[astro-ph.HE\]\].
M. Toharia and J. D. Wells, JHEP [**0602**]{}, 015 (2006) \[arXiv:hep-ph/0503175\]; P. Gambino, G. F. Giudice and P. Slavich, Nucl. Phys. B [**726**]{}, 35 (2005) \[arXiv:hep-ph/0506214\].
ATLAS Collaboration, ATLAS-CONF-2011-155.
S. Asai, T. Moroi, K. Nishihara and T. T. Yanagida, Phys. Lett. B [**653**]{}, 81 (2007) \[arXiv:0705.3086 \[hep-ph\]\].
J. Alwall, K. Hiramatsu, M. M. Nojiri and Y. Shimizu, Phys. Rev. Lett. [**103**]{}, 151802 (2009) \[arXiv:0905.1201 \[hep-ph\]\]; M. M. Nojiri and K. Sakurai, Phys. Rev. D [**82**]{}, 115026 (2010) \[arXiv:1008.1813 \[hep-ph\]\].
A. Barr, C. Lester and P. Stephens, J. Phys. G [**29**]{}, 2343 (2003) \[arXiv:hep-ph/0304226\].
S. Asai, T. Moroi and T. T. Yanagida, Phys. Lett. B [**664**]{}, 185 (2008) \[arXiv:0802.3725 \[hep-ph\]\].
`http://hilumilhc.web.cern.ch/HiLumiLHC/`.
`http://www.linearcollider.org/`.
`http://clic-study.org/`.
M. Asano [*et al.*]{}, Phys. Rev. D [**84**]{}, 115003 (2011) \[arXiv:1106.1932 \[hep-ph\]\].
[^1]: See also Ref.[@hep-ph/0605252] for the Polonyi problem in dynamical supersymmetry breaking models.
[^2]: In fact, the gauge coupling constants unify at around $10^{16}$ GeV at the few percent level even for a rather large $\mu$-term of $10$–$100$ TeV. It should be noted that the scale of the coupling unification is lower than in the conventional SSM by about a factor of two for $m_{3/2}=10$–$100$ TeV. Thus, the model predicts a slightly shorter proton lifetime via the so-called dimension six operators, $\tau_p \lesssim 10^{35}$ yrs, which is within the reach of the Hyper-Kamiokande experiment[@Abe:2011ts].
[^3]: If the SUSY breaking sector has a singlet Polonyi field, the so-called Giudice-Masiero mechanism[@Giudice:1988yz] can also generate the $\mu$ and $B$ Higgs mixing parameters of $O(m_{3/2})$. In that case, however, the model suffers from the Polonyi problem[@Polonyi].
[^4]: See discussions on the possible cancellation of the anomaly-mediated gaugino masses[@hep-ph/0409232; @Izawa:2010ym].
[^5]: Hereafter, we use a phase convention where $B\mu_H$ is real and positive.
[^6]: More precisely, we assumed that $\log_{10}\mu_H/m_{3/2}$ and $\log_{10}B/m_{3/2}$ obey a normal distribution with mean $0$ and standard deviation $0.5\times\log_{10} 3$. For a given $\tan\beta$, the Higgs squared masses are determined by $m_{H_{u,(d)}}^2 = - |\mu_H^2| + B\mu_H \cot\beta^{(-1)} $. In the figure, we generated a fixed number of random samples for each $\tan\beta$. Afterward, we required $|m_{H_{u,d}}^2/m_{3/2}^2|< 5$ so that they are of the order of the gravitino mass. Thus, the ratios of the areas of each histogram roughly represent the relative consistency of the value of $\tan\beta$ in pure gravity mediation. The figure shows that the model with $\tan\beta = O(10)$ is less consistent, as expected.
[^7]: In general, a relative phase between $L$ and $m_{3/2}$ is a free parameter, and hence, the three gauginos have different phases. Such gaugino phases, however, do not cause serious CP-problems, since the Higgsinos as well as the sfermions are expected to be very heavy in the pure gravity mediation model. Interestingly, the relative phase of $O(1)$ may lead to the visible electron electric dipole moment of $d_e/e \sim 10^{-30}$cm[@hep-ph/0409232] for the $\mu$-term in the tens to hundreds TeV range, which can be reached in future experiments[@EDM].
[^8]: Fig.\[fig:L\] shows that $L/m_{3/2}\gtrsim 2.5$ is possible for $\tan\beta\simeq 1$. As we will see from Fig.\[fig:Higgs\], however, the lightest Higgs boson mass of our main concern ($124$ GeV $<m_h<126$ GeV) requires $m_{3/2} \gg 100$ TeV for $\tan\beta \simeq 1$. Thus, the conclusion $m_{3/2}\gtrsim 30$ TeV is not changed.
---
abstract: |
    An iterative algorithm for state determination is presented that uses as physical input the probability distributions for the eigenvalues of two or more observables in an unknown state $\Phi$. Starting from an arbitrary state $\Psi_{0}$, a succession of states $\Psi_{n}$ is obtained that converges to $\Phi$ or to a Pauli partner. This algorithm for state reconstruction is efficient and robust, as is seen in the numerical tests presented, and is a useful tool not only for state determination but also for the study of Pauli partners. Its main ingredient is the Physical Imposition Operator, which changes any state so that it has the same physical properties, with respect to an observable, as another state.\
\
Keywords: state determination, state reconstruction, Pauli partners, unbiased bases. PACS: 03.65.Wj 02.60.Gf
author:
- 'Dardo M. Goyeneche and Alberto C. de la Torre'
title: 'State determination: an iterative algorithm.'
---
INTRODUCTION
============
At an early stage in the development of quantum mechanics, W. Pauli [@paul] raised the question whether the knowledge of the probability density functions for the position and momentum of a particle was sufficient in order to determine its state. That is, can we determine a unique $\psi(x)$ if we are given $\rho(x)=|\psi(x)|^{2}$ and $\pi(p)=|\phi(p)|^{2}$, where $\phi(p)$ is the Fourier transform of $\psi(x)$? Since position and momentum are the only independent observables of the system, it was, erroneously, guessed that this Pauli problem could have an affirmative answer. This was erroneous because there may be different quantum correlations between position and momentum that are not reflected in the distributions of position and momentum individually. Indeed, many examples of Pauli partners, that is, different states $\psi_{1}\neq\psi_{2}$ with identical probability distributions $\rho$ and $\pi$, were found. A review of these issues, with references to the original papers, and the treatment of the problem of state reconstruction for finite and infinite dimension of the Hilbert space, can be found in Refs. [@wei1; @wei2]. The general problem of the determination of a quantum state from laboratory measurements turned out to be a difficult one. In this work, “laboratory measurement” means the complete measurement of an observable, that is, the determination of the probability distribution $\varrho(a_{k})$ of the eigenvalues $a_{k}$ of the operator $A$ associated with the observable. Given a state $\Phi$, the probability distribution (assuming non-degeneracy) is given by $\varrho(a_{k})=|\langle\varphi_{k},\Phi\rangle|^{2}$, where $\varphi_{k}$ are the eigenvectors of the operator. The state is not directly observable; what can be measured are the probability distributions of the eigenvalues of the observables, and we want to be able to determine the state $\Phi$ of the system using these distributions.
Besides the academic interest of quantum state reconstruction based on measurements of probability distributions, the issue has gained currency in the last decade due to the possible practical applications of quantum information theory [@qinf].
In order to state the problem clearly, let us consider a system described in an $N$ dimensional Hilbert space. The determination of the state requires the determination of $2N-2$ real numbers, and a complete measurement of an observable provides $N-1$ equations. With the measurement of two observables (like position and momentum in the Pauli problem) we have the same number of equations as unknowns. However, the available equations are not linear, and the system of equations will not have, in general, a unique solution. In many practical cases, a minimal piece of additional information (like the sign of an expectation value) is sufficient to determine the state. In this work we will not search for the minimal extra information required; instead, we will add a complete measurement of a third observable. One may think that this massive addition of information will make the system over-determined and that with three complete measurements we should always be able to find a unique state. This is wrong; there are pathological cases where the complete measurement of $N$ observables, that is $N(N-1)$ equations, is *not sufficient* for the determination of a *unique* set of $2(N-1)$ numbers! In the other extreme, if the state happens to be equal to one of the eigenvectors of the observable measured, then, of course, just one complete measurement is sufficient to fix the state. From these two cases we conclude that the choice of the observables to be measured is crucial for the determination of the state; an observable with a peaked probability distribution provides much more information than an observable with a uniform distribution.
A pair of observables may provide redundant information, and we expect that it is convenient to use observables as different as possible; this happens when their eigenvectors build two unbiased bases, as is the case, for example, with position and momentum (two bases $\{\varphi_{k}\}$ and $\{\phi_{r}\}$ are unbiased when $|\langle\varphi_{k},\phi_{r}\rangle|=1/\sqrt{N}\ \forall k,r$, that is, every element of one basis has equal “projection” on all elements of the other basis). For this reason, unbiased bases have been intensively studied in the problem of state determination and also in quantum information theory [@unbiaBas1; @unbiaBas2; @unbiaBas3]. The number of mutually unbiased bases that one can define in an $N$ dimensional Hilbert space is not known in general, although it can be proved that if $N$ is equal to a power of a prime number, then there are $N+1$ unbiased bases. *Unbiased observables*, those represented by operators whose eigenvectors build unbiased bases, provide independent information; there are, however, pathological cases where the measurement of several unbiased observables is useless for determining a unique state: assume for instance that the state belongs to a basis that is unbiased to several other mutually unbiased bases associated with the measured observables. In this case all the probability distributions are uniform and the state cannot be uniquely determined, because there are at least $N$ different states (corresponding to the $N$ elements of the basis to which the state belongs) all generating uniform distributions for the observables. If $N$ is a power of a prime number we could have up to $N$ observables with uniform distributions for $N$ different states. This is the pathological case mentioned before: if there are $N+1$ mutually unbiased bases and we have $M$ unbiased observables with uniform distributions, then we have $N(N+1-M)$ Pauli partners, that is, different states having the same distributions.
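As a concrete numerical illustration (ours, not from the original text): for any $N$, the computational basis and the discrete Fourier basis are mutually unbiased, since every overlap between their elements has modulus $1/\sqrt{N}$. A quick check with numpy:

```python
import numpy as np

N = 5
# Computational basis: columns of the identity matrix.
comp = np.eye(N, dtype=complex)
# Discrete Fourier basis: columns of the unitary DFT matrix.
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
fourier = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

# Unbiasedness: every overlap |<varphi_j, phi_k>| equals 1/sqrt(N).
overlaps = np.abs(comp.conj().T @ fourier)
assert np.allclose(overlaps, 1 / np.sqrt(N))
print("all overlaps equal 1/sqrt(N) =", 1 / np.sqrt(N))
```

For non-prime-power $N$ this pair still exists; it is the count of *mutually* unbiased bases beyond such a pair that is unknown in general.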
If we make complete measurements of two or more observables we should be able to determine the state, but it will not always be unique because there may be several different states having the same distributions for the measured observables. If we measure three observables, the mathematical problem would be to solve a set of $3N-3$ nonlinear equations to determine $2N-2$ numbers. One could blindly apply some numerical method to find the solution. Instead of this, we present in this work an iterative method that is physically appealing because it involves the imposition of physical data on Hilbert space elements that are approaching the solution. Another advantage of this algorithm is that it does not involve the solution of a system of equations, and therefore when we change the number of observables measured or the dimension of the Hilbert space, we only have to make a trivial change in the algorithm. We will test the algorithm numerically by assuming an arbitrary state and two, three or four arbitrary observables; with these we generate the data corresponding to the distributions of the observables in the chosen state, and then we run the algorithm and see how efficiently it returns the chosen state.
A METRIC FOR STATES
===================
In order to study the convergence of an iterative algorithm for the determination of a state, we will need a concept of *distance* that can tell us how close we are to the wanted solution. This criterion of closeness can be applied in the space of states or in the space of probability distributions. In the first case we want to know how close a particular state is to the state searched for, that is, we need a *metric in the space of states*. In the other case a particular state generates probability distributions for some observables and we want to know how close these distributions are to the corresponding distributions generated by the state searched for. In this second case we need a *metric in the space of distributions*. The relation between these two distances in two different spaces has been studied for several choices of distances [@plast]. However, the application in an iterative algorithm of some of these “distances” that do not satisfy the mathematical requirements of a “metric” (positivity, symmetry and the triangle inequality) is questionable. In this work we use a metric in the Hilbert space of states in order to study the convergence of the algorithm, but we also compare the final probability distributions with the corresponding distributions used as physical input for the algorithm because, as was explained before, there are cases of different states generating the same probability distributions. The usual Hilbert space metric induced by the norm, itself induced by the internal product, $$\label{metrind}
\delta(\Psi,\Phi)=\|\Psi-\Phi\|=\sqrt{\langle\Psi-\Phi,\Psi-\Phi\rangle}$$ is not an appropriate metric for states because states are not represented by Hilbert space elements but by *rays*, which are sets of Hilbert space elements with an arbitrary phase. That is, a state is given by $$\label{sta}
R_{\Psi}=\{e^{i\alpha}\Psi\ |\ \forall \alpha\ ,\ \|\Psi\|=1\}$$ The Hilbert space element $\Psi$ is a *representative* of the ray, and it is common practice in quantum mechanics to say that the state is given by $\Psi$. However, when we deal with distances between states we cannot take the induced metric mentioned before, because this metric does not vanish for two Hilbert space elements, $e^{i\alpha}\phi$ and $e^{i\beta}\phi$, belonging to the *same* ray, that is, belonging to the same state. A correct concept of distance between states is given by the distance between sets $$\label{disray}
d(R_{\Psi},R_{\Phi})=\min_{\alpha,\beta}\delta(e^{i\alpha}\Psi,e^{i\beta}\Phi)\
.$$ The minimization can be performed in general and we obtain $$\label{disray1}
d(R_{\Psi},R_{\Phi})=\sqrt{2}\sqrt{1-|\langle\Psi,\Phi\rangle|}\ .$$ We compare this result with $\delta(\Psi,\Phi)=\sqrt{2}\sqrt{1-\Re\langle\Psi,\Phi\rangle}$ and we conclude that $d(R_{\Psi},R_{\Phi})\leq\delta(\Psi,\Phi)$ and therefore every sequence converging in the induced metric is also convergent in the ray metric used here.
In order to have a rigorous concept of convergence we must check that the distance between states given above in Eqs.(\[disray\],\[disray1\]) is really a metric (in general, for arbitrary sets, the distance between sets is not always a metric since one can easily find examples that violate the triangle inequality). The requirements of symmetry and positivity are trivially satisfied, but to prove that this distance satisfies the triangle inequality is not trivial. However, we can be sure that the distance between states is a metric because, in this particular case where the sets are rays, the distance between rays has the same value as the Hausdorff distance, and one can prove that the Hausdorff distance is a metric [@haus]. The Hausdorff distance between two sets $X$ and $Y$ is defined by $$\label{hausdm}
d^{H}(X,Y)=\max\left\{\sup_{x\in X}d(x,Y)\ ,\ \sup_{y\in
Y}d(y,X)\right\}\ .$$ As a final comment in this section, notice that the square root in Eq.(\[disray1\]) is a nuisance, but it cannot be avoided because expressions like $1-|\langle\Psi,\Phi\rangle|$ or $1-|\langle\Psi,\Phi\rangle|^{2}$ are *not* metrics. In order to simplify the notation, in what follows we will denote the distance between rays $d(R_{\Psi},R_{\Phi})$ simply by $d(\Psi,\Phi)$.
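The ray metric of Eq.(\[disray1\]) is straightforward to evaluate numerically. The sketch below (our own illustration; `ray_distance` and `random_state` are hypothetical helper names) checks its two key properties: invariance under global phases and the bound $d\leq\delta$ with respect to the induced metric.

```python
import numpy as np

def ray_distance(psi, phi):
    """Distance between the rays of the unit vectors psi and phi:
    d = sqrt(2) * sqrt(1 - |<psi, phi>|)."""
    overlap = min(1.0, abs(np.vdot(psi, phi)))  # clip rounding noise above 1
    return np.sqrt(2.0) * np.sqrt(1.0 - overlap)

rng = np.random.default_rng(0)

def random_state(n):
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

psi, phi = random_state(3), random_state(3)

# Members of the same ray are at zero distance...
assert ray_distance(psi, np.exp(0.7j) * psi) < 1e-7
# ...and the ray metric is bounded by the induced metric ||psi - phi||.
assert ray_distance(psi, phi) <= np.linalg.norm(psi - phi) + 1e-12
```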
THE PHYSICAL IMPOSITION OPERATOR
================================
A state, or a Hilbert space element, contains encoded information about all the observables of the system. Given a state $\Phi$, the probability distribution for an observable $A$ is given by $\varrho(a_{k})=|\langle\varphi_{k},\Phi\rangle|^{2}$, where $\varphi_{k}$ are the eigenvectors of the operator $A$ associated with the observable, corresponding to the eigenvalue $a_{k}$. Given any state $\Psi$, we can impose on this state the same distribution that the observable $A$ has in the state $\Phi$ by means of an operator, the *Physical Imposition Operator* $T_{A\Phi}$, that involves the expansion of $\Psi$ in the basis $\{\varphi_{k}\}$ of $A$ and a change in the moduli of the expansion coefficients. That is $$\label{TAfi}
T_{A\Phi}\Psi = \sum_{k}|\langle\varphi_{k},\Phi\rangle|
\frac{\langle\varphi_{k},\Psi\rangle}{|\langle\varphi_{k},\Psi\rangle|}
\ \varphi_{k}\ .$$ If $\langle\varphi_{k},\Psi\rangle=0$ we assume zero phase, that is $\langle\varphi_{k},\Psi\rangle/|\langle\varphi_{k},\Psi\rangle|
= 1$. The moduli of the expansion coefficients are changed in order to impose the distribution of the observable $A$ in the state $\Phi$, but the phases are retained, and therefore some information of the original state $\Psi$ is kept in the phases. Although the numerical treatment of this operator is straightforward, its mathematical features are not simple. The operator is idempotent, $T^{2}=T$, it has no inverse, and it is not linear; rather, $T(c\Psi)=(c/|c|)T(\Psi)$. Furthermore, the operator is bounded because $\|T\Psi\|=\|\Phi\|=1$. The fixed points of this nonlinear map form the set of states that have the same distribution for the observable $A$ as the state $\Phi$.
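In a finite dimensional space, Eq.(\[TAfi\]) can be implemented in a few lines. The numpy sketch below (our own; the eigenbasis of $A$ is stored as the columns of a unitary matrix, and `impose_distribution` is a hypothetical helper name) verifies the idempotence, the normalization, and the imposed distribution on random inputs.

```python
import numpy as np

def impose_distribution(basis, phi, psi):
    """T_{A,phi} psi of Eq. (TAfi): expand psi in the eigenbasis of A
    (columns of `basis`), keep the phases of psi's coefficients, and
    impose the moduli of phi's coefficients."""
    c_phi = basis.conj().T @ phi          # <varphi_k, Phi>
    c_psi = basis.conj().T @ psi          # <varphi_k, Psi>
    mod = np.abs(c_psi)
    safe = np.where(mod == 0, 1.0, mod)   # zero coefficients get phase 1
    phases = np.where(mod == 0, 1.0, c_psi / safe)
    return basis @ (np.abs(c_phi) * phases)

rng = np.random.default_rng(1)
n = 3

def random_state(n):
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

# A random observable (its eigenbasis) and two random states.
basis, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
phi, psi = random_state(n), random_state(n)
t_psi = impose_distribution(basis, phi, psi)

# T psi carries the distribution of A in phi, T is idempotent, ||T psi|| = 1.
assert np.allclose(np.abs(basis.conj().T @ t_psi)**2,
                   np.abs(basis.conj().T @ phi)**2)
assert np.allclose(impose_distribution(basis, phi, t_psi), t_psi)
assert np.isclose(np.linalg.norm(t_psi), 1.0)
```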
We will use this operator in order to develop an iterative algorithm for the determination of a state $\Phi$ using as physical input the distributions of several observables in this state. It is therefore interesting to study whether this operator, applied to an arbitrary Hilbert space element $\Psi$, brings us closer to the state $\Phi$ or not. For this we can compare the distance $d(\Psi,\Phi)$ with the distance $d(T_{A\Phi}\Psi,\Phi)$ for some given observable $A$ and some state $\Phi$. Let us then define an observable $A$ by choosing its eigenvectors $\{\varphi_{k}\}$ (a basis) in a three dimensional Hilbert space, $N=3$, and in this space let us take an arbitrary state $\Phi$. Now we consider a large number (8000) of randomly chosen states $\Psi$ and draw a scatter plot of the distances of these states to $\Phi$ before and after applying the imposition operator $T_{A\Phi}$. In Figure 1 we see that there are more points below the diagonal, showing cases where the imposition operator brings us closer to the state, but there are also many cases where the operator takes us farther away from the searched state. We will later see that this has the consequence that the iterative algorithm will not converge for every starting point.
The imposition operator will shift the state $\Psi$ some distance $d(T_{A\Phi}\Psi,\Psi)$ that is smaller than the total distance to the state $d(\Psi,\Phi)$. That is, there is no “overshoot” that could undermine the convergence of the iterative algorithm. In order to prove this, consider the internal product $$\begin{aligned}
\nonumber
\left\langle T_{A\Phi}\Psi,\Psi\right\rangle&=& \left\langle
\sum_{k}|\langle\varphi_{k},\Phi\rangle|
\frac{\langle\varphi_{k},\Psi\rangle}{|\langle\varphi_{k},\Psi\rangle|}
\ \varphi_{k}\ ,\ \sum_{r}\langle\varphi_{r},\Psi\rangle
\ \varphi_{r}\right\rangle\\ \nonumber
&=&\sum_{k}|\langle\varphi_{k},\Phi\rangle|\
|\langle\varphi_{k},\Psi\rangle|\ =
\sum_{k}|\langle\Phi,\varphi_{k}\rangle\langle\varphi_{k},\Psi\rangle|\ \\
&\geq&|\sum_{k}\langle\Phi,\varphi_{k}\rangle\langle\varphi_{k},\Psi\rangle|\
= |\langle\Phi,\Psi\rangle|\ .\end{aligned}$$ Now, using this inequality in the definition of distance in Eq.(\[disray1\]) we get$$\label{dist1}
d(T_{A\Phi}\Psi,\Psi)\leq d(\Psi,\Phi)\ .$$ We can notice in Figure 1 that there is a bound for the distance $d(T_{A\Phi}\Psi,\Phi)$ at some value smaller than the absolute bound for the distance $\sqrt{2}$. We will see that this bound appears when the state $\Phi$ is chosen close to one of the eigenvectors of $A$. From the definition of the distance and of the imposition operator it follows easily that the distance of $T_{A\Phi}\Psi$ to any element of $\{\varphi_{k}\}$ is the same as the distance of $\Phi$ to the same element. That is, $$\label{dist2}
d(T_{A\Phi}\Psi,\varphi_{k}) =d(\Phi,\varphi_{k})\ \forall k\
,$$ so that $T_{A\Phi}\Psi$ is something like a “mirror image” of $\Phi$ reflected on $\{\varphi_{k}\}$. We can now use this in order to derive the bound mentioned. Consider the triangle inequality $d(T_{A\Phi}\Psi,\Phi)\leq
d(T_{A\Phi}\Psi,\varphi_{k})+d(\Phi,\varphi_{k})$. Using Eq.(\[dist2\]), we get $d(T_{A\Phi}\Psi,\Phi)\leq 2
d(\Phi,\varphi_{k})$. Now we specialize this inequality for the value of $k$ that minimizes the right hand side, that is, the value of $k$ that maximizes $|\langle\varphi_{k},\Phi\rangle|$ or equivalently $\sqrt{\rho(a_{k})}$. Then we have $$\label{dist3}
d(T_{A\Phi}\Psi,\Phi)\leq 2 \min_{k}d(\Phi,\varphi_{k})
=2\sqrt{2}\sqrt{1-\max_{k}\sqrt{\rho(a_{k})}}\ .$$
If the state $\Phi$ is close enough to one of the eigenvectors of $A$, the corresponding maximum value of the distribution can be larger than $9/16$ and the bound derived is smaller than the absolute bound $\sqrt{2}$. With increasing dimension $N$ of the Hilbert space, the probability that a randomly chosen state is close to one of the basis elements decreases.
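Both inequalities above, the no-overshoot bound of Eq.(\[dist1\]) and the eigenvector bound of Eq.(\[dist3\]), can be verified numerically over many random trials. A self-contained sketch (ours, re-implementing Eqs.(\[disray1\]) and (\[TAfi\]); the helper names are hypothetical):

```python
import numpy as np

def ray_distance(psi, phi):
    # Eq. (disray1), clipping rounding noise above 1.
    overlap = min(1.0, abs(np.vdot(psi, phi)))
    return np.sqrt(2.0) * np.sqrt(1.0 - overlap)

def impose_distribution(basis, phi, psi):
    # Eq. (TAfi): phi's moduli, psi's phases, in the eigenbasis of A.
    c_phi = basis.conj().T @ phi
    c_psi = basis.conj().T @ psi
    mod = np.abs(c_psi)
    safe = np.where(mod == 0, 1.0, mod)
    return basis @ (np.abs(c_phi) * np.where(mod == 0, 1.0, c_psi / safe))

rng = np.random.default_rng(2)
n = 3

def random_state(n):
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

basis, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
phi = random_state(n)

# Bound of Eq. (dist3): 2*sqrt(2)*sqrt(1 - max_k sqrt(rho(a_k))).
rho = np.abs(basis.conj().T @ phi)**2
bound = 2.0 * np.sqrt(2.0) * np.sqrt(1.0 - np.sqrt(rho.max()))

for _ in range(1000):
    psi = random_state(n)
    t_psi = impose_distribution(basis, phi, psi)
    assert ray_distance(t_psi, psi) <= ray_distance(psi, phi) + 1e-9  # Eq. (dist1)
    assert ray_distance(t_psi, phi) <= bound + 1e-9                   # Eq. (dist3)
```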
The physical imposition operator modifies the moduli of the expansion coefficients but leaves the phases unchanged. The reason for choosing this definition is that the moduli of the coefficients are measured in an experimental determination of the probability distribution of the eigenvalues of an observable and therefore this operator provides a way to impose physical properties to a state. It is unfortunate that the phases of the expansion coefficients are not directly accessible in an experiment because we could use the knowledge of the phases in a much more efficient algorithm. In a sense that will become clear later, the phases have more information about the state than the moduli. In order to clarify this let us define a *Phase Imposition Operator* $P_{A\Phi}$ that leaves the moduli of the expansion coefficients unchanged but imposes the phases of the state $\Phi$. That is $$\label{RAfi}
P_{A\Phi}\Psi = \sum_{k}|\langle\varphi_{k},\Psi\rangle|
\frac{\langle\varphi_{k},\Phi\rangle}{|\langle\varphi_{k},\Phi\rangle|}
\ \varphi_{k}\ .$$ As was done before, we study how efficiently this operator approaches the state $\Phi$. In Figure 2 we see the corresponding scatter plot for the same observable and states as in Fig.1, which shows that in *all* cases the application of this operator brings us closer to the wanted state. One can indeed prove that $d(P_{A\Phi}\Psi,\Phi)\leq
d(\Psi,\Phi)$ considering the internal product $$\begin{aligned}
\nonumber
\left\langle P_{A\Phi}\Psi,\Phi\right\rangle&=& \left\langle
\sum_{k}|\langle\varphi_{k},\Psi\rangle|
\frac{\langle\varphi_{k},\Phi\rangle}{|\langle\varphi_{k},\Phi\rangle|}
\ \varphi_{k} ,\ \sum_{r}\langle\varphi_{r},\Phi\rangle
\ \varphi_{r}\right\rangle\\ \nonumber
&=&\sum_{k}|\langle\varphi_{k},\Psi\rangle|\
|\langle\varphi_{k},\Phi\rangle|\ =
\sum_{k}|\langle\Psi,\varphi_{k}\rangle\langle\varphi_{k},\Phi\rangle|\ \\
&\geq&|\sum_{k}\langle\Psi,\varphi_{k}\rangle\langle\varphi_{k},\Phi\rangle|\
= |\langle\Psi,\Phi\rangle|\ .\end{aligned}$$ Using this inequality in the definition of distance in Eq.(\[disray1\]) we get the inequality above. As said before, if we had physical information about the phases of the expansion coefficients, we could devise a very efficient algorithm. Unfortunately we do not have experimental access to the phases, and this otherwise interesting operator will not be studied further here.
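To make the two operators concrete, here is a minimal numerical sketch (ours, not the authors' code; function names are illustrative). It assumes the ray distance $d(\Psi,\Phi)=\sqrt{1-|\langle\Psi,\Phi\rangle|^{2}}$, which is monotone decreasing in the fidelity, so the inequality just proved applies to it:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_state(n):
    """A random normalized state vector."""
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

def rand_basis(n):
    """A random orthonormal basis (columns), via QR of a complex Gaussian."""
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

def impose_moduli(basis, phi, psi):
    """T_{A,phi}: impose the moduli |<e_k, phi>|, keep the phases of psi."""
    c_psi = basis.conj().T @ psi
    c_phi = basis.conj().T @ phi
    return basis @ (np.abs(c_phi) * np.exp(1j * np.angle(c_psi)))

def impose_phases(basis, phi, psi):
    """P_{A,phi}: keep the moduli of psi, impose the phases of phi."""
    c_psi = basis.conj().T @ psi
    c_phi = basis.conj().T @ phi
    return basis @ (np.abs(c_psi) * np.exp(1j * np.angle(c_phi)))

def ray_distance(psi, phi):
    """A projective distance, monotone decreasing in the fidelity |<psi, phi>|."""
    f = min(1.0, abs(np.vdot(psi, phi)))
    return np.sqrt(1.0 - f ** 2)
```

Scatter plots like those of Figs. 1 and 2 are obtained by evaluating `ray_distance` before and after applying each operator to many random states $\Psi$.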
THE ALGORITHM FOR STATE DETERMINATION
=====================================
In this section we will investigate an algorithm for state determination that uses as physical input the knowledge provided by the complete measurement of several observables. These measurements provide the probability distributions for the eigenvalues in the unknown state $\Phi$. In other words, we assume that we know the physical imposition operators $T_{A\Phi},T_{B\Phi},T_{C\Phi}\dots$ for several observables. The algorithm basically consists of the iterative application of the physical imposition operators to an arbitrary, randomly chosen initial state $\Psi_{0}$. Applying the operator $T_{A\Phi}$ to the initial state $\Psi_{0}$, we will get closer to $\Phi$ (although not always), but a second application of the operator is useless because $T_{A\Phi}$ is idempotent. Then we use another operator for a closer approach, say $T_{B\Phi}$, and another one afterwards, until all physical information is used; then we start again with $T_{A\Phi}$. That is, we calculate the iterations $\Psi_{1},\Psi_{2},\Psi_{3}\cdots$ given by $\Psi_{n}=(\dots
T_{B\Phi}T_{A\Phi})^{n}\Psi_{0}$ and the convergence $\Psi_{n}\rightarrow\Phi$ is checked comparing the physical input, that is, the distributions associated with the observables $A,B,C,\dots$, with the corresponding distributions generated in the state $\Psi_{n}$.
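A compact numerical sketch of this iteration (ours, not the authors' code; names are illustrative) for observables given by random orthonormal bases:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_state(n):
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

def rand_basis(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

def impose(basis, moduli, psi):
    """Physical imposition operator: measured moduli, phases of psi."""
    c = basis.conj().T @ psi
    return basis @ (moduli * np.exp(1j * np.angle(c)))

def reconstruct(bases, moduli, n, sweeps=2000, tol=1e-10, restarts=20):
    """Iterate (... T_B T_A)^n from random starts, restarting on failure."""
    for _ in range(restarts):
        psi = rand_state(n)
        for _ in range(sweeps):
            for b, m in zip(bases, moduli):
                psi = impose(b, m, psi)
            # convergence check: compare the generated and input distributions
            err = max(np.max(np.abs(np.abs(b.conj().T @ psi) - m))
                      for b, m in zip(bases, moduli))
            if err < tol:
                return psi, err
    return psi, err

# example: recover a random state in N = 3 from three random observables
n = 3
phi = rand_state(n)
bases = [rand_basis(n) for _ in range(3)]
moduli = [np.abs(b.conj().T @ phi) for b in bases]
psi, err = reconstruct(bases, moduli, n)
```

The returned `psi` reproduces all input distributions to within `tol`; it is $\Phi$ itself or one of its Pauli partners.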
In order to check the efficiency of the algorithm numerically, we choose a state $\Phi$ at random and with it we generate the distributions corresponding to some observables, which we use as input in the algorithm. Calculating the distance $d(\Psi_{n},\Phi)$ we study how efficiently the algorithm recovers the generating state $\Phi$. There are cases where the algorithm converges to a state $\Phi'$ different from $\Phi$ but having the same physical distributions, that is, to a Pauli partner of $\Phi$. An interesting feature of the algorithm is that we can span the whole Hilbert space by choosing the starting states $\Psi_{0}$ randomly, and the algorithm delivers many, if not all, Pauli partners. We cannot be sure that all Pauli partners are found, because some of them could correspond to a starting state $\Psi_{0}$ belonging to a set of null measure that will not necessarily be reached in a random sampling of the Hilbert space. This seems quite unlikely, but it cannot be excluded. In this way, the algorithm presented is not only a numerical tool for state determination but also a useful tool for the theoretical investigation of the appearance of Pauli partners. An example of this is presented below.
The algorithm is very efficient; however, there are some starting states $\Psi_{0}$ for which the algorithm fails to converge. It was not surprising to find these failures because, as was suggested in Fig. 1, the physical imposition operator sometimes takes us farther away from the wanted state. The failure is signalled by the fact that the distributions used as input are not approached in successive iterations. In the case of a failure, we can simply restart the algorithm with a different initial state $\Psi_{0}$, or restart with an initial state orthogonal to the one that failed. In this last case the probability of a repeated failure is much reduced, and it is therefore a convenient choice. The appearance of a failure depends strongly on the choice of observables used to determine the state. If we use three unbiased observables we very rarely find a failure, in less than 1% of the cases; but if we use three random observables (see below), 40% of the randomly chosen starting states $\Psi_{0}$ fail, and only 10% of these fail again if we restart with an orthogonal state. In the case of four angular momentum observables in four arbitrary directions we had to restart the algorithm in some 10% of the cases. The appearance of failures also depends on the shape of the distributions: when one of the distributions is peaked, that is, the maximum value of the distribution $\rho(a_{k})$ has a large value for some $k$, the application of the corresponding imposition operator brings us close to the wanted state, as can be seen in Eq.(\[dist3\]) and in Figure 1; the algorithm then has better convergence and no failures are found. This has been confirmed in the numerical tests.
The convergence to the state $\Phi$, or to a Pauli partner, was tested numerically in several Hilbert space dimensions and for different choices of observables. These choices were random in some cases, that is, their associated orthonormal bases were randomly chosen, and in other cases we used physically relevant observables like angular momentum or position and momentum. Position and momentum observables are usually represented by unbounded operators in infinite dimensional Hilbert spaces; however, there are also realizations of these observables in finite dimensions, for instance in a cyclic lattice, where they are represented by unbiased operators [@findim1; @findim2]. In general the operators $T_{A\Phi},\ T_{B\Phi},\cdots $ do not commute, and the iterates of $(\cdots T_{B\Phi}T_{A\Phi})^{n}$ and $(\cdots T_{A\Phi}T_{B\Phi})^{n}$ are not necessarily equal. The algorithm was tested with several different choices in the ordering of the noncommuting physical imposition operators and also with random ordering, and it turned out that the convergence of the algorithm is not much affected by the different orderings. The algorithm is robust under the noncommutativity of the observables.
The physical imposition operator $T_{A\Phi}$ is idempotent, so it is useless to apply it more than once (successively) in an attempt to approach the state $\Phi$. Clearly, the complete measurement of just one observable is not sufficient to determine the state, except in the trivial case when the state happens to be equal to one of the eigenvectors of the operator. Therefore we consider the information provided by *two* observables $A$ and $B$ (for two unbiased observables, like $X$ and $P$, this is precisely the Pauli problem). We then studied the convergence of $\Psi_{n}=(T_{B\Phi}T_{A\Phi})^{n}\Psi_{0}$ towards $\Phi$, or to a Pauli partner, for an arbitrary $\Psi_{0}$. In a three dimensional Hilbert space, $N=3$, we applied the algorithm in several cases: for $A$ and $B$ random, unbiased (that is, of the type $X$ and $P$), and also for the angular momentum operators $J_{x},J_{y}$. As expected, in all these cases the algorithm returned several Pauli partners. Choosing the starting state $\Psi_{0}$ randomly (uniformly distributed in the Hilbert space), we found that all Pauli partners are accessed with similar frequency. As was mentioned before, we cannot be sure that the algorithm will deliver *all* partners; however, we may be confident that it does, because in one particular case, where the number of partners can be calculated exactly, the algorithm returns them all. This particular case is the so-called pathological case, where uniform distributions for $M$ observables correspond to $N(N+1-M)$ partners. For several combinations of $N$ and $M$, the algorithm delivered all partners.
Next we studied the case with *three* operators providing physical information to determine the state (also with Hilbert space dimension $N=3$). We then studied the iteration $\Psi_{n}=(T_{A\Phi}T_{B\Phi}T_{C\Phi})^{n}\Psi_{0}$. When two of the observables are unbiased (of the type $X$ and $P$) we always obtained a unique state, regardless of the choice of the third operator: either unbiased, of the type $X+P$ (biased to the first two), or random. This means that the information provided by two unbiased observables *almost* fixes the state, and any additional information is sufficient to find a unique state. However, we know that in the so-called pathological cases we must find Pauli partners, and the algorithm does indeed find them. In these pathological cases, the distributions corresponding to three unbiased operators are all uniform (that is, the generating state $\Phi$ is unbiased to all three bases). Spanning the Hilbert space by choosing $\Psi_{0}$ randomly as a starting state for the algorithm, we converge to all $N(N-2)=3$ Pauli partners with almost equal probability. The pathological case was also studied with two unbiased observables with uniform distributions. In this case the algorithm also delivered all $N(N-1)=6$ Pauli partners with similar probability.
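The pathological case with two unbiased bases can be explored numerically. The sketch below (ours; the construction of an exact partner from a third mutually unbiased basis is a standard fact, not taken from the paper) uses the computational basis and its Fourier transform for $N=3$, with uniform target distributions in both:

```python
import numpy as np

n = 3
w = np.exp(2j * np.pi / n)

E = np.eye(n, dtype=complex)                                 # "X-type" basis
F = w ** np.outer(np.arange(n), np.arange(n)) / np.sqrt(n)   # Fourier ("P-type") basis
uniform = np.full(n, 1 / np.sqrt(n))                         # flat moduli in both bases

def impose(basis, moduli, psi):
    c = basis.conj().T @ psi
    return basis @ (moduli * np.exp(1j * np.angle(c)))

def err(psi):
    """Largest deviation of psi's moduli from the flat target, in either basis."""
    return max(np.max(np.abs(np.abs(B.conj().T @ psi) - uniform)) for B in (E, F))

# an exact solution: a vector of a third basis mutually unbiased to both E and F
chi = np.array([1, w, w]) / np.sqrt(n)

# random starts converge to states reproducing the two flat distributions;
# per the text, different starts can land on different Pauli partners
rng = np.random.default_rng(2)
found = []
for _ in range(30):
    psi = rng.normal(size=n) + 1j * rng.normal(size=n)
    psi /= np.linalg.norm(psi)
    for _ in range(3000):
        psi = impose(F, uniform, impose(E, uniform, psi))
    if err(psi) < 1e-8:
        found.append(psi)
```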
For biased operators, like the angular momentum operators $J_{x},J_{y},J_{z}$, and also for random $A, B, C$, we sometimes found Pauli partners, showing that, although we have more equations (six) than unknowns (four), the nonlinearity of the problem may cause non-unique solutions. The appearance of Pauli partners in the angular momentum case is consistent with the result reported by Amiet and Weigert [@wei4]. An inspection of the numerical results for these Pauli partners revealed a symmetry that could also be proved analytically: given a state $\phi$ (in the basis of $J_{z}$) with the corresponding distributions for the observables $J_{x},J_{y},J_{z}$, $$\label{partn}
\phi= \left(\begin{array}{c}
a \\
b \\
c\\
\end{array}\right) \ ,$$ (it is always possible to fix $b$ real and nonnegative) when $b>0$ and $(a^{\ast}+c)\neq 0$, if any one of the following conditions is satisfied: $$\begin{aligned}
\Re(a) &=& -\Re(c)\ , \\
\Im(a) &=& \Im(c)\ , \\
|a|&=& |c|\ , \\
\Im(ac) &=& 0\ ,\end{aligned}$$ then there is a Pauli partner $\phi'$ $$\label{partn1}
\phi'= \left(\begin{array}{c}
a' \\
b\\
c'\\
\end{array}\right)\ ,$$ where $$\label{partn2}
a'= a^{\ast}\frac{(a+c^{\ast})}{(a^{\ast}+c)}\ \ ,\
c'= c^{\ast}\frac{(a^{\ast}+c)}{(a+c^{\ast})} \ .$$ If $b=0$ we can make $a$ real and positive and then $a'=a\ ,\
c'=c^{\ast}$, and finally if $(a^{\ast}+c)= 0 $ then $c'=-a'^{\ast}$, where $a'$ can take three values: $-a\ ,\
ia^{\ast}\ , \ -ia^{\ast}$. Spanning the Hilbert space with generator states $\phi$ randomly chosen, in some $1\%$ of the cases the algorithm returned the state $\phi$ and a Pauli partner $\phi'$ covering all possibilities mentioned above. Notice that the ability of the algorithm to detect Pauli partners is due to the limited precision of the numerical procedure. Among all possible states $\phi$ of the system, only a few of them have Pauli partners; more precisely, the set of states with Pauli partners has null measure, and if we had infinite precision we would never find partners by random sampling of the Hilbert space. Because of the limited precision of the algorithm, all points in the Hilbert space within a small neighbourhood are equivalent, and therefore sets of points with null measure can be accessed in a random sampling of the Hilbert space. Indeed, we found that tightening the convergence conditions requires more tries in order to detect partners. Usually the limited precision is considered a drawback; here, however, it is an advantage that allows us to detect sets of null measure.
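This symmetry is easy to verify numerically. A sketch (ours) with the spin-1 matrices, taking the admissible choice $a=1$, $b=1$, $c=i$, which satisfies the condition $|a|=|c|$:

```python
import numpy as np

# spin-1 angular momentum matrices in the J_z eigenbasis
s = 1 / np.sqrt(2)
Jx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Jy = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]])
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def dist(op, psi):
    """Probability distribution of the eigenvalues of op in the state psi."""
    _, vecs = np.linalg.eigh(op)
    return np.abs(vecs.conj().T @ psi) ** 2

a, b, c = 1.0, 1.0, 1.0j                 # b real and positive, |a| = |c|
phi = np.array([a, b, c]) / np.sqrt(3)

# the Pauli partner given by the symmetry above
ap = np.conj(a) * (a + np.conj(c)) / (np.conj(a) + c)
cp = np.conj(c) * (np.conj(a) + c) / (a + np.conj(c))
phi2 = np.array([ap, b, cp]) / np.sqrt(3)
```

The two states give identical distributions for $J_{x}$, $J_{y}$ and $J_{z}$, yet they are different rays in the Hilbert space.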
With the information provided by the complete measurement of *four* operators $A, B, C, D$ we iterated $\Psi_{n}=(T_{A\Phi}T_{B\Phi}T_{C\Phi}T_{D\Phi})^{n}\Psi_{0}$ and found unique states, not only when two of them are unbiased (consistent with the result obtained with three operators), but also in the case of random operators or angular momentum in *arbitrary* directions $J_{r},J_{s},J_{t},J_{u}$. Of course, in this case of excessive physical information we could ignore one of the observables and determine the state with only three of them. However, not all the Pauli partners found with three observables will have the correct distribution for the fourth one, and therefore the use of all observables may be needed for a unique determination of the state. In this case the number of equations, eight if $N=3$, uniquely determines the four unknowns in spite of the nonlinearity. Notice that the convergence of the algorithm in this case is not trivial. It is true that we are using much more information than what is needed (except for the pathological cases, which can only appear if $N>3$), but we must consider that we are using this excessive information in an iterative and approximative algorithm, and therefore the consistency of the data in the final state does not necessarily cooperate in the iterations. The fact that the over-determined algorithm converges is a sign of its robustness.
The algorithm converges in a very efficient way, close to exponential, as we see in Figure 3, where the distance to the converging state is given as a function of the number of iterations for the case of three unbiased operators with $N=3$. This is a typical example showing the exponential convergence, where the distance to the solution is divided by 4.5 in each iteration. However, the speed of convergence, that is, the slope in the figure, is not always the same and depends on the operators used and on the generating state $\Phi$. For higher Hilbert space dimensions we obtained similar behaviour. For three operators with physical relevance, like angular momentum or unbiased operators, the distance to the target state was divided by 2-3 in each iteration in Hilbert spaces with dimensions up to 20. In the fastest case found, the distance was divided by 126 in each iteration, approaching the solution within $10^{-7}$ after three iterations. With random operators the approach was not always so fast, and in some unfavourable cases up to 100 iterations were required (this took only a fraction of a second on an old PC).
CONCLUSION
==========
In this work we defined the *Physical Imposition Operator* $T_{A\Phi}$ that imposes on any state $\Psi$ the same distribution that the eigenvalues of an observable $A$ have in a state $\Phi$. For this operator we do not need to know the state $\Phi$ itself, only the probability distribution for the observable $A$ in this state, which can be obtained from a complete measurement. Considering two or more observables, we applied their corresponding physical imposition operators iteratively to an arbitrary initial state $\Psi_{0}$ and obtained a succession of states $\Psi_{n}$ that converges to the unknown state $\Phi$, or to a Pauli partner having the same distributions for the observables. Varying the initial state we can find the Pauli partners, but we cannot be sure that all of them are obtained, although this is very likely because, in the cases where all the Pauli partners are known exactly, the algorithm finds them all; it therefore becomes a useful tool for the investigation of Pauli partners. This algorithm for state determination was tested numerically for different sets of observables and different dimensions of the Hilbert space, and it turned out to be quite an efficient and robust way to determine a quantum state using complete measurements of several observables.
Acknowledgements
================
We would like to thank H. de Pascuale for his help on mathematical questions. This work received partial support from “Consejo Nacional de Investigaciones Cient[í]{}ficas y T[é]{}cnicas” (CONICET), Argentina. This work, part of the PhD thesis of DMG, was financed by a scholarship granted by CONICET.
[99]{}

W. Pauli, “Quantentheorie,” Handbuch der Physik **24** (1933).

S. Weigert, “Pauli problem for a spin of arbitrary length: A simple method to determine its wave function,” Phys. Rev. A **45**, 7688-7696 (1992).

S. Weigert, “How to determine a quantum state by measurements: The Pauli problem for a particle with arbitrary potential,” Phys. Rev. A **53**, 2078-2083 (1996).

M. Keyl, “Fundamentals of quantum information theory,” Phys. Rep. **369**, 431-548 (2002).

I. D. Ivanovic, “Geometrical description of quantum state determination,” J. Phys. A **14**, 3241-3245 (1981).

W. K. Wootters and B. D. Fields, “Optimal state-determination by mutually unbiased measurements,” Ann. Phys. **191**, 363-381 (1989).

S. Bandyopadhyay, P. Boykin, V. Roychowdhury, and F. Vatan, “A new proof for the existence of mutually unbiased bases,” Algorithmica **34**, 512 (2002); arXiv:quant-ph/0103162.

A. Majtey, P. W. Lamberti, M. T. Martin, and A. Plastino, “Wootters’ distance revisited: a new distinguishability criterium,” Phys. Lett. A **32**, 413-419 (2005); arXiv:quant-ph/0408082.

E. Lages Lima, *Espaços M[é]{}tricos*, Projeto Euclides, IMPA, ISBN 85-244-0158-3, 3rd ed., Rio de Janeiro, 2003.

A. C. de la Torre and D. Goyeneche, “Quantum mechanics in finite dimensional Hilbert space,” Am. J. Phys. **71**, 49-54 (2003).

A. C. de la Torre, H. M[á]{}rtin, and D. Goyeneche, “Quantum diffusion on a cyclic one dimensional lattice,” Phys. Rev. E **68**, 031103 (2003).

J. P. Amiet and S. Weigert, “Reconstructing a pure state of a spin s through three Stern-Gerlach measurements,” J. Phys. A: Math. Gen. **32**, 2777-2784 (1999).
FIGURE CAPTIONS
===============
FIGURE 1. Scatter plot of the distances $d(\Psi,\Phi)$ and $d(T_{A\Phi}\Psi,\Phi)$ for 8000 random initial states $\Psi$. Points below the diagonal indicate cases where $T_{A\Phi}$ brings $\Psi$ closer to $\Phi$.\
\
\
FIGURE 2. Scatter plot of the distances $d(\Psi,\Phi)$ and $d(P_{A\Phi}\Psi,\Phi)$ for the same operator and states as in Figure 1. Notice that the Phase Imposition Operator $P_{A\Phi}$ always approaches the state $\Phi$.\
\
\
FIGURE 3. Distance from the state $\Psi_{n}=(T_{A\Phi}T_{B\Phi}T_{C\Phi})^{n}\Psi_{0}=T^{n}\Psi_{0}$ to the state $\Phi$ after $n$ iterations, showing exponential convergence of the algorithm for $A,B,C$ unbiased operators in a three dimensional Hilbert space.
---
author:
- |
Liang-Jian Zou$^{a}$, Qing-Qi Zheng$^{a,b}$, H. Q. Lin$^{c}$\
[*$^a$Institute of Solid State Physics, Academia Sinica, P.O.Box 1129, Hefei 230031, China*]{}\
[*$^b$State Key Lab of Magnetism, Institute of Physics, Academia Sinica, Beijing, China*]{}\
[*$^c$Department of Physics, Chinese University of Hong Kong, Shatin, N.T. Hong Kong, China* ]{}\
title: |
**Ground State of the Double Exchange Model\
**
---
[**Abstract**]{}
We investigate the electronic correlation effect on the ground-state properties of the double exchange model for manganites by using a semiclassical approach and the slave-boson technique. It is shown that the magnetic field has an effect on the canted angle between manganese spins similar to that of the doping concentration, and that the canted angle depends only weakly on the Coulomb interaction. The possibility of phase separation in the present model is also discussed. In the slave-boson saddle-point approximation, in the ferromagnetic metallic regime, the magnetization and the Curie temperature, as functions of the doping concentration, exhibit maxima near 1/3 doping. These results agree with experimental data and suggest that the electronic correlation plays an important role in understanding the ground-state properties of manganites.\
PACS No. 75.10.-b, 75.25.+z, 75.70.Pa
It is essential to clarify the ground state and the magnetic phase diagram in order to elucidate the microscopic mechanism of the colossal magnetoresistance (CMR) in lanthanum manganites. The ground state and the magnetic phase diagram of lanthanum manganites at low doping concentration and low temperature are still controversial, though considerable effort \[1-8\] has been devoted to them. In 1950, Zener \[5\] proposed a double exchange (DE) model to explain the electrical conduction and the ferromagnetism (FM) of doped lanthanum manganites. Later Anderson and Hasegawa \[6\] derived the DE energy for a pair of Mn ions and showed that in such a system the DE interaction tends to align the spins of Mn ions parallel, and the DE energy is proportional to cos($\theta_{ij}/$2), not to cos($\theta_{ij}$) as in the Heisenberg model; here $\theta_{ij}$ denotes the angle between the spins [**S**]{}$_{i}$ and [**S**]{}$_{j}$. In 1960, De Gennes \[7\] generalized their results to the case of finite doping concentration. He assumed that the total DE energy is proportional to $cos(\theta_{ij}/2)$, and showed that the Mn spins at finite doping are ferromagnetically ordered but canted by an angle $\theta$ that depends on the carrier concentration below a critical concentration x$_{c}$. Since then the concept of a canted ferromagnet or antiferromagnet has been accepted, though it was not confirmed definitively by early experiments \[8\]. In recent experiments some researchers reported a canted structure \[4,9\], but negative results were also reported. Schiffer et al. \[1\] reported that at low doping (0$<x<$0.2), La$_{1-x}$Ca$_{x}$MnO$_{3}$ is ferromagnetically ordered, whereas Jonker and Van Santen’s early report \[2\] suggested an antiferromagnetic order. Martin et al. \[4\] showed that La$_{1-x}$Sr$_{x}$MnO$_{3}$ is spin-canted for 0$<x<$0.1, and ferromagnetically ordered for 0.1$<x<$0.2. These reports on the low-temperature, low-doping magnetic phase diagram do not agree with each other.
Thus it is necessary to study the DE model in detail to clarify the magnetic structure in the low-doping regime.
Both the early and the recent experiments \[1-4\] have shown that in La$_{1-x}$R$_{x}$MnO$_{3}$ (R=Ca, Sr), the magnetization and the Curie temperature exhibit maxima around x=1/3. Theoretically, these observations have not been explained satisfactorily. Varma \[10\] estimated that the maximum of the Curie temperature appears at 1/2 doping, and Xing and Shen \[11\] also showed that the zero-temperature magnetization reaches its maximum near 1/2 doping. Another interesting problem is how the magnetic field affects the magnetic structure: since the resistivity of doped lanthanum manganites changes by several orders of magnitude under an external magnetic field, such a huge change might be related to a variation of the magnetic structure modulated by the field. Furthermore, the role of electronic correlation was only lightly taken into account in previous studies \[5-8\], since the primary DE model includes only the Hund’s coupling between conduction electrons and the core spins, not the Coulomb interaction among conduction electrons. A clear picture of the ground-state magnetic properties is needed in order to have a coherent understanding of these phenomena in manganites. In the present paper, we first derive the DE energy in the presence of the Coulomb interaction and the magnetic field, and then discuss the doping dependence of the mean-field ground-state energy, the magnetization and the Curie temperature in the ferromagnetic metallic regime in the strong correlation limit.\
[**I. Diagonalization in Momentum Space**]{}.
The electronic states in doped lanthanum manganites have been described in many papers \[5-8,12\]. In the presence of the Coulomb interaction and a magnetic field, the model Hamiltonian can be written as the sum of two parts, the double exchange interaction $H_{DE}$ and the superexchange interaction $H_{m}$: $$H=H_{DE}+H_{m}$$ $$H_{DE} =\sum_{<ij>\sigma} t_{ij} d^{\dag}_{i \sigma}d_{j \sigma}
+\frac{U}{2} \sum_{i\sigma} n_{i\sigma}n_{i\bar{\sigma}}
- J_{H} \sum_{i\mu \nu} {\bf S}_{i} \cdot d^{\dag}_{i \mu}
\mbox{\boldmath $\sigma$}_{\mu \nu} d_{i\nu}$$ $$H_{m} = - g\mu_{B}B\sum_{i} S^{z}_{i}
+\sum_{<ij>}{\it A}_{ij} {\bf S}_{i} \cdot {\bf S}_{j} ~,$$ where the three d electrons of the Mn ions are in the t$_{2g}$ state at site R$_{i}$ and form a localized core spin ${\bf S}_{i}$, d$^{\dag}_{i \sigma}$ creates a mobile electron in the e$_{g}$ band at site R$_{i}$ with spin $\sigma$, t$_{ij}$ denotes the effective hopping matrix element of a mobile electron between nearest-neighbor sites, U denotes the on-site Coulomb interaction among the mobile electrons, and $J_{H}$ represents the Hund’s coupling between the local spins and the mobile electrons, with $J_{H} \gg zt/S$ as required by the DE mechanism. In Eqs.(1) and (2), $< \cdots >$ indicates that only the nearest neighbor interaction is considered. In Eq.(2), g$\mu_{B}$ represents the effective magnetic moment of the local spin, [**B**]{} represents the external magnetic field, and the last term represents the superexchange interaction between Mn ions; [*A*]{}$_{ij}$ denotes the superexchange constant, which is negative (-$A^{'}$) for ${\bf R}_{i}$ and ${\bf R}_{j}$ in the [*ac*]{} plane and positive ($A$) for ${\bf R}_{i}$ and ${\bf R}_{j}$ along the [*b*]{}-axis. The Jahn-Teller effect and the electron-phonon interaction are not included here.
To start, we assume ${\bf S}_{i}$ are classical spins in this section, which corresponds to the following substitution: $$S^{z}_{i} = S cos(\theta),~~~ S^{\pm}_{i}=S e^{\pm i{\bf Q \cdot R}_{i}}
sin(\theta)$$ where [**Q**]{}=(0,$\pi/b$,0), $\theta$ is the canted angle and 2$\theta$ is the angle between two spins. For lightly doped $LaMnO_{3}$ the carriers are holes; transforming from the electron representation to the hole representation ($h^{\dag}$), the model Hamiltonian can then be expressed in momentum space: $$\begin{aligned}
H &=& \sum_{k}[-g\mu_{B}B S cos(\theta)
- 4{\it A^{'}}S^{2} + 2{\it A}S^{2}cos(2\theta)] \nonumber \\
&+& \sum_{k\sigma} [ ( -\epsilon_{k} + U<n_{\bar{\sigma}}>
+\sigma J_{H}S )h^{\dag}_{k\sigma}h_{k\sigma}
+ J_{H}S sin(\theta) ( h^{\dag}_{k+Q\uparrow} h_{k\downarrow}
+ h^{\dag}_{k\downarrow} h_{k+Q\uparrow} ) ]
\label{H_k}\end{aligned}$$ where $\epsilon_{k\sigma}=zt\gamma_{k}$ denotes the dispersion of the holes and $\gamma_{k} = (1/3) ( cos k_x + cos k_y + cos k_z ) $ is the structure factor. In this section the Coulomb interaction is treated in the Hartree-Fock approximation. Diagonalization of the hole part of Eq. (4) yields two subbands: $$E_{k\sigma} = U<n_{\bar{\sigma}}> \pm
\sqrt{\epsilon_{k\sigma}^{2}+(J_H S)^{2}+2J_H S\epsilon_{k\sigma}cos(\theta)}
\label{E_k}$$ A similar expression was obtained by Dimashko et al. \[13\] to address the phase separation issue for high-temperature superconductivity in the limit 2zt/J$_{H}$S $>>$1 with U=0. Inoue et al. \[14\] also obtained a similar expression in the DE limit and suggested that a spiral state may be more stable than the canted state in La$_{1-x}$R$_{x}$MnO$_{3}$, but they did not consider the effect of the Coulomb interaction on the ground state or the possibility of phase separation. Later we will show that the Coulomb correlation cannot be neglected.
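The two subbands can be cross-checked against a direct $2\times2$ diagonalization. The sketch below (ours) assumes that in the canted frame the diagonal Hund term carries a factor $\pm\cos\theta$, that $\epsilon_{k+Q}=-\epsilon_{k}$, and that $\langle n_{\uparrow}\rangle=\langle n_{\downarrow}\rangle$; the numerical values are illustrative, not from the paper:

```python
import numpy as np

# illustrative values: eps = z t gamma_k, JS = J_H S, Un = U <n>
eps, JS, theta, Un = 0.3, 5.0, 0.7, 0.2

# 2x2 block coupling (k, down) and (k+Q, up), under the stated assumptions
h = np.array([[Un - eps - JS * np.cos(theta), JS * np.sin(theta)],
              [JS * np.sin(theta), Un + eps + JS * np.cos(theta)]])
ev = np.linalg.eigvalsh(h)        # eigenvalues in ascending order

# Eq. (5): E = U<n> -/+ sqrt(eps^2 + (J_H S)^2 + 2 J_H S eps cos(theta))
root = np.sqrt(eps**2 + JS**2 + 2 * JS * eps * np.cos(theta))
expected = np.array([Un - root, Un + root])
```

Completing the square inside the root, $(\epsilon+J_{H}S\cos\theta)^{2}+(J_{H}S\sin\theta)^{2}$, reproduces the argument of the square root in Eq. (5) term by term.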
To explore the ground-state properties of lanthanum manganites, we are only interested in the lower subband of (5). In the DE model, zt/J$_{H}$S is a small quantity and we can expand E$_{k}$ to linear order in zt/J$_{H}$S. At zero temperature, the ground-state energy of the system with uniform doping concentration x is: $$E_{G} = NS[-g\mu_{B}Bcos(\theta)-4A^{'}S+2AScos(2\theta)]
+ \sum_{k\sigma}^{ k_{F}}
[ U<n_{\bar{\sigma}}>-J_{H}S-\epsilon_{k\sigma}cos(\theta) ] ~,$$ where $N$ is the total number of core spins. The sum of the mean occupation over spin and momentum gives the carrier concentration, i.e., $\sum_{k \sigma}^{k_F} <n_{k\sigma}>$=x, where $k_{F}$ is the Fermi wave vector.
Minimizing the total energy with respect to $\theta$ yields the canted angle, $$cos(\theta)=\frac{g\mu_{B}BS + 2zt \alpha}{8AS^{2}}~,~~~~
\alpha=\frac{2}{N} \sum_{k}^{k_{F}} \gamma(k) ~.$$ For small doping concentration, $x \ll 1$, $\alpha$ depends on the doping through $$k_F^3 = 3 \pi^2 x ~,$$ $$\alpha = x [ 1- (3\pi^{2}x)^{2/3}/10] ~,$$ in a three-dimensional isotropic lattice system. This result differs slightly from that of \[7\], due to the lattice effect. In the absence of an external magnetic field ($B=0$) and for very small doping, $\alpha \approx x$, the critical hole density at which the system evolves from a canted antiferromagnet into a ferromagnet is x$_{c}=4 A S^2 / zt$; this result is similar to that of Ref. \[7\]. Both the present result (in the limit zt/$J_{H}S$ $<<1$) and that of \[13\] (in the limit zt/J$_{H}S$ $>>1$) show that the ground state is antiferromagnetic in the zero-doping limit when there is no external magnetic field, so it is reasonable to expect that the ground state is always antiferromagnetic for all values of zt/J$_{H}S$ in pure lanthanum manganites.
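The canted-to-ferromagnetic crossover can be illustrated numerically with Eq. (7) at $B=0$. The parameter values below are the ones quoted later in the text ($4AS^{2}=4.84$ meV, $2zt=0.5$ eV), and the clamping of $\cos\theta$ at 1 is our shorthand for the fully ferromagnetic solution:

```python
import numpy as np

zt = 0.25          # eV, from 2zt = 0.5 eV
AS2 = 1.21e-3      # eV, A S^2 from 4AS^2 = 4.84 meV
B = 0.0            # no external field

def cos_theta(x):
    """cos(theta) from Eq. (7) at B = 0, with alpha = x[1 - (3 pi^2 x)^{2/3}/10]."""
    alpha = x * (1 - (3 * np.pi**2 * x) ** (2 / 3) / 10)
    return min(1.0, 2 * zt * alpha / (8 * AS2))   # clamp: ferromagnetic regime

xc = 4 * AS2 / zt   # critical doping in the alpha ~ x approximation
```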
The present theory also contains some further interesting results. First, the influence of the external magnetic field on the magnetic structure can be discussed for nearly undoped lanthanum manganites (x $\approx$ 0): the effect of the magnetic field is similar to that of doping, and the cosine of the canted angle increases linearly with the field. At the critical value B$_{c}$ =8AS/g$\mu_{B}$ the external magnetic field exceeds the superexchange field and all spins tend to align parallel; the ferromagnetic alignment of the local spins favors the motion of holes, so the system may exhibit a large decrease in resistivity. However, the critical field may be as high as hundreds of tesla, so it is unlikely that a metal-insulator transition induced by the external magnetic field causes the CMR effect. Second, in the Hartree-Fock approximation, expanding E$_{k\sigma}$ (see (5)) to second order in (2zt/J$_{H}$S), one finds that the canted angle depends only weakly on the Coulomb interaction U, so including the Coulomb correlation at the mean-field level does not change the canted angle significantly. This may be attributed to the fact that the Hartree-Fock treatment of the electronic correlation is rather crude.
One conclusion of the above discussion is that manganites with a uniform hole concentration are spin-canted at low doping. However, Schiffer [*et al.*]{}’s report \[1\] on the low-doping phase diagram suggests ferromagnetic ordering. There are two possible reasons: first, the oxygen content in La$_{1-x}$Ca$_{x}$MnO$_{3+\delta}$ may not be exactly stoichiometric ($\delta \neq 0$), so the ferromagnetic component arising from the DE interaction plays a role; second, phase separation might take place, with holes aggregating into ferromagnetic droplets, so a ferromagnetic ground state emerges. In the following, we briefly discuss the possibility of phase separation in the DE model (2zt/J$_{H}$S $<<$1), in contrast to the usual [*s-f*]{} model (2zt/J$_{H}$S $>>$1). After the holes aggregate into droplets out of the antiferromagnetic background, in the absence of an external magnetic field, the energy density in the hole-rich phase at hole density x is $$e(x) = S[-4A^{'}S+2AScos(2\theta)]
+ [\frac{Ux^{2}}{2}-xJ_{H}S- 2zt\alpha cos(\theta)] ~,$$ leaving the hole-free antiferromagnetic background with energy density $e(0) =-2S^{2}[2A^{'}+A]$ (here the magnetic field B=0). Let $ n_{h} $ be the total number of holes; then the number of sites occupied by the hole-rich phase is n$_{h}$/x, and N is the number of sites of the whole system. The total energy of the two-phase state is then: $$E(x)=-2NS^{2}(2A^{'}+A)-2n_{h}J_{H}S+n_{h}[\frac{4AS^{2}cos^{2}(\theta)}{x}
+\frac{Ux}{2}- \frac{4zt\alpha cos(\theta)}{x} ]$$ For very low hole concentration, $\alpha \approx x$, one has: $$E(x) = \left\{ \begin{array}{ll}
{\it const}+n_{h}(\frac{U}{2}-\frac{(zt)^{2}}{AS^{2}}) x & x < x_{c} \\
{\it const}+n_{h}(\frac{4AS^{2}}{x}+\frac{Ux}{2}) & x \geq x_{c}
\end{array} \right.$$ One finds that a strong on-site Coulomb interaction may prevent phase separation. However, if U is smaller than the critical value U$_{c}$, $$U_{c} = \frac{(2zt)^{2}}{AS^{2}}$$ the two-phase energy has a minimum at the density $$x_{0}=(\frac{8AS^{2}}{U})^{1/2} ~,$$ so phase separation into ferromagnetic droplets takes place at sufficiently low density $x < x_{0}$. When $U>U_c$, E(x) is monotonic \[E’(x)>0\], so there is no phase separation at all. For typical parameters in $La_{1-x}Ca_{x}MnO_{3}$, $4AS^{2}=4.84$ meV \[15\], and from electronic structure calculations with the local density functional technique we find that 2zt=0.5 eV; therefore $x_0 \approx 0.007$, which is much smaller than the critical concentration $x_c (\approx 0.1)$. This may explain the experimental observation of Ref. 1. Further experiments are expected.\
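A quick numerical cross-check of the droplet energetics (ours). It verifies that $4AS^{2}/x + Ux/2$ is minimized at $x_{0}=(8AS^{2}/U)^{1/2}$, and notes that the quoted $x_{0}\approx 0.007$ follows if one evaluates $x_{0}$ at $U=U_{c}$, which is our reading of the text:

```python
import numpy as np

AS2 = 1.21e-3            # eV, A S^2 from 4AS^2 = 4.84 meV [15]
two_zt = 0.5             # eV
Uc = two_zt**2 / AS2     # critical Coulomb repulsion U_c = (2zt)^2 / (A S^2)

def e_droplet(x, U):
    """x-dependent part of the two-phase energy per hole, x >= x_c branch."""
    return 4 * AS2 / x + U * x / 2

x0 = np.sqrt(8 * AS2 / Uc)     # analytic minimum, evaluated at U = U_c (assumption)

# locate the minimum on a fine grid and compare with the analytic result
xs = np.linspace(0.5 * x0, 2 * x0, 20001)
x_min = xs[np.argmin(e_droplet(xs, Uc))]
```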
[**II. A Mean-Field Solution.**]{}
In the $La_{1-x}R_{x} MnO_{3}$ system, there exist Mn$^{+3}$ and Mn$^{+4}$ ions. Due to the strong Hund’s coupling and Coulomb interactions \[16\], Mn$^{+2}$ ions are excluded, i.e., double occupancy in the $e_{g}$ orbital is prohibited. The hopping integral $t$ is far smaller than the on-site Coulomb interaction and the Hund’s coupling, so it is reasonable to take U as infinite to exclude the appearance of Mn$^{+2}$ in manganites, i.e., double occupation. In the limit of large Coulomb interaction, the constraint of no double occupancy at site $R_{i}$ can be enforced by introducing auxiliary fermions \[17\], f$_{i\sigma}$, and bosons, b$_{i}$, where $f^{\dag}_{i \sigma}$ creates a slave fermion with spin $\sigma$ when site $R_{i}$ is occupied, while $b^{\dag}_{i}$ creates a boson (hole) at R$_{i}$ when it is unoccupied. Thus $d_{i\sigma}=f_{i\sigma}b^{\dag}_{i}$ and the model Hamiltonian can be rewritten as: $$H =\sum_{<ij>\sigma} t_{ij} f^{\dag}_{i \sigma}f_{j\sigma}b_{i}b^{\dag}_{j}
-J_{H} \sum_{i\mu\nu} {\bf S}_{i} \cdot f^{\dag}_{i \mu}
{\bf \sigma}_{\mu \nu} f_{i\nu}
+\sum_{<ij>}{\it A}_{ij} {\bf S}_{i} \cdot {\bf S}_{j}
+\sum_{i} \epsilon_{d} (\sum_{\sigma}f^{\dag}_{i \sigma}f_{i \sigma}+
b^{\dag}_{i}b_{i}-1) ~$$ where $\epsilon_{d}$ is the energy shift of the d-electron with respect to the original energy level; the other parameters are the same as in Eq. (1).
In the static (or saddle-point) approximation, the boson field is replaced by its mean value and assumed to be independent of R$_{i}$, $<b^{\dag}_{i}>=<b_{i}>=b^{1/2}$, and one can obtain the mean-field equations by taking derivatives with respect to $\epsilon_{d}$ and $b$: $$\sum_{\sigma} <f^{\dag}_{i \sigma}f_{i \sigma}> =1- b ~$$ $$\epsilon_{d} = - 2 t \sum_{\delta} <f^{\dag}_{i \sigma}f_{i+\delta \sigma}> ~$$ Physically, b gives the mean carrier (hole) concentration on every site (see Eq.(13)). If the localization effect of the carriers is neglected, $b$ corresponds to the doping concentration, $x$. Since the spin components are coupled to the carrier concentration, and the spin-dependent energy must be included in the mean value of the fermion propagator, the spin configuration and the carrier concentration have to be determined self-consistently.
The mean values in Eqs.(13) and (14) can be obtained from the fermion propagators, $G_{\sigma}(ij;\omega)$: $$G_{\sigma}(ij;\omega) =\frac{1}{N}\sum_{k}
\frac{e^{i{\bf k} \cdot ({\bf R}_{i}-{\bf R}_{j})}}
{\omega-\epsilon_{d}-2ztb\gamma_{k}+\sigma J_{H} S^{z}_{Q}}$$ where $S^{z}_{Q}$ denotes the $z$-component of the spin with wave vector ${\bf Q}$: ${\bf Q}=0$ corresponds to ferromagnetic order, ${\bf Q}={\bf \pi}$ to antiferromagnetic order, and values between 0 and $\pi$ to canted structures. Then the self-consistent equations at zero temperature are: $$1-b=-\frac{1}{\pi N}
\sum_{k\sigma} \int^{\epsilon_{F}} d\omega {\it Im}
\frac{1}{\omega+i\eta-\epsilon_{d}-2ztb\gamma_{k}+\sigma J_{H} S^{z}_{Q}} ~$$ and: $$\epsilon_{d}= \frac{4zt}{\pi N} \sum_{k\sigma} \gamma_{k}
\int^{\epsilon_{F}} d\omega {\it Im}
\frac{1}{\omega+i\eta-\epsilon_{d}-2ztb\gamma_{k}+\sigma J_{H} S^{z}_{Q}},$$ where $\epsilon_{F}$ is the Fermi energy. Accordingly, we can obtain the mean value $<S^{z}_{Q}>$, the energy shift $\epsilon_{d}$, and the ground-state energy E$_{g}$ at doping concentration $b (=x)$ and zero temperature.
In the present section we are interested in the ferromagnetic metallic regime of the La$_{1-x}$Ca$_{x}$MnO$_{3}$ (0.2$<x<0.5$) system, where the $z$-component of the spin, ${\it S}^{z}$, is the same at all sites and is independent of the wave vector ${\bf Q}$. In the ferromagnetic metallic regime the carriers are completely spin-polarized due to the strong Hund’s coupling, so the density of states of the fermions may take the simple form: $${\rho} (\epsilon) = \left\{ \begin{array}{ll}
\frac{1}{2bD} & |\epsilon-\epsilon_{d}+ J_{H}<S^{z}>| < bD
\\ 0 & |\epsilon-\epsilon_{d}+J_{H}<S^{z}>| >bD \end{array}
\right.$$ where $2bD$ is the fermion bandwidth. The solutions of the self-consistent mean-field equations give the energy shift, $\epsilon_{d}$, $$\epsilon_{d} = Db(1-b)=D(1-n^{f})n^{f}$$ and the local spin moment $$<S^{z}> =(-\epsilon_{F}+2bD-3b^{2}D)/J_{H}$$ at zero temperature. An interesting result is that there is an optimal doping for the local spin moment, or the magnetization. From Eq.(20), one finds that the local spin moment has a maximum at $b=1/3$. Since the magnetization $M$ is proportional to $<S^{z}>$ and, as mentioned above, $b$ corresponds to the hole or doping concentration, one expects the magnetization to exhibit a maximum around a doping concentration of 1/3, which agrees with experimental observations in the La$_{1-x}$Ca$_{x}$MnO$_{3}$ system \[2,3\]. Furthermore, one can show by a simple analysis that the Curie temperature also reaches its maximum around 1/3 doping, in agreement with experiments \[1,4\] and different from the theoretical results of Refs. 10,11.
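The location of the maximum quoted above follows directly from Eq.(20); a minimal numerical check (with illustrative values $D=J_H=1$, $\epsilon_F=0$, since the position of the maximum does not depend on them):

```python
# <S^z> from Eq. (20): (-eF + 2 b D - 3 b^2 D) / J_H
def local_moment(b, D=1.0, JH=1.0, eF=0.0):
    return (-eF + 2.0 * b * D - 3.0 * b**2 * D) / JH

# Scan the doping range 0 < b < 1 on a fine grid
grid = [i / 100000 for i in range(1, 100000)]
b_star = max(grid, key=local_moment)
print(b_star)   # 0.33333, i.e. b = 1/3
```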
In the preceding discussion, the electron localization character resulting from the disorder introduced by doping has not been taken into account; if it were, we would expect that the optimal doping concentration for the magnetization and the Curie temperature may not lie precisely at x=1/3, but could be slightly larger than 1/3. Therefore a complete treatment of the electron correlations is important for understanding the ground-state properties of CMR materials.
To summarize, an external magnetic field has an effect on the canted angle of the manganese spins similar to that of the doping concentration, and phase separation may take place in doped manganites. The mean-field magnetization and the Curie temperature reach maxima near 1/3 doping.\
[**Acknowledgement**]{} L.-J. Zou thanks the International Centre for Theoretical Physics (ICTP), Trieste, Italy, for its invitation. This work is partly supported by grants from the NNSF of China and the Chinese Academy of Sciences, and by a Direct Grant for Research from the Research Grants Council (RGC) of the Hong Kong Government.
REFERENCES
1. P. Schiffer, A. P. Ramirez, W. Bao and S-W. Cheong, [*Phys. Rev. Lett.*]{} [**75**]{}, 3336 (1995).
2. G. H. Jonker and J. H. Van Santen, [*Physica*]{} [**16**]{}, 337 (1950); J. H. Van Santen and G. H. Jonker, [*ibid.*]{} [**16**]{}, 599 (1950).
3. E. O. Wollan and W. C. Koehler, [*Phys. Rev.*]{} [**100**]{}, 545 (1955).
4. M. C. Martin, G. Shirane, Y. Erdoh, K. Hirota, Y. Moritomo and Y. Tokura, [*Phys. Rev.*]{} [**B53**]{}, 14285 (1996).
5. C. Zener, [*Phys. Rev.*]{}, [**81**]{}, 440 (1951); [**82**]{}, 403 (1951).
6. P. W. Anderson and H. Hasegawa, [*Phys. Rev.*]{}, [**100**]{}, 675 (1955).
7. P. G. De Gennes, [*Phys. Rev.*]{}, [**100**]{}, 564 (1955); [**118**]{}, 141 (1960).
8. K. Kubo and N. Ohata, [*J. Phys. Soc. Jpn.*]{} [**33**]{}, 21 (1972).
9. H. Yoshizawa, H. Kawano, Y. Tomioka and Y. Tokura, [*Phys. Rev.*]{} [**B52**]{}, 13145 (1995). H. Kawano, R. Kajimoto, M. Kubota and H. Yoshizawa, [*Phys. Rev.*]{} [**B53**]{}, 2202 (1996).
10. C. M. Varma, [*Phys. Rev.*]{} [**B54**]{} 7328 (1996).
11. D. Y. Xing and Sheng Li, [*unpublished*]{}
12. J. M. Coey, M. Viret, J. Ranno and K. Ounadjela, [*Phys. Rev. Lett.*]{}, [**75**]{}, 3910 (1995).
13. Y. A. Dimashko and A. L. Alistratov, [*Phys. Rev.*]{} [**50**]{}(2), 1162 (1996).
14. J. Inoue and S. Maekawa, [*Phys. Rev. Lett.*]{} [**74**]{}, 3407 (1995).
15. K. Hirota, N. Kaneko, A. Nishizawa and Y. Endoh, [*J. Phys. Soc. Jpn.*]{} [**65**]{}, 3736 (1996).
16. S. Satpathy, Z. S. Povovic and F. R. Vukajlovic, [*J. Appl. Phys.*]{} [**79**]{} 4555 (1996).
17. P. Coleman, [*Phys. Rev.*]{} [**B29**]{}, 3055 (1984).
---
abstract: 'We estimate the composite fermion effective mass for a general two-particle potential $r^{-\alpha}$ using exact diagonalization for polarized electrons in the lowest Landau level on a sphere. Our data for the ground state energy at filling fraction $\nu=1/2$ as well as estimates of the excitation gap at $\nu=1/3,\,2/5$ and $3/7$ show that $m_{\text{eff}} \sim \alpha^{-1}$.'
address:
- 'Institut für Theoretische Physik, Universität Leipzig, Augustusplatz 10, D-04109 Leipzig,Germany'
- 'Department of Mathematical Sciences, South Road, Durham DH1 3LE, England'
author:
- 'Uwe Girlich[^1]'
- 'Meik Hellmund[^2][^3]'
title: Interaction dependence of composite fermion effective masses
---
The dynamics of interacting planar electrons in the lowest Landau level of a strong magnetic field show many interesting features at filling fractions $\nu<1$ experimentally observed as the fractional quantum Hall effect (FQHE). It emerged that the picture of composite fermions (CF)[@Jai89a; @LF91] moving in a reduced magnetic field is central to the understanding of the FQHE. The field theoretic formulation of this idea has received much attention (see e.g. [@SH95; @Khv95]), in particular after Halperin, Lee and Read[@HLR93] described the polarized $\nu=1/2$ state as a fermi liquid state of composite fermions. Since the CF picture explains gaps due to the electron–electron interaction as Landau level gaps of composite fermions, their effective mass has to be understood as a result of this interaction.
Numerical diagonalization of the interaction Hamiltonian for electrons on a sphere has a long history[@Hald83] as a testing ground for the understanding of the FQHE. Rezayi and Read[@RR94] have shown that even for the small number of electrons $N \approx 10$ accessible to exact diagonalization the pattern of the angular momenta of $\nu=1/2$ ground states follows Hund’s rule applied to composite fermions in a zero magnetic field. Later on, Morf and d’Ambrumenil[@MdA95] demonstrated that the ground state energy itself allows an interpretation in the CF language and they estimated the CF effective mass. This effective mass is also the relevant parameter[@HLR93] for the excitation gap of $\nu=p/(2p+1)$ FQHE states[@DM89]. The basic features of the FQHE as seen in finite-size studies are to a high degree independent of the exact form of the two-particle interaction potential $V(r)$. Most studies therefore used a simple $1/r$ potential. With the advent of a qualitative theory of the FQHE it now seems appropriate to study the dependence of the numerically obtained effective masses on the chosen potential.
The single-particle wave functions on a sphere of radius $R$ pierced by $\Phi=2S$ flux quanta are monopole harmonics of angular momenta $j=S, S+1,\dots$ with energy $$\label{EE}
E=\frac{\hbar^2}{2m R^2} [j(j+1)-S^2].$$ We will use the ion disc radius $a=(\pi\,\text{density})^{-1/2}$ as the basic length unit: $R=a\sqrt{S}$. It is related to the magnetic length by $a=2l_0\sqrt{S/N}$.
The quasipotential coefficients[@Hald83] for an interaction potential $V(r)= \frac{e^2}{\varepsilon a} \left(\frac{a}{r}\right)^\alpha$ with chord distance $r$ are $$\begin{aligned}
V_J &=&(-1)^{2S+J}\frac{(2S+1)^2}{N^{\alpha/2}}
\frac{ \Gamma(1-\frac{\alpha}{2})}{\Gamma(\frac{\alpha}{2})}\nonumber\\
&&\times\sum_k (2k+1)\frac{ \Gamma(\frac{\alpha}{2}+k)}
{\Gamma(2+k-\frac{\alpha}{2})}
\left\{ {S\atop S}{S\atop S}{k\atop J}
\right\}_{\text{6j}}
\left({S\atop -S}{S\atop S}{k\atop 0} \right)^2_{\text{3j}}.\end{aligned}$$ Fig. \[FigQ\] gives an impression of the $\alpha$ dependence of the $V_J$. The lowest eigenvalues and eigenvectors of the Hamiltonian matrix have been computed by a conjugate gradient algorithm[@bunk].
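The diagonalization step can be reproduced in outline with any iterative sparse eigensolver; the sketch below uses a Lanczos routine (SciPy's `eigsh`) in place of the conjugate gradient code of [@bunk], with a random sparse symmetric matrix standing in for the actual Hamiltonian:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh

# Random sparse symmetric matrix as a stand-in for the many-body
# Hamiltonian in a fixed angular-momentum sector
n = 400
A = sparse_random(n, n, density=0.02, random_state=0)
H = (A + A.T) * 0.5

# Lowest eigenpair via Lanczos ('SA' = smallest algebraic eigenvalue)
vals, vecs = eigsh(H, k=1, which='SA', ncv=40)
E0, psi0 = vals[0], vecs[:, 0]

# Cross-check against dense diagonalization (feasible only for small n)
E0_dense = np.linalg.eigvalsh(H.toarray()).min()
print(E0, E0_dense)
```

For the Hilbert-space dimensions reached at $N\approx 13$ electrons, only such iterative methods (Lanczos or conjugate gradient) remain practical, since dense diagonalization scales as the cube of the matrix size.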
We have calculated the ground state energy and angular momenta for $\nu=1/2$ systems with up to $N=13$ electrons. The composite fermions feel no magnetic field at this filling fraction and form a “CF-atom” with shells $j=0,1,2,\dots$ of degeneracy $2j+1$. The ground state angular momentum follows Hund’s rule applied to the CF-atom (e.g. $J=0$ for $N=n^2$ indicating $n$ closed shells) for $0.2\leq\alpha\leq1.99$. At $\alpha=0.1$ we find small deviations in two cases ($L=1$ instead of 3 for $N=6$ and $L=4$ instead of 6 for $N=12$) but these ground states are almost degenerate with states with the “right” angular momentum.
Morf and d’Ambrumenil[@MdA95] found that the ground state energy per particle of the $\nu=1/2$ system with Coulomb interaction can be interpreted, up to a correction linear in $1/N$, as kinetic energy $T(N,m^*)$ of the CF-atom with effective CF mass $m^*$. This energy is calculated by summing up the contributions of the individual particles given by eq. (\[EE\]) with $S=0$ and $m=m^*$. It can be written as sum of a part linear in $1/N$ (in units of $a$) and a part which vanishes for closed shells. This deviation of the energy of partially filled shells from linear behaviour is proportional to the effective mass parameter $C$ introduced in [@HLR93] $$\label{C}
\frac{\hbar^2}{m^* a} = \frac{C} {2}\frac{e^2}{\varepsilon}.$$ We find the same pattern of $N$-dependence of the ground state energies in the range $0.1 \leq\alpha\leq1.99$. Fig. \[Figfit\](a) shows that for a long-range potential $\alpha=0.1$ the ground state energy comes very close to the prediction of free composite fermions.
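The shell-filling energy $T(N,m^*)$ entering this fit can be written down directly; a minimal sketch (in units $\hbar^2/(2m^* R^2)$, using eq. (\[EE\]) with $S=0$, i.e. single-particle energies $j(j+1)$ and shell degeneracies $2j+1$):

```python
def cf_kinetic_energy(N):
    """Total kinetic energy of N composite fermions filling the shells
    j = 0, 1, 2, ... (degeneracy 2j+1, energy j(j+1)) from the bottom,
    in units of hbar^2 / (2 m* R^2)."""
    energy, j, left = 0, 0, N
    while left > 0:
        occ = min(left, 2 * j + 1)   # particles placed in shell j
        energy += occ * j * (j + 1)
        left -= occ
        j += 1
    return energy

# Closed shells occur at N = n^2 (1, 4, 9, ...):
print([cf_kinetic_energy(N) for N in (1, 4, 9)])   # [0, 6, 36]
```

The deviation of this quantity from its smooth part at partially filled shells is what makes the fit of $m^*$ (and hence $C$) possible.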
On the other hand, composite fermions are not free. The interaction energy of particles in shells displays a similar pattern with relative minima for closed shells. In order to test the influence of CF interactions on this method of obtaining $m^*$ we assume that the composite fermions interact via the same potential $r^{-\alpha}$ as the electrons. The energy of closed shells as well as the inter–shell energy can be calculated analytically using the shell model formalism of nuclear theory[@Talmi]. The ground state energy of the outer partially filled shell is calculated numerically by exact diagonalization. The sum of these contributions is $V(N,\alpha)$, the ground state energy of $N$ interacting particles of infinite mass on a sphere without magnetic field. We then checked that the electron ground state energies can be fitted by $a_0+a_1/N+V(N,\alpha)+T(N,\bar m^*)$. This provides another value for the effective mass $\bar m^*$, calculated for [*interacting*]{} composite fermions. For $\alpha<1$ (see Fig. \[Figfit\](a)) this appears to be less convincing than the free CF ansatz, suggesting that in this case the CF interaction is even weaker. For $\alpha>1$, however, Fig. \[Figfit\](b) shows that the data can be interpreted by assuming a larger effective mass of the interacting composite fermions.
The resulting values for $C$ and $\bar C$, as shown in Fig. \[FigGapfit\], are (for $\alpha$ not too big) linear in $\alpha$: $\bar C= 0.164\alpha,\quad C = 0.195\alpha$.
The Coulomb system at filling fractions $\nu=p/(2p+1)$ has been studied extensively in the past, cf. e.g. [@KWJ96; @DM89]. Apart from the ground state at $L=0$ one finds a band of low-lying excitations with $L=1,2,\dots,\Phi^*+1$, the exciton or magnetoroton. ($\Phi^*$ is the reduced magnetic field of the CF picture.) In the CF picture the ground state corresponds to $p$ filled Landau levels and the excitation gap in the limit $L\to\infty$ measures the distance to the ($p+1$)th level. Fig. \[FigExcit\] shows the $\nu=1/3$ exciton mode, which becomes flatter for long-range potentials but retains a visible magnetoroton minimum at $Ll'_0/R\approx1.4$.
At $\nu=3/7$ the case $N=12$ is the only one accessible to numerical diagonalization. $(N=9, \Phi=16)$, for example, is also a $\nu=1/2$ state (an effect called “aliasing” in [@DM89]) and, since the reduced magnetic field is zero, this seems to be the preferable interpretation. This makes a systematic study of finite size effects impossible. Therefore we take the gap $\Delta E$ at $L=\Phi^*+1$ for the highest available electron number ($N=10$ for $\nu=1/3, 2/5;$ $N=12$ for $\nu=3/7$) as an estimate for the CF gap between the $p$th and ($p+1$)th Landau level. The gap energies for different $\nu$ fall onto a single curve when divided by $\Delta\nu=1/2-\nu$ (Fig. \[FigEGap\]). The CF picture predicts the large-$p$ behaviour $$\Delta E=\frac{C}{p} \frac{e^2}{\varepsilon a}.$$ With $\Delta\nu \to \frac{1}{4p}$ as $p\to \infty$ our fit gives $C = 0.35 \alpha$. We therefore find an enhancement of $m^*$ for all potentials by roughly a factor of $2$ as $\nu\to1/2$. d’Ambrumenil and Morf report[@DM89] that for the Coulomb interaction a correction of the gap energy by the potential energy of two charges $q=1/(2p+1)$ at distance $2R$ allows a consistent extrapolation in $1/N$. Unfortunately, this no longer works for systems with $\alpha \neq1$. Nevertheless, our data are consistent with the value $C\approx 0.31$ for $\alpha=1$ extracted from their data in [@HLR93]. However, the dependence of $\Delta E$ on $\Delta\nu$ appears to be linear for all values of $\alpha$, contrary to the analytical result $\Delta E\sim |\Delta\nu|^{\frac{2+\alpha}{3}}$ of [@Khv95].
MH is supported by Deutscher Akademischer Austauschdienst.
[10]{}
J. K. Jain, Phys. Rev. Lett. [**63**]{}, 199 (1989).
A. Lopez and E. Fradkin, Phys. Rev. B [**44**]{}, 5246 (1991).
A. Stern and B. I. Halperin, Phys. Rev. B [**52**]{}, 5890 (1995).
D. V. Khveshchenko, Phys. Rev. B [**49**]{}, 10514 (1994).
B. I. Halperin, P. A. Lee, and N. Read, Phys. Rev. B [**47**]{}, 7312 (1993).
F. D. M. Haldane, Phys. Rev. Lett. [**51**]{}, 605 (1983).
E. Rezayi and N. Read, Phys. Rev. Lett. [**72**]{}, 900 (1994).
R. Morf and N. d’Ambrumenil, Phys. Rev. Lett. [**74**]{}, 5116 (1995).
N. d’Ambrumenil and R. Morf, Phys. Rev. B [**40**]{}, 6108 (1989).
B. Bunk, private communication; T. Kalkreuter and H. Simma, Computer Physics Commun. [**93**]{}, 33 (1996).
I. Talmi, [*Simple Models of Complex Nuclei*]{} (Harwood Academic Publishers, Chur, 1993).
R. K. Kamilla, X. G. Wu, and J. K. Jain, Phys. Rev. B [**54**]{}, 4873 (1996).
[^1]: e-mail: [Uwe.Girlich@itp.uni-leipzig.de]{}
[^2]: e-mail: [Meik.Hellmund@durham.ac.uk]{}
[^3]: permanent address: Institut für Theoretische Physik, Augustusplatz, Leipzig
---
abstract: |
Let $G$ be a bounded simply-connected domain in the complex plane $\mathbb{C}$, whose boundary $\Gamma:=\partial G$ is a Jordan curve, and let $\{p_n\}_{n=0}^{\infty}$ denote the sequence of Bergman polynomials of $G$. This is defined as the unique sequence $$p_n(z) = \lambda_n z^n+\cdots, \quad \lambda_n>0,\quad n=0,1,2,\ldots,$$ of polynomials that are orthonormal with respect to the inner product $$\langle f,g\rangle := \int_G f(z) \overline{g(z)} dA(z),$$ where $dA$ stands for the area measure.
We establish the strong asymptotics for $p_n$ and $\lambda_n$, $n\in\mathbb{N}$, under the assumption that $\Gamma$ is piecewise analytic. This complements an investigation started in 1923 by T. Carleman, who derived the strong asymptotics for $\Gamma$ analytic, and continued by P.K. Suetin in the 1960s, who established them for smooth $\Gamma$. In order to do so, we use a new approach based on tools from quasiconformal mapping theory. The impact of the resulting theory is demonstrated in a number of applications, ranging from coefficient estimates in the well-known class $\Sigma$ of univalent functions and a connection with operator theory, to the computation of capacities and a reconstruction algorithm from moments.
address: 'Department of Mathematics and Statistics, University of Cyprus, P.O. Box 20537, 1678 Nicosia, Cyprus'
author:
- Nikos Stylianopoulos
title: Strong asymptotics for Bergman polynomials over domains with corners and applications
---
Introduction and main results {#section:intro}
=============================
Let $G$ be a bounded simply-connected domain in the complex plane $\mathbb{C}$, whose boundary $\Gamma:=\partial G$ is a Jordan curve and let $\{p_n\}_{n=0}^{\infty}$ denote the sequence of Bergman polynomials of $G$. This is defined as the unique sequence of polynomials $$\label{eq:pndef}
p_n(z) = \lambda_n z^n+ \cdots, \quad \lambda_n>0,\quad n=0,1,2,\ldots,$$ that are orthonormal with respect to the inner product $$\langle f,g\rangle_G := \int_G f(z) \overline{g(z)} dA(z),$$ where $dA$ stands for the area measure. We denote by $L_a^2(G)$ the Hilbert space of functions $f$ analytic in $G$, for which $$\|f\|_{L^2(G)}:=\langle f,f\rangle_G^{1/2}<\infty,$$ and recall that the sequence of polynomials $\{p_n\}_{n=0}^\infty$ forms a complete orthonormal system for $L_a^2(G)$.
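For orientation, in the simplest case $G=\mathbb{D}$ the Bergman polynomials are known in closed form, $p_n(z)=\sqrt{(n+1)/\pi}\,z^n$. The sketch below recovers the leading coefficients numerically by orthonormalizing the monomials with respect to a quadrature discretization of $\langle\cdot,\cdot\rangle_G$, a weighted QR factorization playing the role of Gram–Schmidt (the discretization choices here are ours, not those of Section \[subsec:ArnoldiGS\]):

```python
import numpy as np

def bergman_leading_coeffs_disk(n_max, nr=60, nt=256):
    """Leading coefficients lambda_n of the Bergman polynomials of the
    unit disk, via orthonormalization of 1, z, ..., z^n_max with respect
    to a polar quadrature rule for the area inner product."""
    # Gauss-Legendre in r on [0,1], uniform (trapezoid) rule in theta
    r, wr = np.polynomial.legendre.leggauss(nr)
    r, wr = 0.5 * (r + 1.0), 0.5 * wr
    theta = 2.0 * np.pi * np.arange(nt) / nt
    z = (r[None, :] * np.exp(1j * theta[:, None])).ravel()
    # Area element r dr dtheta as quadrature weights
    w = np.broadcast_to(wr * r * (2.0 * np.pi / nt), (nt, nr)).ravel()
    # Monomial Vandermonde matrix; weighting by sqrt(w) makes the QR
    # factors orthonormal in the discrete inner product sum w f conj(g)
    V = np.vander(z, n_max + 1, increasing=True)
    Q, R = np.linalg.qr(np.sqrt(w)[:, None] * V)
    # p_n = sum_m (R^{-1})_{mn} z^m, so lambda_n = 1/|R_{nn}|
    return 1.0 / np.abs(np.diag(R))

lam = bergman_leading_coeffs_disk(8)
exact = np.sqrt((np.arange(9) + 1) / np.pi)
print(np.max(np.abs(lam - exact)))   # close to machine precision
```

For a general domain such as the half-disk considered later, only the quadrature nodes and weights change; the orthonormalization step is identical.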
Let $\Omega:=\overline{\mathbb{C}}\setminus\overline{G}$ denote the complement of $\overline{G}$ and let $\Phi$ denote the conformal map $\Omega\to\Delta:=\{w:|w|>1\}$, normalized so that near infinity $$\label{eq:Phi}
\Phi(z)=\gamma z+\gamma_0+\frac{\gamma_1}{z}+\frac{\gamma_2}{z^2}+\cdots,\quad \gamma>0.$$ Finally, let $\Psi:=\Phi^{-1}:\Delta\to\Omega$ denote the inverse conformal map. Then, $$\label{eq:Psi}
\Psi(w)=bw+b_0+\frac{b_1}{w}+\frac{b_2}{w^2}+\cdots, \quad |w|> 1,$$ where $b=1/\gamma$ gives the (*logarithmic*) *capacity* $\textup{cap}(\Gamma)$ of $\Gamma$.
As in the bounded case, we define the inner product $$\langle f,g\rangle_\Omega := \int_\Omega f(z) \overline{g(z)} dA(z),$$ on the unbounded domain $\Omega$ and denote by $L_a^2(\Omega)$ the Hilbert space of functions $f$ analytic in $\Omega$, for which $$\|f\|_{L^2(\Omega)}:=\langle f,f\rangle_\Omega^{1/2}<\infty.$$ We note that $L_a^2(G)$ and $L_a^2(\Omega)$ are known as the *Bergman spaces* of $G$ and $\Omega$, respectively. It is easy to see that, in order for $f\in L_a^2(\Omega)$ to hold, it is necessary that $f$ has a Laurent series expansion around infinity starting with the $1/z^2$ term.
The main purpose of the paper is to establish the strong asymptotics of the leading coefficients $\{\lambda_n\}_{n\in\mathbb{N}}$ and the Bergman polynomials $\{p_n\}_{n\in\mathbb{N}}$, in $\Omega$, for non-smooth boundary $\Gamma$. We do this under the assumption that $\Gamma$ is *piecewise analytic without cusps*. This means that $\Gamma$ consists of a finite set of analytic arcs that meet at exterior angles $\omega\pi$, with $0<\omega<2$. Thus, we allow $\Gamma$ to have corners. In this sense, our results complement an investigation started by T. Carleman [@Ca23] in 1923, who derived the strong asymptotics under the assumption that $\Gamma$ is analytic, and continued by P.K. Suetin [@Su74] in the 1960s, who verified them for smooth $\Gamma$.
The techniques employed in both [@Ca23] and [@Su74] are tied to the specific properties that characterize the mapping functions $\Phi$ and $\Psi$ in cases when $\Gamma$ is analytic, or smooth, and it turns out that they are not suitable for treating domains with corners. To overcome this, we have developed an approach which we believe to be novel. This approach involves, in particular, new techniques from the theory of quasiconformal mappings and a new sharp estimate concerning the growth of a polynomial in terms of its $L^2$-norm.
Our main results are the following three theorems.
\[thm:finelambdan\] Assume that $\Gamma$ is piecewise analytic without cusps. Then, for any $n\in\mathbb{N}$, it holds that $$\label{eqinthm:finelambdan}
{\frac{n+1}{\pi}\frac{\gamma^{2(n+1)}}{\lambda_n^2}=1-\alpha_n,}$$ where $$\label{eqinthm:finelambdanii}
0\le\alpha_n\le c_1(\Gamma)\,\frac{1}{n}.$$
\[thm:finepn\] Under the assumptions of Theorem \[thm:finelambdan\], for any $n\in\mathbb{N}$, it holds that $$\label{eqinthm:finepn}
{p_n(z)=\sqrt{\frac{n+1}{\pi}}\,\Phi^n(z)\Phi^\prime(z)
\left\{1+A_n(z)\right\}},\quad z\in\Omega,$$ where $$\begin{aligned}
\label{eqinthm:finepnii1}
|A_n(z)|\le \frac{c_2(\Gamma)}{\operatorname{dist}(z,\Gamma)\,|\Phi^\prime(z)|}\,\frac{1}{\sqrt{n}}
+c_3(\Gamma)\,\frac{1}{n} .\end{aligned}$$
Above and in the sequel we use $c(\Gamma)$, $c_1(\Gamma)$, $c_2(\Gamma)$, etc., to denote non-negative constants that depend only on $\Gamma$. We also use $\operatorname{dist}(z,B)$ to denote the (Euclidean) distance of $z$ from a set $B$ and call the quantities $\alpha_n$ and $A_n(z)$, defined by (\[eqinthm:finelambdan\]) and (\[eqinthm:finepn\]), the *strong asymptotic errors* associated with $\lambda_n$ and $p_n(z)$, respectively.
From (\[eqinthm:finepnii1\]) and the well-known distortion property of conformal mappings $$\label{eq:distortion}
\operatorname{dist}(\Phi(z),\partial\mathbb{D})\le 4\,\operatorname{dist}(z,\Gamma)\,|\Phi^\prime(z)|,\quad z\in\Omega;$$ see, e.g., [@ABbook p. 23], we arrive at another estimate for $A_n(z)$, which does not involve the derivative of $\Phi$, i.e., $$\begin{aligned}
\label{eqinthm:finepnii2}
|A_n(z)|\le\frac{c_4(\Gamma)}{|\Phi(z)|-1}\,\frac{1}{\sqrt{n}}
+c_3(\Gamma)\,\frac{1}{n}, \quad z\in\Omega.\end{aligned}$$
Our next result provides an interesting link between the Bergman polynomials and the problem of coefficient estimates in Univalent Functions Theory. This result is established under the assumption that $\Gamma$ belongs to a broader class of Jordan curves than the one appearing in Theorem \[thm:finelambdan\], namely the class of quasiconformal curves. We recall that a Jordan curve $\Gamma$ is *quasiconformal* if there exists a constant $M$ such that, $$\textup{diam}\Gamma(z,\zeta)\le M |z-\zeta|,\mbox{ for all }\, z,\zeta\in\Gamma,$$ where $\Gamma(z,\zeta)$ is the arc (of smaller diameter) of $\Gamma$ between $z$ and $\zeta$. In connection with the assumptions of Theorem \[thm:finelambdan\], we also recall that a piecewise analytic Jordan curve is quasiconformal if and only if it has no cusps. The assumption that $\Gamma$ is quasiconformal ensures the existence of an associated $K$-quasiconformal reflection $y(z)$, for some $K\ge 1$, which is characterized by the properties (A1)–(A3) stated in Remark \[lem:prop-qc\] below. The existence of the quasiconformal reflection was established by Ahlfors in [@Ah63]. All our estimates that are derived under the assumption that $\Gamma$ is quasiconformal are given in terms of the constant $$\label{eq:refcoef}
k:=(K-1)/(K+1),$$ which in the sequel we refer to as the *reflection factor of $\Gamma$ (associated with $y$)*. We note that $0\le k<1$, with $k=0$ if $\Gamma$ is a circle.
In the next theorem we require, in addition, that $\Gamma$ is rectifiable. Note that there are examples of non-rectifiable quasiconformal curves; see, e.g., [@LV p. 104]. However, any quasiconformal curve has zero area.
Our result shows that the strong asymptotic error $\alpha_n$ cannot decay faster than $(n+1)|b_{n+1}|^2$, where $b_{n+1}$ is the coefficient of $1/w^{n+1}$ in the Laurent series expansion (\[eq:Psi\]) of $\Psi(w)$.
\[thm:alphange\] Assume that $\Gamma$ is quasiconformal and rectifiable. Then, for any $n\in\mathbb{N}$, it holds that $$\label{eq:alphange}
\alpha_{n}\ge\,\frac{\pi\,(1-k^2)}{A(G)}\,(n+1)\,|b_{n+1}|^2,$$ where $A(G)$ denotes the area of $G$ and $k$ is the reflection factor of $\Gamma$.
As noted above, Theorem \[thm:alphange\] provides a link between the problems of estimating the error in the strong asymptotics for $\lambda_n$ and of estimating coefficients in the well-known class $\Sigma$, consisting of functions analytic and univalent in $\Delta\setminus\{\infty\}$ that have a Laurent series expansion of the form (\[eq:Psi\]) with $b=1$. The latter problem is one of the best-studied in Geometric Function Theory; see, e.g., [@Po75] and [@Du83].
In another application, the result of Theorem \[thm:alphange\] can be used to discuss the sharpness of the decay of order $O(1/n)$, predicted for the sequence $\{\alpha_n\}$ by Theorem \[thm:finelambdan\]. Ideally, in order to show by means of Theorem \[thm:alphange\] that the estimate (\[eqinthm:finelambdanii\]) is sharp, it would suffice to find a domain $G$, bounded by a piecewise analytic Jordan curve $\Gamma$ without cusps, for which $|b_n|\ge c_4/n$ would hold. In view of the estimate $|b_n|\le c_5/n$, for $n\in\mathbb{N}$, which was obtained by Gaier in [@Ga99] for $\Gamma$ piecewise Dini-smooth, this already seems difficult, because if $\Gamma$ is piecewise analytic (even with cusps) then it is also piecewise Dini-smooth. Moreover, in view of the estimate $|b_n|\le c_4/n^{1+\omega}$, $n\in\mathbb{N}$, with $0<\omega<2$, which is established in Section \[subsec:coef-esti\] below, the use of Theorem \[thm:alphange\] for proving this kind of sharpness is of no help.
Nevertheless, Theorem \[thm:alphange\] can be employed to show that the order $O(1/n)$ in (\[eqinthm:finelambdanii\]) is best possible in a different sense:
\[rem:alphan-low-esti\] For any $\epsilon>0$, there exists a domain $G$, which is bounded by a piecewise analytic Jordan curve $\Gamma$, such that for the associated strong asymptotic error $\alpha_n$ there holds $$\label{eq:alphan-low-esti}
\alpha_n \ge c_5(\Gamma)\frac{1}{n^{1+\epsilon}},$$ for some positive constant $c_5(\Gamma)$ and infinitely many $n\in\mathbb{N}$.
This will be shown in a forthcoming article with the help of a domain $G$ whose boundary consists of two symmetric, with respect to the imaginary axis, circular arcs that meet at $i$ and $-i$, forming exterior angles $\pi/N$, with $N\in\mathbb{N}\setminus\{1\}$. More precisely, in this case it can be shown that $$|b_{2n+1}|\asymp \frac{1}{n^{1+1/N}},\quad n\in\mathbb{N},$$ which, in view of Theorem \[thm:alphange\], implies (\[eq:alphan-low-esti\]).
In addition to the above, we present two examples and certain numerical evidence supporting the hypothesis that the order $O(1/n)$ in (\[eqinthm:finelambdanii\]) is, indeed, sharp.
The first example is based on a Jordan curve constructed by Clunie in [@Cl59], for which the sequence $\{n\,b_n\}_{n\in\mathbb{N}}$ is unbounded. More precisely, for $\epsilon=1/50$, there exists some subsequence $\mathcal{N}$ of $\mathbb{N}$, such that $$\label{eq:clunie}
n|b_n|> n^\epsilon,\quad n\in\mathcal{N}.$$ It was shown by Gaier in [@Ga99 § 4.2] that Clunie’s curve is, in fact, quasiconformal.
The second example is generated by the function $$\label{eq:Psi-hypom}
\Psi(w)=w+\frac{1}{(m-1)w^{m-1}},\quad |w|\ge 1.$$ For any $m\ge 3$, this function maps $\Delta$ conformally onto the exterior of a symmetric $m$-cusped hypocycloid $H_m$, which is a piecewise analytic Jordan curve with all exterior angles equal to $2\pi$, and thus not a quasiconformal curve. Nevertheless, for each $n\ge 2$, $H_{n+1}$ provides an example of a cusped Jordan curve, where $b_n=1/n$.
Regarding numerical evidence, we consider the case where $G$ is the unit half-disk and display in Table \[tab:an-decay\] a range of computed values of $\alpha_n$, for $n=51,\ldots,60$. These were obtained from the exact value $\gamma=1/\textup{cap}(\Gamma)=3\sqrt{3}/4$ and the computed values of $\lambda_n$, after constructing, in finite (high) precision and in the way indicated in Section \[subsec:ArnoldiGS\] below, the Bergman polynomials up to degree $60$. Thus, we expect all the figures quoted in the table to be correct. The reported values of $\alpha_n$ indicate that the strong asymptotic error for the leading coefficient decays monotonically to zero. In view of the estimate (\[eqinthm:finelambdanii\]), we test the hypothesis ${\alpha_n\approx{1}/{n^s}}$. The reported values of $s$ in the table indicate clearly that $s=1$.
Exactly the same behaviour was observed in a number of different non-smooth cases, involving various angles and shapes. Based on such evidence, we have conjectured the strong asymptotics for non-smooth domains in [@NS06 pp. 520–521].
  ------ ------------------ ----------
   $n$     $\alpha_n$          $s$
  ------ ------------------ ----------
   51     0.003263458678     -
   52     0.003200769764     0.998887
   53     0.003140444435     0.998899
   54     0.003082351464     0.998911
   55     0.003026369160     0.998923
   56     0.002972384524     0.998934
   57     0.002920292482     0.998946
   58     0.002869952027     0.998957
   59     0.002821401485     0.998968
   60     0.002774426207     0.998979
  ------ ------------------ ----------
: The rate of decay of $\alpha_n$ for the unit half-disk.[]{data-label="tab:an-decay"}
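The exponent $s$ in the table is consistent with the two-point estimate obtained from the hypothesis $\alpha_n\approx c/n^s$, namely $s=\log(\alpha_{n-1}/\alpha_n)/\log(n/(n-1))$; a quick check against the tabulated $\alpha_n$ (our reading of the table; the exact definition of $s$ used for the table may differ in minor detail):

```python
import math

# alpha_n for n = 51, ..., 60, as quoted in Table 1
alpha = [0.003263458678, 0.003200769764, 0.003140444435, 0.003082351464,
         0.003026369160, 0.002972384524, 0.002920292482, 0.002869952027,
         0.002821401485, 0.002774426207]
n = list(range(51, 61))

# Two-point exponent: alpha_n ~ c / n^s  =>  s = ln(a_{n-1}/a_n) / ln(n/(n-1))
s = [math.log(alpha[i - 1] / alpha[i]) / math.log(n[i] / n[i - 1])
     for i in range(1, len(alpha))]
print([round(v, 4) for v in s])   # all close to 0.999
```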
The first ever result regarding strong asymptotics for $\{\lambda_n\}_{n\in\mathbb{N}}$ and $\{p_n\}_{n\in\mathbb{N}}$ was derived by Carleman in [@Ca23], for domains bounded by analytic Jordan curves. In this case the conformal map $\Phi$ has an analytic and one-to-one continuation across $\Gamma$ inside $G$.
\[thm:carleman\] Let $L_R$ denote the level curve $\{z:|\Phi(z)|=R\}$ and assume that $\rho<1$ is the smallest number for which $\Phi$ is conformal in the exterior of $L_\rho$. Then, for any $n\in\mathbb{N}$, $$\label{eq:carlman1}
0\le \alpha_n\le c_6(\Gamma)\, \rho^{2n}$$ and $$\label{eq:carlman2}
|A_n(z)|\le c_7(\Gamma)\sqrt{n}\,\rho^n,\quad z\in\overline{\Omega}.$$
The next major step in removing the analyticity assumption on $\Gamma$ was taken by P.K. Suetin in the 1960’s. For his results, Suetin requires that the boundary curve $\Gamma$ belongs to the smoothness class $C(p,\alpha)$. This means that $\Gamma$ is defined by $z=g(s)$, where $s$ denotes arclength, with $g^{(p)}\in \textup{Lip}\,\alpha$, for some $p\in \mathbb{N}$ and $0<\alpha<1$. In this case both $\Phi$ and $\Psi$ are $p$ times continuously differentiable in $\overline{\Omega}\setminus\{\infty\}$ and $\overline{\Delta}\setminus\{\infty\}$, respectively, with $\Phi^{(p)}$ and $\Psi^{(p)}$ in $\textup{Lip}\,\alpha$. A typical result goes as follows:
\[thm:suetin\] Assume that $\Gamma\in C(p+1,\alpha)$, with $p+\alpha>1/2$. Then, for any $n\in\mathbb{N}$, $$\label{eq:suetin1}
0\le \alpha_n\le c_8(\Gamma)\,\frac{1}{n^{2(p+\alpha)}}$$ and $$\label{eq:suetin2}
|A_n(z)|\le c_9(\Gamma)\,\frac{\log n}{n^{p+\alpha}},\quad z\in\overline{\Omega}.$$
The results of Carleman and Suetin given above, in conjunction with Theorem \[thm:alphange\], immediately yield estimates for the decay of the coefficients $b_n$, depending on the degree of analyticity, or smoothness, of $\Gamma$. More precisely:
Under the assumptions of the theorem of Carleman it holds, for any $n\in\mathbb{N}$, that $$|b_n|\le c_{10}(\Gamma)\,\frac{\rho^n}{\sqrt{n}}.$$ Under the assumptions of the theorem of Suetin, it holds, for any $n\in\mathbb{N}$, that $$|b_n|\le c_{11}(\Gamma)\,\frac{1}{n^{p+\alpha+1/2}}.$$
Strong asymptotics for $\lambda_n$ and $p_n$ were also derived by E.R. Johnston in his Ph.D. thesis [@Jo]. These asymptotics, however, were established under analytic assumptions on certain functions related to the conformal maps $\Phi$ and $\Psi$ (as compared to the geometric assumptions on $\Gamma$ in the theorems above) and they do not provide the order of decay of the associated errors. An account of Johnston’s results can be found in [@RW].
In addition, we cite the following representative works about strong asymptotics for complex orthogonal polynomials generated by measures supported on 2-dimensional subsets of $\mathbb{C}$: (a) Szegő’s book [@Sz Ch. XVI], for orthogonal polynomials with respect to the arclength measure (the so-called Szegő polynomials) on analytic Jordan curves; (b) Suetin’s paper [@Su66b], for weighted Szegő polynomials on smooth Jordan curves; (c) Widom’s paper [@Wi69] for weighted Szegő polynomials on systems of smooth Jordan curves and smooth Jordan arcs; (d) the recent paper [@GPSS] by Gustafsson, Putinar, Saff and the author, for Bergman polynomials on systems of smooth Jordan domains.
The above list is by no means complete. Nevertheless, we have not been able to trace in the literature a single result establishing strong asymptotics for orthogonal polynomials defined by measures supported on non-smooth domains, curves or arcs. In this connection, we note that the well-known approach that combines the Riemann-Hilbert reformulation of orthogonal polynomials of Fokas, Its and Kitaev [@FIK91]–[@FIK92], with the method of steepest descent, introduced by Deift and Zhou [@DZ], cannot be applied, at least in its present form, to derive strong asymptotics for Bergman polynomials associated with non-trivial domains. This is so, because this approach produces, invariably, orthogonal polynomials that satisfy a finite-term recurrence relation and this is not the case with the Bergman polynomials, as Theorem \[thm:ftrr\] below shows.
By contrast, strong asymptotics for orthogonal polynomials on the real line and the unit circle is a well-studied subject. From the vast bibliography available, we cite the two volumes of B. Simon [@SimBoI]–[@SimBoII], which contain a comprehensive treatment of the classical and the spectral theory of orthogonal polynomials on the unit circle, and the recent breakthrough paper of Lubinsky [@Lu09], on universality limits for kernel polynomials defined by positive Borel measures in $(-1,1)$.
The paper is organized as follows: In Section \[sec:faber\] we study the properties of associated Faber polynomials and derive a number of preliminary results under the assumptions: (a) $G$ is a bounded domain and (b) $\Gamma$ is a rectifiable Jordan curve. In addition, we state a number of results that are needed in the proofs of the three main theorems, under increasing assumptions on $\Gamma$, namely: (c) $\Gamma$ is a quasiconformal curve and (d) $\Gamma$ is a piecewise analytic curve. The main result of Section \[sec:poly-est\] is a sharp estimate that relates the growth of a polynomial in $\Omega$ to its $L^2$-norm in $G$. This estimate is essential for establishing Theorem \[thm:finepn\]. Sections \[proofs-qc\] and \[proofs-pa\] are devoted to the proofs of the results stated in Section \[sec:faber\], regarding assumptions (c) and (d) respectively. Section \[proofs-main\] contains the proofs of the three main theorems of Section \[section:intro\]. Finally, in Section \[sec:appl\] we present briefly a number of applications of the strong asymptotics and the associated theory.
Theorems \[thm:finelambdan\]–\[thm:finepn\], along with Corollaries \[cor:ratioln\]–\[cor:ratiopn\] and Theorems \[thn:zeros\], \[thm:ftrr\]–\[thm:DP\] have been presented, without proofs, in [@St-CR10].
Preliminary results {#sec:faber}
===================
The Faber polynomials $\{F_n\}_{n=0}^\infty$ of $G$ are defined as the polynomial part of the expansion of $\Phi^n(z)$, near infinity. Therefore, from (\[eq:Phi\]), $$\label{eq:PhinFn}
\Phi^n(z)=F_n(z)-E_n(z),\quad z\in\Omega,$$ where $$\label{eq:Fndef}
F_n(z)=\gamma^{n}z^n+\cdots,$$ is the Faber polynomial of degree $n$ and $$\label{eq:Endef}
E_n(z)=\frac{c^{(n)}_1}{z}+\frac{c^{(n)}_2}{z^2}+\frac{c^{(n)}_3}{z^3}+\cdots,$$ is the singular part of $\Phi^n(z)$. According to the asymptotics established by Carleman, the Bergman polynomial $p_n(z)$ is related to $\Phi^n(z)\Phi^\prime(z)$. Consequently, we consider the polynomial part of $\Phi^n(z)\Phi^\prime(z)$ and denote the resulting sequence of polynomials by $\{G_n\}_{n=0}^\infty$. $G_n(z)$ is the so-called Faber polynomial of the second kind (of degree $n$) and satisfies $$\label{eq:PhinPhipGn}
\Phi^n(z)\Phi^\prime(z)=G_n(z)-H_n(z),\quad z\in\Omega,$$ with $$\label{eq:Gndef}
G_n(z)=\gamma^{n+1}z^n+\cdots,$$ and $$\label{eq:Hndef}
H_n(z)=\frac{a^{(n)}_2}{z^2}+\frac{a^{(n)}_3}{z^3}+\frac{a^{(n)}_4}{z^4}+\cdots,$$ valid in a neighborhood of infinity. It follows immediately from (\[eq:PhinFn\]) and (\[eq:PhinPhipGn\]) that $$\label{eq:GnFn+1}
G_n(z)=\frac{F_{n+1}^\prime(z)}{n+1}\quad\textup{and}\quad H_n(z)=\frac{E_{n+1}^\prime(z)}{n+1}.$$
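For computational purposes, the $F_n$ can be generated from the Laurent coefficients of $\Psi$ alone, without knowing $\Phi$ explicitly, via a classical recurrence: writing $\Psi(w)=w+c_0+c_1/w+c_2/w^2+\cdots$ (the normalization $\gamma=1$ is assumed here; this sketch is taken from the general Faber-polynomial literature, not from the present argument), one has $F_0=1$ and $F_{n+1}(z)=zF_n(z)-\sum_{k=0}^{n}c_kF_{n-k}(z)-nc_n$. For $\Psi(w)=w+1/w$, which maps $\Delta$ onto the complement of the segment $[-2,2]$, this reproduces the well-known identity $F_n(z)=2T_n(z/2)$, with $T_n$ the Chebyshev polynomial of the first kind.

```python
def faber_polynomials(c, N):
    """Coefficients of the Faber polynomials F_0, ..., F_N for the
    exterior map Psi(w) = w + c[0] + c[1]/w + c[2]/w^2 + ...
    (capacity gamma = 1), via the classical recurrence
        F_{n+1}(z) = z F_n(z) - sum_{k=0}^{n} c[k] F_{n-k}(z) - n c[n].
    Each polynomial is a list of coefficients, constant term first."""
    c = c + [0.0] * (N + 1 - len(c))  # pad with zeros
    F = [[1.0]]                       # F_0 = 1
    for n in range(N):
        new = [0.0] + F[n][:]                  # z * F_n
        for k in range(n + 1):                 # - sum_k c_k F_{n-k}
            for j, a in enumerate(F[n - k]):
                new[j] -= c[k] * a
        new[0] -= n * c[n]                     # - n c_n
        F.append(new)
    return F

# Psi(w) = w + 1/w: F_n(z) = 2 T_n(z/2), e.g.
# F_2 = z^2 - 2, F_3 = z^3 - 3z, F_4 = z^4 - 4z^2 + 2
F = faber_polynomials([0.0, 1.0], 4)
assert F[2] == [-2.0, 0.0, 1.0]
assert F[3] == [0.0, -3.0, 0.0, 1.0]
assert F[4] == [2.0, 0.0, -4.0, 0.0, 1.0]
```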
\[lem:Hn-in-L2\] For any $n\in\mathbb{N}$ it holds that $H_n\in L_a^2(\Omega)$.
First we observe that the function $\Phi^n(z)\Phi^\prime(z)$ is square integrable in the bounded doubly-connected domain $D_R$, defined by the boundary curve $\Gamma$ and the level line $$L_R:=\{z:|\Phi(z)|=R\}=\{\Psi(w):|w|=R\},\quad R>1.$$ Indeed, by making the change of variables $w=\Phi(z)$, we have $$\int_{D_R}|\Phi^n(z)\Phi^\prime(z)|^2dA(z)=\int_{1<|w|<R}|w|^{2n}dA(w)=
\frac{\pi}{n+1}\{R^{2(n+1)}-1\}.$$ Therefore, $$\begin{aligned}
\label{eq:intHn1}
\left[\int_{D_R}|H_n(z)|^2dA(z)\right]^{1/2}
&\le\left[\int_{D_R}|\Phi^n(z)\Phi^\prime(z)|^2dA(z)\right]^{1/2}+
\left[\int_{D_R}|G_n(z)|^2dA(z)\right]^{1/2}\nonumber\\
&<\infty.\end{aligned}$$
Next, from the splitting (\[eq:PhinPhipGn\]) we see that $H_n(z)$ is analytic in $\Omega$ and has a double zero at infinity. Assume that (\[eq:Hndef\]) is valid for $|z|>R_1$. Then $\limsup_{k\to\infty}|a_k^{(n)}|^{1/k}\le R_1$ and, hence, the estimate $$\label{eq:ank-esti}
|a_k^{(n)}|\le c\, R_2^k$$ holds for some $R_2>R_1$. Therefore, for any $R_3>1$, with $R_3>R_2$, we have: $$\begin{aligned}
\label{eq:intHn2}
\int_{|z|>R_3}|H_n(z)|^2dA(z)
&=\int_0^{2\pi}\int_{R_3}^\infty|H_n(r\textup{e}^{i\theta})|^2rdrd\theta=
\sum_{k=2}^\infty\frac{|a_k^{(n)}|^2}{(k-1)R_3^{2(k-1)}}\nonumber\\
&\le c \sum_{k=2}^\infty\frac{R_2^{2k}}{(k-1)R_3^{2(k-1)}}<\infty.\end{aligned}$$
Now, choose $R$ sufficiently large so that $D_R$ contains the circle $\{z:|z|=R_3\}$. Then, the result $\|H_n\|_{L^2(\Omega)}<\infty$ follows at once from the two estimates (\[eq:intHn1\]) and (\[eq:intHn2\]).
Remark \[rem:beta-eps\] and Theorem \[thm:epsn\] below show that considerably more can be said about the behaviour of $\|H_n\|_{L^2(\Omega)}$, under additional assumptions on $\Gamma$.
Results for rectifiable boundary {#sec2:rect}
--------------------------------
We assume now that the boundary $\Gamma$ is *rectifiable*. (Further assumptions on $\Gamma$ will be imposed in various parts of the paper.) For rectifiable $\Gamma$, Cauchy’s integral formula yields the following representation for the Faber polynomial $F_n(z)$ and its corresponding singular part $E_n(z)$: $$\label{eq:FnIntRep}
F_n(z)=\frac{1}{2\pi i}\int_\Gamma\frac{\Phi^n(\zeta)}{\zeta-z}\,d\zeta,\quad z\in G,$$ $$\label{eq:EnIntRep}
E_n(z)=\frac{1}{2\pi i}\int_\Gamma\frac{\Phi^n(\zeta)}{\zeta-z}\,d\zeta,\quad z\in \Omega.$$
It is well known that the assumption on $\Gamma$ implies that $\Phi^\prime$ belongs to the Smirnov class $E^1(\Omega)$, that both $\Phi^\prime$ and $\Psi^\prime$ have non-tangential limits almost everywhere on $\Gamma$ and $\partial\mathbb{D}$, respectively, and that they are integrable with respect to the arclength measure, i.e., $$\label{eq:Phi-prime-in-E1}
\int_\Gamma |\Phi^\prime(\zeta)|\,|d\zeta|<\infty,\quad \textup{and}\quad
\int_\mathbb{T} |\Psi^\prime(t)|\,|dt|<\infty;$$ see, e.g., [@Du70 Ch. 10], [@KhD82] and [@Po92 §6.3]. Hence $H_n\in E^1(\Omega)$ and therefore (\[eq:PhinPhipGn\]) yields the following two Cauchy representations: $$\label{eq:GnIntRep}
G_n(z)=\frac{1}{2\pi i}\int_\Gamma\frac{\Phi^n(\zeta)\Phi^\prime(\zeta)}{\zeta-z}\,d\zeta,\quad z\in G,$$ and $$\label{eq:HnIntRep}
H_n(z)=\frac{1}{2\pi i}\int_\Gamma\frac{\Phi^n(\zeta)\Phi^\prime(\zeta)}{\zeta-z}\,d\zeta,\quad z\in \Omega;$$ cf. [@Du70 Thm 10.4]. We note the following estimate, which is a simple consequence of (\[eq:Phi-prime-in-E1\]) and the representation (\[eq:HnIntRep\]): $$\label{eq:Hn-decay-rect}
|H_n(z)|\le\frac{c_1(\Gamma)}{\operatorname{dist}(z,\Gamma)},\quad z\in\Omega.$$
Next, we single out three identities, which we are going to use below.
\[lem:usefull-1\] Assume that the boundary $\Gamma$ is rectifiable. Then, for any $m,n\in\mathbb{N}$, there holds: $$\label{eq:PhimPhin}
\frac{1}{2\pi i}\int_\Gamma \Phi^m(z)\Phi^\prime(z)\overline{\Phi^{n+1}(z)}dz=\delta_{m,n},$$ and $$\label{eq:PhiEmHn}
\int_\Gamma H_m(z)\overline{\Phi^{n+1}(z)}dz=0=
\int_\Gamma \Phi^m(z)\Phi^\prime(z)\overline{E_{n+1}(z)}dz,$$ where $\delta_{m,n}$ denotes Kronecker’s delta function.
Since $\Phi^\prime\in E^1(\Omega)$, the application of Cauchy’s theorem and the change of variables $w=\Phi(z)$ give, for some $R>1$, $$\begin{alignedat}{1}
\frac{1}{2\pi i}\int_\Gamma \Phi^m(z)\Phi^\prime(z)\overline{\Phi^{n+1}(z)}dz
&=\frac{1}{2\pi i}\int_\Gamma \frac{\Phi^m(z)\Phi^\prime(z)}{\Phi^{n+1}(z)}dz\\
&=\frac{1}{2\pi i}\int_{L_R} \frac{\Phi^m(z)\Phi^\prime(z)}{\Phi^{n+1}(z)}dz
=\frac{1}{2\pi i}\int_{|w|=R} \frac{w^m\,dw}{w^{n+1}},
\end{alignedat}$$ and the result (\[eq:PhimPhin\]) follows from the residue theorem.
Next, using the splitting (\[eq:PhinPhipGn\]) in conjunction with (\[eq:PhimPhin\]) we obtain: $$\begin{alignedat}{1}
\frac{1}{2\pi i}\int_\Gamma H_m(z)\overline{\Phi^{n+1}(z)}dz
&=\frac{1}{2\pi i}\int_\Gamma G_{m}(z) \overline{\Phi^{n+1}(z)}dz\\
&-\frac{1}{2\pi i}\int_\Gamma \Phi^m(z)\Phi^\prime(z) \overline{\Phi^{n+1}(z)}dz\\
&=\frac{1}{2\pi i}\int_\Gamma \frac{G_{m}(z)}{\Phi^{n+1}(z)}dz-\delta_{m,n}.
\end{alignedat}$$ The first identity in (\[eq:PhiEmHn\]) then follows from the residue theorem, because the value of the last integral is $\delta_{m,n}$; cf. (\[eq:Phi\]) and (\[eq:Gndef\]).
Finally, using the splitting (\[eq:PhinFn\]), and making the change of variables $w=\Phi(z)$, we obtain from (\[eq:PhimPhin\]) that $$\begin{alignedat}{1}
\frac{1}{2\pi i}\int_\Gamma \Phi^m(z)\Phi^\prime(z)\overline{E_{n+1}}(z)\,dz
&=\frac{1}{2\pi i}\int_\Gamma \Phi^m(z)\Phi^\prime(z) \overline{F_{n+1}(z)}dz\\&
-\frac{1}{2\pi i}\int_\Gamma \Phi^m(z)\Phi^\prime(z) \overline{\Phi^{n+1}(z)}dz\\
&=\frac{1}{2\pi i}\int_{|w|=1} w^m\overline{F_{n+1}(\Psi(w))}\,dw-\delta_{m,n}.
\end{alignedat}$$ The second identity in (\[eq:PhiEmHn\]) then follows by means of the residue theorem, which again implies that the value of the last integral is $\delta_{m,n}$. This is readily verified by noting that (\[eq:Psi\]) and (\[eq:Fndef\]) give, for $|w|=1$, $$\overline{F_{n+1}(\Psi(w))}=\overline{\gamma^{n+1}b^{n+1}w^{n+1}+\cdots}=\overline{w^{n+1}+\cdots}=1/w^{n+1}+\cdots.$$
With the help of $G_n(z)$ we define an auxiliary polynomial that plays a crucial role in the course of our study: $$\label{eq:qndef}
q_{n-1}(z):=G_n(z)-\frac{\gamma^{n+1}}{\lambda_n}p_n(z),\quad n\in\mathbb{N}.$$ Observe that $q_{n-1}(z)$ has degree at most $n-1$, but it may vanish identically, as the special case $G\equiv\mathbb{D}$ shows.
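In the special case $G\equiv\mathbb{D}$ the vanishing of $q_{n-1}$ can be checked explicitly: there $\Phi(z)=z$, $\gamma=1$ and $p_n(z)=\sqrt{(n+1)/\pi}\,z^n$, so that $\lambda_n=\sqrt{(n+1)/\pi}$ and $G_n(z)=\gamma^{n+1}z^n=z^n$. Hence $$q_{n-1}(z)=G_n(z)-\frac{\gamma^{n+1}}{\lambda_n}\,p_n(z)
=z^n-\sqrt{\frac{\pi}{n+1}}\,\sqrt{\frac{n+1}{\pi}}\,z^n\equiv 0.$$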
By noting the relation $$\label{eq:pnPhinPhip}
p_n(z)=\frac{\lambda_n}{\gamma^{n+1}}\,\Phi^n(z)\Phi^\prime(z)
\left\{1+\frac{H_n(z)}{\Phi^n(z)\Phi^\prime(z)}
-\frac{q_{n-1}(z)}{\Phi^n(z)\Phi^\prime(z)}\right\},$$ which follows at once from (\[eq:PhinPhipGn\]) and (\[eq:qndef\]) and is valid for any $z\in\Omega$ (since $\Phi^\prime(z)\ne 0$), it is not surprising that we formulate our results in terms of the following two sequences of nonnegative numbers: $$\label{eq:betandef}
\beta_n:=\frac{n+1}{\pi}\|q_{n-1}\|^2_{L^2(G)},\quad n\in\mathbb{N},$$ and $$\label{eq:epsndef}
\varepsilon_{n}:=\frac{n+1}{\pi}\|H_n\|^2_{L^2(\Omega)},\quad n\in\mathbb{N}.$$
The proofs of Theorems \[thm:finelambdan\] and \[thm:finepn\] amount, eventually, to establishing that the two sequences $\{\beta_n\}_{n\in\mathbb{N}}$ and $\{\varepsilon_n\}_{n\in\mathbb{N}}$ decay to zero like $O(1/n)$. To this end, a representation of $\beta_n$ and $\varepsilon_n$ as line integrals will be useful:
\[lem-usefull-2\] Assume that the boundary $\Gamma$ is rectifiable. Then, for any $n\in\mathbb{N}$, there holds: $$\label{eq:betanEn}
\beta_n=\frac{1}{2\pi i}\,\int_\Gamma q_{n-1}(z)\overline{E_{n+1}(z)}\,dz,$$ and $$\label{eq:epsnEn}
\varepsilon_{n}=-\frac{1}{2\pi i}\,\int_\Gamma H_n(z)\overline{E_{n+1}(z)}\,dz.$$
To derive (\[eq:betanEn\]) we use the orthogonality of $p_n$, (\[eq:GnFn+1\]) and Green’s formula to conclude, in steps, that $$\begin{alignedat}{1}
\|q_{n-1}\|^2_{L^2(G)}
&=\langle q_{n-1},G_n-\frac{\gamma^{n+1}}{\lambda_n}p_n\rangle=\langle q_{n-1},G_n\rangle \\
&=\int_G q_{n-1}(z)\overline{G_{n}(z)}\,dA(z)
=\frac{1}{n+1}\int_G q_{n-1}(z)\overline{F_{n+1}^\prime(z)}\,dA(z)\\
&=\frac{\pi}{n+1}\,\frac{1}{2\pi i}\int_\Gamma q_{n-1}(z)\overline{F_{n+1}(z)}\,dz.
\end{alignedat}$$ Hence, from (\[eq:PhinFn\]), $$\frac{n+1}{\pi}\|q_{n-1}\|^2_{L^2(G)}
=\frac{1}{2\pi i}\int_\Gamma q_{n-1}(z)\overline{E_{n+1}(z)}\,dz
+\frac{1}{2\pi i}\int_\Gamma q_{n-1}(z)\overline{\Phi^{n+1}(z)}\,dz,$$ and the result (\[eq:betanEn\]) follows, because the last integral vanishes, as can be readily seen after replacing $\overline{\Phi^{n+1}(z)}$ by $1/\Phi^{n+1}(z)$ and applying the residue theorem.
Next, we recall that $E_{n+1}$ is analytic in $\Omega$, including $\infty$, and continuous on $\overline{\Omega}$, and that $H_n\in L_a^2(\Omega)\cap E^1(\Omega)$. The result (\[eq:epsnEn\]) follows from the application of Green’s formula in the unbounded domain $\Omega$ and (\[eq:GnFn+1\]). That is, $$\begin{alignedat}{1}
-\frac{1}{2\pi i}\,\int_\Gamma H_n(z)\overline{E_{n+1}(z)}\,dz
&=\frac{1}{\pi}\,\int_\Omega H_n(z)\overline{E^\prime_{n+1}(z)}\,dA(z)
=\frac{n+1}{\pi}\|H_n\|^2_{L^2(\Omega)}.
\end{alignedat}$$
It turns out that the strong asymptotic error $\alpha_n$ has a very simple connection with the quantities $\beta_n$ and $\varepsilon_{n}$, namely, $$\alpha_n=\beta_n+\varepsilon_{n}.$$ (This, actually, explains the presence of the factor $(n+1)/\pi$ in the definition of $\beta_n$ and $\varepsilon_{n}$ above.)
\[lem:alphabetaeps\] Assume that the boundary $\Gamma$ is rectifiable. Then, for any $n\in\mathbb{N}$, it holds that $$\label{eq:alphabetaeps}
\frac{n+1}{\pi}\frac{\gamma^{2(n+1)}}{\lambda_n^2}=1-\left(\beta_n+\varepsilon_{n}\right).$$
Green’s formula, in conjunction with (\[eq:GnFn+1\]), yields: $$\|G_n\|_{L^2(G)}^2=\frac{1}{n+1}\,\int_G G_{n}(z)\overline{F^\prime_{n+1}(z)}\,dA(z)
=\frac{\pi}{n+1}\frac{1}{2\pi i}\,\int_\Gamma G_{n}(z)\overline{F_{n+1}(z)}\,dz.$$ Next, replace in the last integral $G_n(z)$ and $F_{n+1}(z)$ by their counterparts given in the splittings (\[eq:PhinFn\]) and (\[eq:Endef\]), respectively, to obtain: $$\begin{alignedat}{1}
\int_\Gamma G_{n}(z)\overline{F_{n+1}(z)}\,dz
&=\int_\Gamma \Phi^n(z)\Phi^\prime(z)\overline{\Phi^{n+1}(z)}dz
+\int_\Gamma \Phi^n(z)\Phi^\prime(z)\overline{E_{n+1}(z)}dz\\
&+\int_\Gamma H_n(z)\overline{\Phi^{n+1}(z)}dz
+\int_\Gamma H_n(z)\overline{E_{n+1}(z)}dz.
\end{alignedat}$$ It therefore follows from Lemma \[lem:usefull-1\] that $$\|G_n\|_{L^2(G)}^2=\frac{\pi}{n+1}\left[1+\frac{1}{2\pi i}\,\int_\Gamma H_{n}(z)\overline{E_{n+1}(z)}\,dz\right],$$ which, in view of Lemma \[lem-usefull-2\], yields the relation $$\label{eq:Gnepsn}
\|G_n\|_{L^2(G)}^2=\frac{\pi}{n+1}\,\left(1-\varepsilon_n\right),\quad n\in\mathbb{N}.$$
On the other hand, since $p_n\bot\,q_{n-1}$, we have from the Pythagorean theorem that $$\begin{aligned}
\label{eq:pytha}
\|G_n\|^2_{L^2(G)}=\|\frac{\gamma^{n+1}}{\lambda_n}\,p_n+q_{n-1}\|^2_{L^2(G)}
=\frac{\gamma^{2(n+1)}}{\lambda_n^2}+\|q_{n-1}\|^2_{L^2(G)},\end{aligned}$$ and (\[eq:alphabetaeps\]) follows by comparing (\[eq:Gnepsn\]) with (\[eq:pytha\]) and using the definition of $\beta_n$ in (\[eq:betandef\]).
\[rem:beta-eps\] It follows immediately from (\[eq:alphabetaeps\]) and (\[eq:Gnepsn\]) that $$\label{eq:beta-eps}
0\le\beta_n+\varepsilon_n <1, \quad 0\le\varepsilon_n <1\quad \textup{and}\quad 0\le\beta_n<1.$$ In particular, these inequalities lead to the following three estimates $$\label{eq:|Gn|-rect}
\|G_n\|_{L^2(G)}\le\sqrt{\frac{\pi}{n+1}},\quad n\in\mathbb{N},$$ and $$\label{eq:|qn|-rect}
\|q_{n-1}\|_{L^2(G)}<\sqrt{\frac{\pi}{n+1}},\quad
\|H_n\|_{L^2(\Omega)}<\sqrt{\frac{\pi}{n+1}},\quad n\in\mathbb{N},$$ provided that $\Gamma$ is rectifiable. The inequality in (\[eq:|Gn|-rect\]) is sharp, as the case $G\equiv\mathbb{D}$ shows. Furthermore, Lemma \[lem-usefull-2\] implies that $\beta_n$ and $\varepsilon_n$ vanish simultaneously if $G$ is a disk.
Results for quasiconformal boundary {#sec2:qc}
-----------------------------------
For the next three results we need additional assumptions on $\Gamma$. Their respective proofs are given in Section \[proofs-qc\]. The first of them is essential in the proof of Theorem \[thm:betan\] below, and is of independent interest, in the sense that it provides an estimate for the integral on $\Gamma$ of the product of two functions (one of them defined on $\overline{G}$ and the other on $\overline{\Omega}$) in terms of associated $L^2$-norms in $G$ and $\Omega$.
\[lem:useful-1\] Assume that $\Gamma$ is quasiconformal and rectifiable. Then, for any $f$ analytic in $G$, continuous on $\overline{G}$ and $g$ analytic in $\Omega$, continuous on $\overline{\Omega}$, with $g^\prime\in L^2(\Omega)$, there holds that $$\label{eq:useful-1}
\left|\frac{1}{2i}\int_\Gamma f(z)\overline{g(z)}dz \right|\le
\frac{k}{\sqrt{1-k^2}}\,\|f\|_{L^2(G)}\|g^\prime\|_{L^2(\Omega)},$$ where $k$ is the reflection factor of $\Gamma$ defined in (\[eq:refcoef\]).
It is readily verified that when $\Gamma$ is a circle both sides of (\[eq:useful-1\]) vanish.
The second result shows that the sequence $\{\beta_n\}$ is dominated by the sequence $\{\varepsilon_n\}$. Note, in particular, that $\beta_n$, $\varepsilon_n$ and $k$ vanish simultaneously if $\Gamma$ is a circle.
\[thm:betan\] Assume that $\Gamma$ is quasiconformal and rectifiable. Then, for any $n\in\mathbb{N}$, it holds that $$\label{eq:betan}
0\le\beta_n\le \frac{k^2}{1-k^2}\,\,\varepsilon_{n},$$ where $k$ is the reflection factor of $\Gamma$.
The third result relates the decay of $\{\varepsilon_n\}$ to that of the coefficients of the exterior conformal map $\Psi$ and is essential in the proof of Theorem \[thm:alphange\].
\[lem:epsnge\] Assume that $\Gamma$ is quasiconformal. Then, for any $n\in\mathbb{N}$, it holds that $$\label{eq:epsge}
\varepsilon_{n}\ge\,\frac{\pi\,(1-k^2)}{A(G)}\,(n+1)\,|b_{n+1}|^2,$$ where $A(G)$ denotes the area of $G$ and $k$ is the reflection factor of $\Gamma$.
Results for piecewise analytic boundary {#sec2:pw-analytic}
---------------------------------------
The next two theorems are established for $\Gamma$ *piecewise analytic without cusps*. This means that $\Gamma$ consists of a finite number of analytic arcs, say $N$, that meet at corner points $z_j$, $j=1,\ldots,N$, where they form exterior angles $\omega_j\pi$, with $0<\omega_j<2$. The proofs of these theorems are given in Section \[proofs-pa\].
The relation (\[eq:pnPhinPhip\]) reveals that in order to derive the strong asymptotics for $p_n(z)$ in $\Omega$, we need suitable estimates for $q_{n-1}(z)$ and $H_n(z)$ there. For $q_{n-1}(z)$ this is provided by Corollary \[cor:qn-in-Omega\] below. Regarding $H_n(z)$, we can use the estimate (\[eq:Hn-decay-rect\]), which is valid for $\Gamma$ rectifiable. However, under the current assumption on $\Gamma$ more can be obtained.
\[thm:HnOmgae\] Assume that $\Gamma$ is piecewise analytic without cusps. Then, for any $n\in\mathbb{N}$, it holds that $$\label{eq:HnOmega}
|H_n(z)|\le\frac{c_2(\Gamma)}{\operatorname{dist}(z,\Gamma)}\,\frac{1}{n},\quad z\in\Omega,$$ where $c_2(\Gamma)$ depends on $\Gamma$ only.
Regarding the $L^2$-norm of $H_n$ we have the following estimate; cf. (\[eq:|qn|-rect\]).
\[thm:epsn\] Assume that $\Gamma$ is piecewise analytic without cusps. Then, for any $n\in\mathbb{N}$, it holds that $$\label{eq:epsn}
\|H_n\|_{L^2(\Omega)} \le c_3(\Gamma)\,\frac{1}{n},$$ where $c_3(\Gamma)$ depends on $\Gamma$ only.
It is interesting to note a uniformity aspect in both the estimates (\[eq:HnOmega\]) and (\[eq:epsn\]), in the sense that the geometry of $\Gamma$, as it is measured by the values of $\omega_j\pi$, does not influence the way that $H_n(z)$ and $\|H_n\|_{L^2(\Omega)}$ tend to zero. This is somewhat surprising, when compared with similar results in Approximation Theory for domains with corners, and it can be attributed to the fact that the effect of the $\omega_j$’s “cancels out” in the representation (\[eq:HnIntRep\]) of $H_n(z)$; see (\[eq:Iji\]) and Remark \[rem:deltaj\] below.
We conclude this section with a simple consequence of Theorems \[thm:betan\] and \[thm:epsn\].
\[cor:qn-1L2\] Assume that $\Gamma$ is piecewise analytic without cusps. Then, for any $n\in\mathbb{N}$, there holds that $$\label{eq:en-decay}
0\le\varepsilon_n\le c_4(\Gamma)\,\frac{1}{n},$$ and $$\label{eq:qn-1L2}
\|q_{n-1}\|_{L^2(G)}\le c_5(\Gamma)\,\frac{1}{n},$$ where $c_4(\Gamma)$ and $c_5(\Gamma)$ depend on $\Gamma$ only.
A comparison between (\[eq:qn-1L2\]) and (\[eq:|qn|-rect\]) reveals the gain in the rate of decay of $\|q_{n-1}\|_{L^2(G)}$ under the additional assumption on $\Gamma$.
A Polynomial lemma {#sec:poly-est}
==================
In the proof of Theorem \[thm:finepn\] we require an estimate for the growth of the polynomial $q_{n-1}(z)$ in $\Omega$, in terms of its $L^2$-norm in $G$. This is the purpose of the next lemma, which is of independent interest. Its proof is given in Section \[sec:proof-lem:PolyLemma\] below. We use $\mathbb{P}_n$ to denote the space of polynomials of degree up to $n$.
\[lem:PolyLemma\] Assume that $\Gamma$ is quasiconformal and rectifiable. Then, for any $P\in\mathbb{P}_n$, it holds that $$\label{eq:PolyLemma}
|P(z)|\le\frac{1}{\operatorname{dist}(z,\Gamma)\sqrt{1-k^2}}\,\,\sqrt{\frac{n+1}{\pi}}\,
\|P\|_{L^2(G)}\,|\Phi(z)|^{n+1},\quad z\in\Omega,$$ where $k$ is the reflection factor of $\Gamma$.
Regarding the sharpness of the inequality (\[eq:PolyLemma\]), we note that the exponent $1/2$ of $n$ cannot be improved in general, as the choice $P\equiv p_n$ and the strong asymptotics for smooth $\Gamma$ of Section \[section:intro\] show. Furthermore, the constant term is asymptotically optimal for $z\to\infty$, as the choice $P(z)=z^n$, with $G=\mathbb{D}$ (hence $k=0$), shows.
Lemma \[lem:PolyLemma\] should be compared with the following well-known result, which gives the growth of a polynomial in terms of its uniform norm on $\overline{G}$. Hereafter we use $\|\cdot\|_K$ to denote the uniform norm on the set $K$.
\[lem:B-W\] For any $P\in\mathbb{P}_n$, it holds that $$\label{eq:B-W}
|P(z)|\le\|P\|_{\overline{G}}\,\,|\Phi(z)|^{n},\quad z\in\Omega.$$
We note that the inequality (\[eq:B-W\]) is valid under more general assumptions on $\overline{G}$; see, e.g., [@ST p. 153]. We also note the following norm-comparison result, which was quoted by Suetin in [@Su74 p. 38], under the assumption that $\Gamma$ is smooth: $$\label{eq:Suetin-ineq}
\|P\|_{\overline{G}}\le c(\Gamma)\,n\,\|P\|_{L^2(G)},\quad P\in\mathbb{P}_n.$$
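Both growth bounds are easy to probe numerically in the model case $G=\mathbb{D}$, where $\Phi(z)=z$, $k=0$ and $\operatorname{dist}(z,\Gamma)=|z|-1$. The sketch below (the degree $n=12$ and the sample count are arbitrary choices) takes $P(z)=z^n$, for which $\|P\|_{\overline{G}}=1$ and $\|P\|^2_{L^2(G)}=\pi/(n+1)$.

```python
import cmath
import math
import random

random.seed(0)

# Model case G = D (unit disk): Phi(z) = z, reflection factor k = 0,
# dist(z, Gamma) = |z| - 1 for z in Omega.  For P(z) = z^n:
# ||P||_{G-bar} = 1 and ||P||_{L^2(G)} = sqrt(pi/(n+1)).
n = 12
sup_norm = 1.0
l2_norm = math.sqrt(math.pi / (n + 1))

# ||P||_{G-bar} / ||P||_{L^2(G)} = sqrt((n+1)/pi) ~ sqrt(n): well within
# the O(n) comparison quoted in (eq:Suetin-ineq)
ratio = sup_norm / l2_norm
assert ratio <= n

for _ in range(200):
    r = 1.0 + 4.0 * random.random()                  # |z| in (1, 5)
    z = r * cmath.exp(2j * math.pi * random.random())
    P = z ** n
    # Bernstein-Walsh bound (eq:B-W): |P(z)| <= ||P||_{G-bar} |Phi(z)|^n;
    # for P(z) = z^n it holds with equality
    assert abs(P) <= sup_norm * abs(z) ** n * (1 + 1e-9)
    # the L^2 bound of Lemma [lem:PolyLemma] with k = 0 reduces to
    # |P(z)| <= r^{n+1} / (r - 1), which indeed dominates r^n
    bound = (math.sqrt((n + 1) / math.pi) / (r - 1.0)) * l2_norm * r ** (n + 1)
    assert abs(P) <= bound
```

Note that here the quotient of the $L^2$ bound over $|P(z)|$ is $r/(r-1)\to 1$ as $r\to\infty$, illustrating the asymptotic optimality of the constant noted after Lemma \[lem:PolyLemma\].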
To underline the importance of Lemma \[lem:PolyLemma\] for our work here, we observe that the combination of (\[eq:B-W\]) with (\[eq:Suetin-ineq\]) gives the estimate $$|P(z)|\le c(\Gamma)\,n\,\|P\|_{L^2(G)}|\Phi(z)|^{n},\quad z\in\Omega,$$ and this for $P\equiv q_{n-1}$, together with Corollary \[cor:qn-1L2\], yields $$\label{eq:unfor}
|q_{n-1}(z)|\le c(\Gamma)\,|\Phi(z)|^{n},\quad z\in\Omega,$$ provided $\Gamma$ is smooth. Unfortunately, (\[eq:unfor\]) is not adequate for delivering the strong asymptotics for $p_n(z)$, even for smooth $\Gamma$; see the proof of Theorem \[thm:finepn\] in Section \[proofs-main\].
On the other hand, the combination of Lemma \[lem:PolyLemma\] with Corollary \[cor:qn-1L2\] yields the following finer estimate, which suffices to show that $A_n(z)=O(1/\sqrt{n})$ in (\[eqinthm:finepn\]):
\[cor:qn-in-Omega\] Assume that $\Gamma$ is piecewise analytic without cusps. Then, for any $n\in\mathbb{N}$, it holds that $$\label{eq:qn-in-Omega}
|q_{n-1}(z)|\le\frac{c_1(\Gamma)}{\operatorname{dist}(z,\Gamma)}\,\frac{1}{\sqrt{n}}\,|\Phi(z)|^{n},
\quad z\in\Omega,$$ where $c_1(\Gamma)$ depends on $\Gamma$ only.
Proofs for quasiconformal boundary {#proofs-qc}
==================================
Assume now that $\Gamma$ is a quasiconformal curve. Our arguments in this section are based on the use of a $K$-*quasiconformal reflection* $y:\overline{\mathbb{C}}\to\overline{\mathbb{C}}$ defined, for some $K\ge 1$, by $\Gamma$ and a fixed point $a$ in $G$. Below we collect some well-known properties of $y(z)$ that are important for our work here; for a concise account of results in Quasiconformal Mapping Theory we refer to the four monographs [@Ah66], [@LV], [@ABD] and [@AstalaIwaniecMartin]; see also [@Be77 §6].
\[lem:prop-qc\] With the above notations it holds that:
(A1) $\overline{y}$ is a $K$-quasiconformal mapping $\overline{\mathbb{C}}\to\overline{\mathbb{C}}$;

(A2) $y(G)=\Omega$, $y(\Omega)=G$, with $y(a)=\infty$ and $y(\infty)=a$;

(A3) $y(z)=z$, for every $z\in\Gamma$ and $y(y(z))=z$, for all $z\in\mathbb{C}$.
For a function $f:\overline{\mathbb{C}}\to\overline{\mathbb{C}}$ we use the notation $f_{z}$ and $f_{\overline{z}}$ to denote its formal complex derivatives $$f_z:=\frac{\partial f}{\partial z}=
\frac{1}{2}\left(\frac{\partial f}{\partial x}-i\frac{\partial f}{\partial y}\right)
\quad\text{and}\quad
f_{\overline{z}}:=\frac{\partial f}{\partial \overline{z}}=
\frac{1}{2}\left(\frac{\partial f}{\partial x}+i\frac{\partial f}{\partial y}\right).$$ We recall that $f_z=f^\prime$ and $f_{\overline{z}}=0$, whenever $f$ is analytic, and the two identities $$\label{eq:dbar-rule}
\overline{f_z}=\overline{f}_{\overline{z}}\quad\text{and}\quad
\overline{f_{\overline{z}}}=\overline{f}_z.$$ We also recall the chain rule for formal derivatives: if $\zeta=g(z)$, then $$\label{eq:chain-rule-dbar}
(f\circ g)_{\overline{z}}=f_{\zeta}(g(z))\,g_{\overline{z}}(z)+f_{\overline{\zeta}}(g(z))\,\overline{g_z(z)}.$$
The property (A1) implies that $y$ is a sense-reversing homeomorphism of $\overline{\mathbb{C}}$ onto $\overline{\mathbb{C}}$ satisfying, almost everywhere in $\mathbb{C}$, $$\label{eq:yzk}
\left|\frac{y_z}{y_{\overline{z}}}\right|=\left|\frac{\overline{y}_{\overline{z}}}{\overline{y}_{{z}}}\right|
\le k:=\frac{K-1}{K+1}<1.$$ It further implies that $y$ belongs to the Sobolev space $W^{1,2}_{loc}(\mathbb{C})$. Recall that we refer to $k$ as the reflection factor of $\Gamma$ associated with $y$.
Let $$J(y(z)):=|y_z|^2-|y_{\overline{z}}|^2$$ denote the Jacobian of the transformation $y:\overline{\mathbb{C}}\to\overline{\mathbb{C}}$, and note that $J(y(z))<0$, because $y(z)$ is sense-reversing. It follows easily from (\[eq:yzk\]) that $$\label{eq:yzJ}
|y_{\overline{z}}|^2\le\frac{-1}{1-k^2}\,J(y(z))\quad\textup{and}\quad
|y_z|^2\le\frac{-k^2}{1-k^2}\,J(y(z)),$$ almost everywhere in $\mathbb{C}$. Thus, the change of variables $\zeta=y(z)$ and the property (A2) yield immediately the two estimates $$\label{eq:intyzJ}
\int_\Omega|y_{\overline{z}}|^2dA(z)\le\frac{1}{1-k^2}A(G)\quad\textup{and}\quad
\int_\Omega|y_z|^2dA(z)\le\frac{k^2}{1-k^2}A(G),$$ where $A(G)$ stands for the area of $G$.
Proof of Lemma \[lem:useful-1\] {#sec:proof-lem:useful-1}
-------------------------------
Since the function $g(z)$ is analytic in $\Omega$, it follows from (\[eq:dbar-rule\]) and the chain rule (\[eq:chain-rule-dbar\]) that $$\left[(\overline{g}\circ y)(z)\right]_{\overline{z}}=
\overline{g^\prime(y(z))}\,\overline{y_z},\quad z\in G.$$ Hence, it is easy to verify that the function $\left[(\overline{g}\circ y)(z)\right]_{\overline{z}}$ is square integrable in $G$. This is a consequence of the assumption $g^\prime\in L^2(\Omega)$ and the second inequality in (\[eq:yzJ\]). Indeed, using the change of variables $\zeta=y(z)$ we have: $$\begin{aligned}
\label{eq:En-in-W}
\int_G\left|\left[(\overline{g}\circ y)(z)\right]_{\overline{z}}\right|^2dA(z)
=&\int_G\left|g^\prime(y(z))\right|^2\left|y_z\right|^2dA(z)\nonumber \\
\le& \frac{-k^2}{1-k^2}\,\int_G \left|{g^\prime}(y(z))\right|^2 J(y(z))\,dA(z)\nonumber\\
=&\frac{k^2}{1-k^2}\,\int_\Omega \left|{g^\prime}(\zeta)\right|^2 \,dA(\zeta).\end{aligned}$$
Next, we set $$\eta_n:=\frac{1}{2i}\int_\Gamma f(z)\overline{g(z)}dz$$ and observe that, since $y(z)=z$, for $z\in \Gamma$, $\eta_n$ can be written as $$\eta_n
=\frac{1}{2i}\,\int_\Gamma f(z)\,\overline{g(y(z))}\,dz.$$
Finally, we note that the function $g(y(z))$ defines a quasiconformal extension of $g(z)$ into $G$, which is continuous on $\overline{G}$. Therefore, from the assumptions on $f$, $g$ and $\Gamma$ we conclude by means of Green’s formula that $$\begin{alignedat}{1}
\eta_n
=\int_G \left[f(z)\,(\overline{g}\circ y)(z)\right]_{\overline{z}}\,dA(z)
=\int_G f(z)\,\left[(\overline{g}\circ y)(z)\right]_{\overline{z}}\,dA(z),
\end{alignedat}$$ and the estimate (\[eq:useful-1\]) then follows by applying the Cauchy-Schwarz inequality to the last integral and using (\[eq:En-in-W\]).
Proof of Theorem \[thm:betan\] {#sec:proof-thm:betan}
------------------------------
In view of Lemma \[lem:Hn-in-L2\] and (\[eq:GnFn+1\]) we note that $E_{n+1}^\prime\in L^2(\Omega)$ and apply the result of Lemma \[lem:useful-1\], with $f\equiv q_{n-1}$ and $g\equiv E_{n+1}$, to the expression of $\beta_n$ given by (\[eq:betanEn\]) to obtain: $$\beta_n\le \frac{k}{\sqrt{1-k^2}}\,\frac{1}{\pi}\,\|q_{n-1}\|_{L^2(G)}\|E^\prime_{n+1}\|_{L^2(\Omega)}.$$ Therefore, using the definition of $\beta_n$ and $\varepsilon_n$ in (\[eq:betandef\]) and (\[eq:epsndef\]), we conclude that $$\begin{alignedat}{1}
\beta_n^2
&\le \frac{k^2}{1-k^2}\,\frac{(n+1)^2}{\pi^2}\,\|q_{n-1}\|^2_{L^2(G)}\|H_{n}\|^2_{L^2(\Omega)}\\
&=\frac{k^2}{1-k^2}\,\beta_n \,\varepsilon_n,
\end{alignedat}$$ which yields at once the required estimate (\[eq:betan\]).
Proof of Theorem \[lem:epsnge\] {#sec:proof-lem:epsnge}
-------------------------------
Assume that $R>1$ is large enough so that the expansion (\[eq:Endef\]) is valid for all $z\in L_R$. Then, from the residue theorem and the splitting (\[eq:PhinFn\]) we have $$\label{eq:c_1-1}
c_1^{(n+1)}=\frac{1}{2\pi i}\,\int_{L_R}E_{n+1}(z)dz=-\frac{1}{2\pi i}\,\int_{L_R}\Phi^{n+1}(z)dz.$$
Next, by differentiating the expansion (\[eq:Psi\]) of $\Psi(w)$ and applying again the residue theorem we see that $$\label{eq:c_1-2}
-(n+1)b_{n+1}=\frac{1}{2\pi i}\,\int_{|w|=R}w^{n+1}\Psi^\prime(w)dw=\frac{1}{2\pi i}\,\int_{L_R}\Phi^{n+1}(z)dz.$$ Therefore, for any $n\in\mathbb{N}$, $$c_1^{(n+1)}=(n+1)b_{n+1}.$$ This, in view of (\[eq:Endef\]) and (\[eq:GnFn+1\]) shows that $a_2^{(n)}=-b_{n+1}$, where $a_2^{(n)}$ is the coefficient of $1/z^2$ in the expansion (\[eq:Hndef\]) of $H_n(z)$. Hence, another application of the residue theorem yields that $$-b_{n+1}=\frac{1}{2\pi i}\,\int_{L_R}H_{n}(z)zdz=\frac{1}{2\pi i}\,\int_{\Gamma}H_{n}(z)zdz.$$
Furthermore, by using the fact that $y(z)=z$, for $z\in\Gamma$, and the properties of $H_n(z)$ and $y(z)$ in $\Omega$, we obtain with the help of Green’s formula in the unbounded domain $\Omega$: $$\label{eq:bn+1Hn}
b_{n+1}=-\frac{1}{2\pi i}\,\int_{\Gamma}H_{n}(z)y(z)dz=
\frac{1}{\pi}\int_\Omega H_n(z) y_{\overline{z}}\,dA(z).$$ The last integral can be estimated by means of the Cauchy-Schwarz inequality and the first inequality in (\[eq:intyzJ\]). Indeed, $$\begin{alignedat}{1}
\left|\int_\Omega H_n(z) y_{\overline{z}}\,dA(z)\right|
&\le\|H_n\|_{L^2(\Omega)} \left[\frac{1}{1-k^2}A(G)\right]^{1/2},
\end{alignedat}$$ and the required result emerges from (\[eq:bn+1Hn\]) and the definition of $\varepsilon_n$.
Proof of Lemma \[lem:PolyLemma\] {#sec:proof-lem:PolyLemma}
--------------------------------
Let $P\in\mathbb{P}_n$ and fix $z\in\Omega$. Then, the function $P(z)/\Phi^{n+1}(z)$ is analytic in $\Omega$, continuous on $\Gamma$ and vanishes at $\infty$. Hence, from Cauchy’s formula and the property $y(\zeta)=\zeta$, for $\zeta\in\Gamma$, we have $$\frac{P(z)}{\Phi^{n+1}(z)}
=-\frac{1}{2\pi i}\int_\Gamma\frac{g(\zeta)\,d\zeta}{\Phi^{n+1}(\zeta)}
=-\frac{1}{2\pi i}\int_\Gamma\frac{g(\zeta)\,d\zeta}{(\Phi^{n+1}\circ y)(\zeta)},$$ where $g(\zeta):=P(\zeta)/(\zeta-z)$. Now, the function $1/(\Phi^{n+1}\circ y)$ is continuous on $\overline{G}$, and its $\partial/\partial\overline{z}$ derivative belongs to $L^2(G)$; see (\[eq:Phip/Phin\]) below. Hence, from Green’s formula we have that $$\begin{aligned}
\label{eq:P/Phin}
\frac{P(z)}{\Phi^{n+1}(z)} &= -\frac{1}{\pi}\,\int_G
\left[\frac{g(\zeta)}{(\Phi^{n+1}\circ y)(\zeta)}\right]_{\overline{\zeta}}dA(\zeta)\nonumber \\
&= \frac{n+1}{\pi}\,\int_G
g(\zeta)\,\frac{\Phi^\prime(y(\zeta))\,y_{\overline{\zeta}}}{(\Phi^{n+2}\circ y)(\zeta)}dA(\zeta),\end{aligned}$$ where we made use of the fact that $g$ is analytic on $\overline{G}$. Next, using (\[eq:yzJ\]) it is readily seen that $$\begin{aligned}
\label{eq:Phip/Phin}
\int_G\frac{|\Phi^\prime(y(\zeta))|^2\,|y_{\overline{\zeta}}|^2}{|(\Phi^{n+2}\circ y)(\zeta)|^2} dA(\zeta)
&\le \frac{-1}{1-k^2}\int_G\frac{|\Phi^\prime(y(\zeta))|^2\,
J(y(\zeta))}{|(\Phi^{n+2}\circ y)(\zeta)|^2}dA(\zeta)\nonumber \\
&=\frac{1}{1-k^2}\int_\Omega\frac{|\Phi^\prime(t)|^2\,dA(t)}{|\Phi^{n+2}(t)|^2}\nonumber \\
&=\frac{1}{1-k^2}\int_\Delta\frac{dA(w)}{|w^{n+2}|^2}
=\frac{1}{1-k^2}\frac{\pi}{(n+1)}.\end{aligned}$$ Obviously, $$\int_G|g(\zeta)|^2dA(\zeta)
\le\frac{\|P\|_{L^2(G)}^2}{\left(\operatorname{dist}(z,\Gamma)\right)^2},$$ and the result (\[eq:PolyLemma\]) follows from (\[eq:Phip/Phin\]) and the application of the Cauchy-Schwarz inequality to the integral in (\[eq:P/Phin\]).
Proofs for piecewise analytic boundary {#proofs-pa}
======================================
We recall our assumption that $\Gamma$ consists of $N$ analytic arcs, which meet at corner points $z_j$, $j=1,\ldots,N$, forming there exterior angles $\omega_j\pi$, with $0<\omega_j<2$.
The basic idea underlying the work in this section is simple. Extend, using Schwarz reflection, $\Phi$ across each arc of $\Gamma$ inside $G$, so that this extension is conformal in the exterior of a piecewise analytic Jordan curve $\Gamma^\prime$, which shares with $\Gamma$ the same corners $z_j$ and otherwise lies in $G$. $\Gamma^\prime$ can be chosen so that $\Phi$ is analytic on $\Gamma^\prime$, apart from $z_j$. Hence, the four representations (\[eq:FnIntRep\])–(\[eq:EnIntRep\]) and (\[eq:GnIntRep\])–(\[eq:HnIntRep\]) remain valid if $\Gamma$ is deformed to $\Gamma^\prime$. Next, divide $\Gamma^\prime$ into two parts: a part $l$ containing arcs emanating from the corners $z_j$, and a part $\tau$ constituting the complement $\Gamma^\prime\setminus l$, so that there exists a compact subset $B:=B(\Gamma)$ of $G$ which contains $\tau$. When $\zeta\in\tau$, $\Phi(\zeta)^n$ decays geometrically to zero, i.e., $|\Phi(\zeta)|^n=O(\rho^n)$, for some $\rho:=\rho(\Gamma)<1,$ and therefore its contribution is negligible compared with the contribution of $\Phi(\zeta)^n$, for $\zeta\in l$. To make things more precise, we assume (as we may) that $l$ is formed by linear segments, and we denote the two segments meeting at $z_j$ by $l_j^i$, $i=1,2$; see Figure \[fig:Gammapr\].
![The two decompositions: $\Gamma^\prime=l\cup\tau$ and $\Omega=\Omega_1\cup\Omega_2$.[]{data-label="fig:Gammapr"}](decoOmeGamma)
In the sequel, we make extensive use of the following four inequalities:
\[rem:Lehman\] For any $\zeta\in l_j^i$ there holds that:
1. $\displaystyle{|\Phi(\zeta)-\Phi(z_j)|\ge c\,|\zeta-z_j|^{1/\omega_j}}$;
2. $\displaystyle{|\Phi^\prime(\zeta)|\le c\,|\zeta-z_j|^{1/\omega_j-1}}$;
3. $\displaystyle{|\Phi(\zeta)|\le 1-c\,|\zeta-z_j|^{1/\omega_j}}$;
4. $\displaystyle{\operatorname{dist}(\zeta,\Gamma)\ge c\,|\zeta-z_j|}$.
(In Remark \[rem:Lehman\], and below, we use the symbol $c$ generically in order to denote positive constants, possibly different ones, that depend on $\Gamma$ only.)
The inequalities (i) and (ii) emerge from Lehman’s asymptotic expansions for conformal mappings, near an analytic corner [@Lehman]. The third inequality follows easily from (i), because reflection preserves angles. Finally, (iv) is a simple fact of conformal mapping geometry.
Proof of Theorem \[thm:HnOmgae\] {#sec:proof-thm:HnOmgae}
--------------------------------
The proof goes along lines similar to those taken in [@Ga01] for deriving an estimate for $F_n(z)$ in $G$, with one significant difference, though. Here $z$ lies in $\Omega$, rather than $G$, and thus $z$ is allowed to tend to $\Gamma$ without having to alter the curve $\Gamma^\prime$. As a consequence, the set $B$ defined above does not depend on $z$, and thus $\operatorname{dist}(z,\tau)\ge\operatorname{dist}(z,B)>\operatorname{dist}(\Gamma,B)=c(\Gamma)$.
The details are as follows: From the discussion above, it is easy to see that, for $z\in\Omega$, $$\begin{aligned}
\label{eq:Hnparts}
H_n(z)&=\frac{1}{2\pi i}
\int_{\Gamma^\prime}\frac{\Phi^n(\zeta)\Phi^\prime(\zeta)}{\zeta-z}\,d\zeta\nonumber\\
&=\frac{1}{2\pi i}\sum_j\int_{l_j^1\cup l_j^2}
\frac{\Phi^n(\zeta)\Phi^\prime(\zeta)}{\zeta-z}\,d\zeta
+\frac{1}{2\pi i}\int_{\tau}\frac{\Phi^n(\zeta)\Phi^\prime(\zeta)}{\zeta-z}\,d\zeta \nonumber\\
&=\frac{1}{2\pi i}\sum_j\int_{l_j^1\cup l_j^2}\frac{\Phi^n(\zeta)\Phi^\prime(\zeta)}{\zeta-z}\,d\zeta
+O(\rho^n),\end{aligned}$$ for some $\rho:=\rho(\Gamma)<1$, independent of $z$. Hence, we only need to estimate the integral $$I_j^i:=\int_{l_j^i}\frac{\Phi^n(\zeta)\Phi^\prime(\zeta)}{\zeta-z}\,d\zeta.$$
Let $s$ denote the arclength on ${l_j^i}$ measured from $z_j$. Then, Remark \[rem:Lehman\] yields the following two inequalities, which hold for any $\zeta\in l_j^i$: $$\label{eq:InePhi}
|\Phi(\zeta)|\le 1-cs^{1/\omega_j}<\exp(-cs^{1/\omega_j})\quad\textup{and}\quad
|\Phi^\prime(\zeta)|\le cs^{1/\omega_j-1}.$$ Since $1/\omega_j>1/2$, these imply $$\label{eq:Iji}
|I_j^i|\le\frac{c}{\operatorname{dist}(z,\Gamma)}\int_0^\infty \textup{e}^{-cns^{1/\omega_j}}s^{1/\omega_j-1}\,ds=\frac{c\,\omega_j}{\operatorname{dist}(z,\Gamma)}\,\frac{1}{n},$$ and the required estimate (\[eq:HnOmega\]) follows from (\[eq:Hnparts\]).
The next result is needed in establishing Theorem \[thm:epsn\].
\[lem:final\] With $\omega\in(0,2]$ and $k\in\mathbb{N}$, set $\delta:=k^{-\omega}$ and let $$\label{eq:Iome-del-k}
I(\omega,k):=\int_0^{\delta}\left[\int_r^\infty
{\textup{e}^{-ks^{1/\omega}}s^{1/\omega-2}}\,ds \right]^2rdr.$$ Then, $$\label{eq:lemfinal}
I(\omega,k)\le\frac{c}{k^2}.$$
(In the statement and proof of Lemma \[lem:final\] the positive constants $c$ depend on $\omega$ only.)
We consider separately the four cases: (I) $\omega=1$, (II) $0<\omega<1$, (III) $\omega=2$ and (IV) $1<\omega<2$.
*Case* (I): $\omega=1$. Note that $$I(1,k)=\int_0^{\delta}\left[\int_r^\infty\frac{\textup{e}^{-ks}}{s}\,ds \right]^2rdr
=\int_0^{\delta}E_1^2(kr)\,rdr,$$ where $E_1(x)$ denotes the exponential integral $E_1(x):=\int_x^\infty{t^{-1}}{\textup{e}^{-t}}\,dt$, with $x>0$. Using the formula $\int_0^\infty E_1^2(x)dx=2\log 2$ we thus have $$I(1,k)\le\delta\int_0^{\delta}E_1^2(kr)dr<\frac{\delta}{k}\int_0^\infty E_1^2(x)dx=\frac{c}{k^2}.$$
*Case* (II): $0<\omega<1$. Now $1/\omega>1$. Consequently, for $r>0$, $$\int_r^\infty\textup{e}^{-ks^{1/\omega}}s^{1/\omega-2}\,ds\le
\int_0^\infty\textup{e}^{-ks^{1/\omega}}s^{1/\omega-2}\,ds=\omega\,\Gamma(1-\omega)\,k^{\omega-1},$$ where $\Gamma(x):=\int_0^\infty t^{x-1}\textup{e}^{-t}dt$ denotes the Gamma function with argument $x>0$. This yields $$\label{eq:I-CaseII}
I(\omega,k)\le c\,\frac{\delta^2}{k^{2(1-\omega)}}=\frac{c}{k^2}.$$
*Case* (III): $\omega=2$. We note first the formula, valid for $r>0$, $$\int_r^\infty\textup{e}^{-ks^{1/2}}s^{-3/2}\,ds=
2\left({\textup{e}^{-k\,r^{1/2}}}{r^{-1/2}}-k\,E_1(k\,r^{1/2})\right).$$ Therefore, $$I(2,k)< c\,\int_0^\infty\textup{e}^{-2k\,r^{1/2}}dr
+ c\,k^2\int_0^\infty E_1^2(k\,r^{1/2})\,rdr=\frac{c}{k^2}+k^2\frac{c}{k^4}=\frac{c}{k^2}.$$
*Case* (IV): $1<\omega<2$. The result for $1<\omega<2$ can be deduced from Cases (I) and (III). To see this, set $h(\omega,s):={\textup{e}^{-ks^{1/\omega}}s^{1/\omega-2}}$ and split the integral from $r$ to $\infty$ in (\[eq:Iome-del-k\]) into three parts: $$\int_r^\infty h(\omega,s)\,ds
=\int_r^\delta h(\omega,s)\,ds+\int_\delta^1 h(\omega,s)\,ds+\int_1^\infty h(\omega,s)\,ds.$$ Next, observe that if $s\in(0,\delta)\cup(1,\infty)$, then $h(\omega,s)$ is an increasing function of $\omega$, hence $h(\omega,s)\le h(2,s)$. On the other hand, when $s\in(\delta,1)$, then $h(\omega,s)$ is a decreasing function of $\omega$, thus $h(\omega,s)\le h(1,s)$.
Summing up, we therefore have $$\int_r^\infty h(\omega,s)\,ds
\le\int_r^\infty \textup{e}^{-ks^{1/2}}s^{-3/2}\,ds+\int_r^\infty \frac{\textup{e}^{-ks}}{s}\,ds,$$ and the result (\[eq:lemfinal\]) follows easily using the estimates given in Cases (I) and (III).
Proof of Theorem \[thm:epsn\] {#sec:proof-thm:epsn}
-----------------------------
We choose positive quantities $$\label{eq:deltaj}
\delta_j=\delta_{n,j}:=c\,{n^{-\omega_j}},\quad j=1,\ldots,N,$$ where $c$ is small enough so that any two of the $N$ domains $\Omega_1^{j}:=\{z\in\Omega:|z-z_j|<\delta_j\}$, are disjoint from each other. Next, we set $\Omega_{1}:=\cup_j^N\Omega_1^{j}$ and split $\Omega$ into two parts $\Omega_{1}$ and $\Omega_{2}$; see Figure \[fig:Gammapr\].
Using this partition of $\Omega$, we express $\|H_n\|^2_{L^2(\Omega)}$ as the sum of two integrals over $\Omega_1$ and $\Omega_2$. This gives $$\begin{aligned}
\label{eq:I1+I2}
\|H_n\|^2_{L^2(\Omega)}&=\int_{\Omega_{1}}|H_n(z)|^2dA(z)
+\frac{1}{(n+1)^2}\int_{\Omega_{2}}|E^\prime_{n+1}(z)|^2dA(z)\nonumber\\
=:J_1(n)+J_2(n),\end{aligned}$$ where we made use of (\[eq:GnFn+1\]). Hence, deriving the estimate (\[eq:epsn\]) now amounts to showing that: (a) $J_1(n)=O(1/n^2)$ and (b) $J_2(n)=O(1/n^2)$.
**(a)** Let $$T_j(n):=\int_{\Omega_{1}^j}|H_n(z)|^2dA(z),\quad j=1,\ldots,N,$$ so that, $$\label{eq:J1=sumint}
J_1(n)=\sum_{j=1}^NT_j(n).$$
With $z\in\Omega_1^j$, set $r:=|z-z_j|$ and observe that, in view of Remark \[rem:Lehman\] (iv), $|\zeta-z|\approx s+r$, if $\zeta\in l_j^1\cup l_j^2$, while $|\zeta-z|\ge c$, if $\zeta\in l_k^1\cup l_k^2$, with $k\ne j$, where $s$ denotes the arclength on ${l_j^i}$ measured from $z_j$. Consequently, since $\Omega_1^j\subset\{z:|z-z_j|<\delta_j\}$, we obtain using (\[eq:Hnparts\]) and (\[eq:InePhi\]): $$\begin{aligned}
\label{eq:Tjn-le}
T_j(n)&\le
c\int_0^{\delta_j}\left[\int_0^\infty\frac{\textup{e}^{-cns^{1/\omega_j}}s^{1/\omega_j-1}}{r+s}ds
+\sum_{k\ne j}\int_0^\infty\textup{e}^{-cns^{1/\omega_k}}s^{1/\omega_k-1}ds\right]^2rdr \nonumber \\
&\le c\int_0^{\delta_j}
\left[\int_0^\infty\frac{\textup{e}^{-cns^{1/\omega_j}}s^{1/\omega_j-1}}{r+s}ds \right]^2rdr \nonumber\\
&+c\int_0^{\delta_j}\left[\sum_{k\ne j}\int_0^\infty\textup{e}^{-cns^{1/\omega_k}}s^{1/\omega_k-1}ds\right]^2rdr.\end{aligned}$$ Since $$\int_0^\infty\textup{e}^{-cns^{1/\omega_k}}s^{1/\omega_k-1}ds=\frac{\omega_k}{c\, n},$$ it follows from (\[eq:deltaj\]) that $$\label{eq:Tjn-le-1}
\int_0^{\delta_j}\left[\sum_{k\ne j}\int_0^\infty\textup{e}^{-cns^{1/\omega_k}}s^{1/\omega_k-1}ds\right]^2rdr
\le\frac{c}{n^2}\int_0^{\delta_j}rdr\le\frac{c}{n^{2(1+\omega_j)}}.$$ Next by splitting the integral on $(0,\infty)$ into the two parts $(0,r)$ and $(r,\infty)$ we get that $$\begin{aligned}
\int_0^{\delta_j}\left[\int_0^\infty\frac{\textup{e}^{-cns^{1/\omega_j}}s^{1/\omega_j-1}}{r+s}ds\right]^2rdr
&\le
c\int_0^{\delta_j}\left[\int_0^r\frac{\textup{e}^{-cns^{1/\omega_j}}s^{1/\omega_j-1}}{r}ds \right]^2rdr\\
&+c\int_0^{\delta_j}\left[\int_r^\infty{\textup{e}^{-cns^{1/\omega_j}}s^{1/\omega_j-2}}ds \right]^2rdr.\end{aligned}$$ Now we use the estimate $$\int_0^r\textup{e}^{-cns^{1/\omega_j}}s^{1/\omega_j-1}\,ds=\frac{c}{n}(1-\textup{e}^{-cnr^{1/\omega_j}})
<c\,r^{1/\omega_j},$$ and the result of Lemma \[lem:final\] to deduce, respectively, $$\label{eq:last}
\int_0^{\delta_j}\left[\int_0^r\frac{\textup{e}^{-cns^{1/\omega_j}}s^{1/\omega_j-1}}{r}ds \right]^2rdr
<c\,\delta_j^{2/\omega_j}=c\,\frac{1}{n^2}$$ and $$\int_0^{\delta_j}\left[\int_r^\infty{\textup{e}^{-cns^{1/\omega_j}}s^{1/\omega_j-2}}ds \right]^2rdr
\le c\,\frac{1}{n^2}.$$
Summing up we conclude from (\[eq:Tjn-le\]) that $$T_j(n)\le c\,\frac{1}{n^2},\quad\text{for }j=1,\ldots,N,$$ which, in view of (\[eq:J1=sumint\]), leads to the required estimate $$\label{eq:I1le1/n}
J_1(n)\le\frac{c}{n^2}.$$
**(b)** By using Cauchy’s integral formula for the derivative in (\[eq:EnIntRep\]) and arguing as in Section \[sec:proof-thm:HnOmgae\] we obtain, for $z\in\Omega$, that $$\begin{aligned}
\label{eq:En+1Om2}
E^\prime_{n+1}(z)&=&\frac{1}{2\pi i}
\int_{\Gamma^\prime}\frac{\Phi^{n+1}(\zeta)}{(\zeta-z)^2}\,d\zeta\nonumber\\
&=& \frac{1}{2\pi i}\sum_j\int_{l_j^1\cup l_j^2}\frac{\Phi^{n+1}(\zeta)}{(\zeta-z)^2}\,d\zeta
+O(\rho^n),\end{aligned}$$ with $\rho:=\rho(\Gamma)<1$ independent of $z$.
Assume now that $z\in\Omega_2$ and $\zeta\in l_j^1\cup l_j^2$, $j=1,\ldots,N$. Then, the triangle inequality and Remark \[rem:Lehman\] (iv) imply that $|\zeta-z|\ge c\,|z-z_j|$. Thus, by using (\[eq:InePhi\]) we get $$\begin{aligned}
\left|\int_{l_j^1\cup l_j^2}\frac{\Phi^{n+1}(\zeta)}{(\zeta-z)^2}\,d\zeta\right|
\le\frac{c}{|z-z_j|^2}\int_0^\infty \textup{e}^{-cns^{1/\omega_j}}ds
=\frac{c\,\Gamma(\omega_j)}{|z-z_j|^2}\,\frac{1}{n^{\omega_j}}.\end{aligned}$$ This, in conjunction with (\[eq:En+1Om2\]), leads to the estimate $$\int_{\Omega_2}|E^\prime_{n+1}(z)|^2dA(z)
\le c\,\sum_j\frac{1}{n^{2\omega_j}}\int_{\Omega_2}\frac{dA(z)}{|z-z_j|^4}.$$
Finally, since $\Omega_2\subset\{z:|z-z_j|\ge\delta_j\}$, we have from (\[eq:deltaj\]) that $$\begin{aligned}
\label{eq:cannotdobetter}
\int_{\Omega_{2}}|E^\prime_{n+1}(z)|^2dA(z)
&\le c\sum_j\frac{1}{n^{2\omega_j}}\int_{|z-z_j|>\delta_j}\frac{dA(z)}{|z-z_j|^4}\nonumber\\
&=c\sum_j\frac{1}{n^{2\omega_j}\,\delta_j^2}=c,\end{aligned}$$ and this, in view of the definition of $J_2(n)$ in (\[eq:I1+I2\]), yields the required estimate $$\label{eq:I2le1/n}
J_2(n)\le\frac{c}{n^2}.$$
\[rem:deltaj\] It is interesting to note that the choice for $\delta_j$ given by (\[eq:deltaj\]) keeps the estimates (\[eq:I1le1/n\]) and (\[eq:I2le1/n\]) in balance, in the sense that any other choice for $\delta_j$ will result in a weaker estimate for the decay of $\|H_n\|_{L^2(\Omega)}$, as a comparison of (\[eq:I-CaseII\]) and (\[eq:last\]) with (\[eq:cannotdobetter\]) shows.
Proof of the main theorems {#proofs-main}
==========================
The assumption of the theorem implies that $\Gamma$ is quasiconformal and rectifiable. Hence, by comparing (\[eqinthm:finelambdan\]) with (\[eq:alphabetaeps\]) we see that $$\label{eq:albeep}
\alpha_n=\beta_n+\varepsilon_n$$ and the result (\[eqinthm:finelambdanii\]) emerges immediately in view of Theorem \[thm:betan\] and Corollary \[cor:qn-1L2\].
Theorem \[thm:finelambdan\] implies that $$\label{eq:lam/gam}
\frac{\lambda_n}{\gamma^{n+1}}=\sqrt{\frac{n+1}{\pi}}\left\{1+\xi_n\right\},\quad n\in\mathbb{N},$$ where $$\label{eq:lam/gam-2}
0\le\xi_n\le c_1(\Gamma)\,\frac{1}{n}.$$ Therefore, from (\[eq:pnPhinPhip\]) we have, for $z\in\Omega$, that $$p_n(z)=\sqrt{\frac{n+1}{\pi}}\,\Phi^n(z)\Phi^\prime(z)
\left\{1+\xi_n\right\}\left\{1+\frac{H_n(z)}{\Phi^n(z)\Phi^\prime(z)}
-\frac{q_{n-1}(z)}{\Phi^n(z)\Phi^\prime(z)}\right\},$$ which, in comparison with (\[eqinthm:finepn\]), gives the following explicit expression for the error $A_n(z)$: $$\label{eq:A_-esti}
A_n(z)=\xi_n+\left\{1+\xi_n\right\}\frac{1}{\Phi^\prime(z)}\left\{\frac{H_n(z)}{\Phi^n(z)}
-\frac{q_{n-1}(z)}{\Phi^n(z)}\right\}.$$ The required result (\[eqinthm:finepnii1\]) then emerges by using the estimates (\[eq:lam/gam-2\]), (\[eq:HnOmega\]) and (\[eq:qn-in-Omega\]), for $\xi_n$, $H_n(z)$ and $q_{n-1}(z)$, respectively.
Immediately from (\[eq:albeep\]) and Theorem \[lem:epsnge\], since $\beta_n\ge 0$.
Applications {#sec:appl}
============
Strong asymptotics for orthogonal polynomials with respect to measures supported on the real line have played a crucial role in the development of the theory of orthogonal polynomials in $\mathbb{R}$. In order to argue that this would be the case for Bergman polynomials as well, we briefly present a number of applications based on the strong asymptotics of Section \[section:intro\] and the associated theory developed in Sections \[sec:faber\]–\[proofs-pa\].
Zeros of the Bergman polynomials {#subsec:zeros}
--------------------------------
A well-known result of Fejér asserts that *all the zeros of* $\{p_n(z)\}_{n\in\mathbb{N}}$ *are contained in the convex hull* $\textup{Co}(\overline{G})$ of $\overline{G}$. This was refined by Saff [@Sa90] to the interior of $\textup{Co}(\overline{G})$. To these should be added a result of Widom [@Wi67] to the effect that, *on any closed subset $B$ of $\Omega\cap\textup{Co}(\overline{G})$ and for any $n\in\mathbb{N}$, the number of zeros of $p_n(z)$ on $B$ is bounded independently of $n$*. This, of course, does not preclude the possibility that, if $B\neq\emptyset$, then $p_n(z)$ has a zero on $B$, for every $n\in\mathbb{N}$. The next theorem, which is a simple consequence of Theorem \[thm:finepn\], shows that, under an additional assumption on $\Gamma$, the zeros of the sequence $\{p_n(z)\}_{n\in\mathbb{N}}$ cannot accumulate in $\Omega$.
\[thn:zeros\] Assume that $\Gamma$ is piecewise analytic without cusps. Then, for any closed set $B\subset\Omega$, there exists $n_0\in\mathbb{N}$, such that for $n\ge n_0$, $p_n(z)$ has no zeros on $B$.
Weak asymptotics {#subsec:weak}
----------------
The important class **Reg** of measures of orthogonality was introduced by Stahl and Totik in [@StTobo Def. 3.1.2]. Since the area measure $dA$ on $G$ belongs to **Reg**, it follows that $$\label{eq:St-To:weak}
\lim_{n \to\infty}|p_n(z)|^{1/n}=|\Phi(z)|,$$ locally uniformly in $\overline{\mathbb{C}}\setminus\textup{Co}(\overline{G})$; see [@StTobo Thm 3.1.1(ii)]. The next theorem shows how this result can be made more precise, under an additional assumption on the boundary.
Assume that $\Gamma$ is piecewise analytic without cusps. Then, $$\lim_{n \to\infty}|p_n(z)|^{1/n}=|\Phi(z)|,$$ locally uniformly in $\Omega$.
This follows at once by applying Theorem \[thn:zeros\] in conjunction with [@ST Thm III.4.7].
For an account on weak asymptotics for Bergman polynomials defined by a system of disjoint Jordan curves we refer to [@GPSS Prop. 3.1].
Ratio asymptotics {#sec:ratio}
-----------------
The following two corollaries are simple consequences of Theorems \[thm:finelambdan\] and \[thm:finepn\].
\[cor:ratioln\] Assume that $\Gamma$ is piecewise analytic without cusps. Then, for any $n\in\mathbb{N}$, it holds that $$\label{eq:ratioln1}
\sqrt{\frac{n+1}{n+2}}\frac{\lambda_{n+1}}{\lambda_n}=\gamma+\varsigma_n,$$ where $$\label{eq:ratioln2}
|\varsigma_n|\le c_1(\Gamma)\,\frac{1}{n}.$$
\[cor:ratiopn\] Under the assumptions of Corollary \[cor:ratioln\], for any $z\in\Omega$ and sufficiently large $n\in\mathbb{N}$, it holds that $$\label{eq:ratiopn1}
\sqrt{\frac{n+1}{n+2}}\frac{p_{n+1}(z)}{p_n(z)}=\Phi(z)\left\{1+B_n(z)\right\},$$ where $$\label{eq:ratiopn2}
|B_n(z)|\le \frac{c_2(\Gamma)}{\operatorname{dist}(z,\Gamma)|\Phi^\prime(z)|}\,\frac{1}{\sqrt{n}}
+c_3(\Gamma)\,\frac{1}{n}.$$
\[rem:ratio\] The ratio asymptotics above are derived as a consequence of Theorems \[thm:finelambdan\] and \[thm:finepn\]. Thus, we are obliged to assume that $\Gamma$ is piecewise analytic without cusps. Based, however, on substantial numerical evidence (an instance is shown in Table \[tab:cap-hypo\] below) we believe that the ratio asymptotics hold, in the sense that $\varsigma_n=o(1)$ and $B_n(z)=o(1)$, under weaker assumptions on $\Gamma$.
Stability of the Arnoldi GS for polynomials {#subsec:ArnoldiGS}
-------------------------------------------
Let $\mu$ be a (non-trivial) finite Borel measure supported on a compact (and infinite) subset $K$ of the complex plane, and let $\{p_n(z,\mu)\}_{n=0}^\infty$ denote the associated sequence of orthonormal polynomials $$p_n(z,\mu):=\lambda_n(\mu)z^n+\cdots,\quad \lambda_n(\mu)>0, \quad n=0,1,2,\ldots,$$ generated by the inner product $$\langle f,g\rangle_\mu:=\int f(z)\overline{g(z)}d\mu(z).$$
A standard way to construct the sequence $\{p_n(z,\mu)\}_{n=0}^\infty$, even to prove its existence and uniqueness, is by using the Gram-Schmidt (GS) process. This process is designed to turn, in iterative fashion, any polynomial sequence $\{P_n\}_{n=0}^\infty$ into an orthonormal sequence. The main ingredients in the computation are the complex moments $\langle z^m,z^k\rangle_\mu$. The conventional way to apply the GS process is by choosing the monomials as the starting sequence, that is by setting $P_n(z)=z^n$. Indeed, this was suggested (see, e.g., [@He-V3 §18.3–18.4]) and was eventually used (see, e.g., [@PW86]) by people working in Numerical Conformal Mapping, where the need for constructing orthonormal polynomials arises from the application of the Bergman kernel method and its variants.
By the *Arnoldi* GS we mean the application of the GS process in the following way: At the $k$-th step, where the orthonormal polynomial $p_k$ is to be constructed, use the polynomials $\{p_0,p_1,\ldots,p_{k-1},zp_{k-1}\}$, rather than the monomials, as the starting sequence.
Regarding the stability properties of the Arnoldi GS, we note that it is not difficult to show that $$\label{eq:insta-esti}
1\le I_n\le \|z\|_K\frac{\lambda_{n-1}^2(\mu)}{\lambda_n^2(\mu)},$$ for the instability indicator $$\label{eq:In-def}
I_n:=\frac{\|P_n\|^2_{L^2(G)}}
{{\min_{P\in\text{span}(S_{n-1})}}\|P_n-P\|^2_{L^2(G)}}, \quad n\in\mathbb{N}$$ introduced by Taylor in [@Ta78] for the purpose of measuring the instability of the application of the GS process in orthonormalizing the set of polynomials $S_n:=\{P_0,P_1,\ldots,P_n\}$. Note that $I_n=1$, if $S_n$ is already an orthonormal set, while $I_n=\infty$, if $S_n$ is linearly dependent.
In view of Corollary \[cor:ratioln\], the estimate (\[eq:insta-esti\]) implies that the Arnoldi GS process for computing the Bergman polynomials of $G$ is stable, in the sense that the instability indicator $I_n$ does not increase (in fact remains uniformly bounded) with $n$. This is in sharp contrast with the conventional GS, where $I_n$ *increases geometrically fast* with $n$. More specifically, the following estimate for the conventional GS was derived in [@PW86 Thm 3.1]: $$\label{eq:insta-esti-PW}
c_4(\Gamma)L^{2n}\le I_n\le c_5(\Gamma)L^{2n},$$ where $L:=\|z\|_\Gamma/{{\mathrm{cap}}(\Gamma)}$. Note that $L>1$, unless $G$ is a disk centered at the origin, where $L=1$. For a comprehensive account on the damaging effects of the conventional GS process on the computation of Bergman polynomials we refer to [@PW86].
It is interesting to note that although Arnoldi’s original paper [@Ar51] appeared in 1951, and the Arnoldi implementation of the GS process has been used in Numerical Linear Algebra ever since, we first encountered its implementation in connection with the computation of orthogonal polynomials much later in [@GrRe-87], where it was proposed for the computation of Szegő polynomials without reference, however, to its stability properties.
Computation of $\Phi(z)$ and $\text{cap}(\Gamma)$ {#sec:comp-cap}
-------------------------------------------------
Since $\textup{cap}(\Gamma)=b=1/\gamma$, Corollary \[cor:ratioln\] provides the means for computing approximations to the capacity of $\Gamma$ by using only the leading coefficients of the Bergman polynomials. Similarly, Corollary \[cor:ratiopn\] suggests a simple numerical method for computing approximations to the conformal map $\Phi(z)$. This is quite appealing, in the sense that the Bergman polynomials, alone, suffice to provide approximations to both the interior conformal map $G\to\mathbb{D}$ (via the well-known Bergman kernel method) and the exterior conformal map $\Omega\to\Delta$, associated with the same Jordan curve. We refer to [@LySt] for the current state of the convergence theory of the Bergman kernel method. Regarding the exterior map we propose here the following approximation algorithm.
[Approximation of Capacities and Exterior Conformal Maps]{}
1. Compute the complex moments $$\label{eq:mom}
\mu_{m,k}:=\langle z^m,z^k\rangle_G=\int_G z^m\overline{z}^k dA(z),\quad m,k=0,1,\ldots,n.$$
2. Employ the Arnoldi GS process to construct the Bergman polynomials $\{p_k\}_{k=0}^n$ using the moments $\mu_{m,k}$.
3. Set $$\label{eq:approx-cap-Phi}
b^{(n)}:=\sqrt{\frac{n+1}{n}}\frac{\lambda_{n-1}}{\lambda_n}\quad
\text{and}\quad\Phi_n(z):=\sqrt{\frac{n}{n+1}}\frac{p_n(z)}{p_{n-1}(z)}.$$
4. Approximate $\textup{cap}(\Gamma)$ by $b^{(n)}$ and $\Phi(z)$ by $\Phi_n(z)$.
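To make the four steps concrete, the following is a minimal numerical sketch in Python (the computations reported below were carried out in MAPLE; the function names and the disk test case are ours). Inner products are evaluated through the moment matrix, and step $k$ of the Arnoldi GS starts from $zp_{k-1}$, as described in Section \[subsec:ArnoldiGS\]:

```python
import numpy as np
from math import comb, pi

def disk_moments(c, R, n):
    # mu[m, k] = <z^m, z^k>_G over the disk |z - c| < R, obtained by expanding
    # z = c + w and using <w^i, w^j> = pi R^{2i+2}/(i+1) delta_{ij}.
    mu = np.zeros((n + 1, n + 1), dtype=complex)
    for m in range(n + 1):
        for k in range(n + 1):
            mu[m, k] = sum(comb(m, i) * comb(k, i)
                           * c**(m - i) * np.conj(c)**(k - i)
                           * pi * R**(2 * i + 2) / (i + 1)
                           for i in range(min(m, k) + 1))
    return mu

def arnoldi_gs(mu, n):
    # Column k of C holds the monomial coefficients of p_k.  Step k starts
    # from z*p_{k-1} (Arnoldi GS), not from the monomial z^k.
    inner = lambda u, v: u @ mu @ v.conj()     # <f, g>_G via the moments
    C = np.zeros((n + 1, n + 1), dtype=complex)
    e0 = np.zeros(n + 1, dtype=complex); e0[0] = 1.0
    C[:, 0] = e0 / np.sqrt(inner(e0, e0).real)
    for k in range(1, n + 1):
        v = np.zeros(n + 1, dtype=complex)
        v[1:] = C[:-1, k - 1]                  # coefficients of z*p_{k-1}
        for j in range(k):
            v -= inner(v, C[:, j]) * C[:, j]
        C[:, k] = v / np.sqrt(inner(v, v).real)
    return C

def capacity_estimate(mu, n):
    # Step 3: b^(n) = sqrt((n+1)/n) * lambda_{n-1} / lambda_n.
    lam = arnoldi_gs(mu, n).diagonal().real    # leading coefficients lambda_k
    return np.sqrt((n + 1) / n) * lam[n - 1] / lam[n]
```

For the disk $|z-c|<R$ the Bergman polynomials are shifted monomials and $b^{(n)}=R$ exactly, so the routine can be checked against `disk_moments`; for a general $\Gamma$ one supplies the moments (\[eq:mom\]) instead.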
We demonstrate the performance of the above algorithm in the computation of capacities only. We do so by presenting numerical results for two examples: (a) the canonical square with boundary $\Pi_4$, discussed in Section \[subsec:coef-esti\] below, and (b) the 3-cusped hypocycloid with boundary $H_3$, defined by (\[eq:Psi-hypom\]) with $m=3$. We note that $H_3$ does not satisfy the requirements of Corollary \[cor:ratioln\]. The capacity of $\Pi_4$ is given explicitly in (\[eq:cap-square\]), while clearly, $\text{cap}(H_3)=1$. In both cases the complex moments are known explicitly. The details of the presentation are as follows:
Let $t_n$ denote the error in approximating the capacity, i.e., $$\label{eq:tn-dfn}
t_n:=b^{(n)}-{{\mathrm{cap}}(\Gamma)}.$$ Since ${{\mathrm{cap}}(\Gamma)}=b$, it follows from Corollary \[cor:ratioln\] that $$\label{eq:tn-decay}
|t_n|\le c(\Gamma)\frac{1}{n},\quad n\in\mathbb{N}.$$ In Tables \[tab:cap-square\]–\[tab:cap-hypo\] we report the computed values of $b^{(n)}$ and $t_n$, with $n$ varying from $100$ to $400$. We also report the values of the parameter $s$, which is designed to test the hypothesis that $|t_n|\approx 1/n^s$. All computations presented in this paper were carried out on a desktop PC, using the computing environment MAPLE in high precision. Thus, in view of the stability properties of the Arnoldi GS process discussed in Section \[subsec:ArnoldiGS\], we expect all the figures quoted in the tables to be correct.
The numbers listed on the tables show that the proposed algorithm constitutes a valid method for computing capacities. It is interesting to note that in both cases the presented values of $b^{(n)}$ decay monotonically to the capacity. Also, the values of the parameter $s$ indicate clearly that for the case of the square $|t_n|\approx 1/n^2$. This behaviour can be explained if $\alpha_n\approx 1/n$, for the strong asymptotic error of the leading coefficient. For the case of the cusped hypocycloid, however, no safe conclusions can be drawn for the behaviour of $t_n$ from the values reported in Table \[tab:cap-hypo\].
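The tables do not state the formula behind the parameter $s$; the following estimator (an assumption on our part, consistent with the quoted values) fits the model $|t_n|\approx 1/n^s$ through two consecutive entries:

```python
import math

def rate_exponent(n1, b1, n2, b2, cap):
    # Fit |t_n| ~ 1/n^s through the entries (n1, b1) and (n2, b2):
    # s = log(|t_{n1}|/|t_{n2}|) / log(n2/n1), where t_n = b^(n) - cap.
    t1, t2 = b1 - cap, b2 - cap
    return math.log(abs(t1 / t2)) / math.log(n2 / n1)
```

Applied to the $n=100$ and $n=110$ rows of Table \[tab:cap-square\], it returns $s\approx 1.99$, matching the reported value $1.9902$.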
--------------- ------------- ---------- -----------------------------------------
$n$             $b^{(n)}$     $t_n$      $s$
100             0.834640612   1.37e-05   -
110 0.834638233 1.14e-05 1.9902
120 0.834636420 9.58e-06 1.9911
130 0.834635009 8.16e-06 1.9918
140 0.834633888 7.04e-06 1.9924
150 0.834632982 6.14e-06 1.9930
160 0.834632341 5.39e-06 1.9934
170 0.834631626 4.78e-06 1.9938
180 0.834631111 4.26e-06 1.9942
190 0.834630674 3.83e-06 1.9945
200 0.834630301 3.46e-06 1.9949
--------------- ------------- ---------- -----------------------------------------
: Square: The approximation $b^{(n)}$ of $\text{cap}(\Pi_4)=0.834\,626\,841\cdots$.[]{data-label="tab:cap-square"}
--------------- ------------- ---------- -----------------------------------------
$n$             $b^{(n)}$     $t_n$      $s$
300             1.000117809   1.17e-04   -
310 1.000112347 1.12e-04 1.447
320 1.000107296 1.07e-04 1.448
330 1.000102615 1.02e-04 1.449
340 1.000098267 9.82e-05 1.449
350 1.000094219 9.42e-05 1.450
360 1.000090443 9.04e-05 1.451
370 1.000086914 8.69e-05 1.452
380 1.000083610 8.36e-05 1.453
390 1.000080511 8.05e-05 1.454
400 1.000077600 7.76e-05 1.455
--------------- ------------- ---------- -----------------------------------------
: Hypocycloid: The approximation $b^{(n)}$ of $\text{cap}(H_3)=1$.[]{data-label="tab:cap-hypo"}
Based on the important applications of the ratio asymptotics outlined above (see also Section \[sec:ftrr\]) we reckon that the solution of the following problem will be of significance in developing further the theory of orthogonal polynomials in the complex plane.
Characterize all the measures of orthogonality $\mu$, with $\text{supp}(\mu)=K$, for which it holds: $$\label{eq:Ratdef}
\lim_{n\to\infty}\frac{\lambda_{n+1}(\mu)}{\lambda_n(\mu)}=\frac{1}{\textup{cap}(K)}.$$
Since the property $\mu\in\,$**Reg** is equivalent to $$\label{eq:Regdef}
\lim_{n\to\infty}\lambda_n^{1/n}(\mu)=\frac{1}{\textup{cap}(K)};$$ see [@StTobo Thm 3.1.1], it follows that the measures satisfying (\[eq:Ratdef\]) form a subclass of **Reg**. We note, however, that there are known instances where the limit points of the sequence $\{\lambda_{n+1}(\mu)/\lambda_n(\mu)\}_{n\in\mathbb{N}}$ constitute a finite set, as in the case of Bergman polynomials defined on a system of disjoint symmetric lemniscates (see [@GPSS §7]), or where they fill up a whole interval, as in the case of Szegő polynomials defined on a system of disjoint smooth Jordan curves (see [@Wi69 Thm 9.2]).
Finite recurrence relations and Dirichlet problems {#sec:ftrr}
--------------------------------------------------
\[def:ftrr\] We say that the polynomials $\{p_n\}_{n=0}^\infty$ satisfy an $(M+1)$-*term recurrence relation*, if for any $n\geq M-1$, $$zp_n(z)=a_{n+1,n}p_{n+1}(z) + a_{n, n} p_n(z) + \cdots +
a_{n-M+1, n} p_{n-M+1}(z).$$
A direct application of the ratio asymptotics for $\{p_n\}_{n\in\mathbb{N}}$, given by Corollary \[cor:ratiopn\], leads to the next two theorems. These refine, respectively, Theorems 2.2 and 2.1 of [@KhSt], in the sense that they weaken the $C^2$-smoothness assumption on $\Gamma$. For their proof, it is sufficient to note that: (a) the two theorems are equivalent to each other and (b) the reason for assuming that $\Gamma$ is $C^2$-smooth in Theorem 2.2 of [@KhSt] was to ensure the ratio asymptotics of the Bergman polynomials; see [@KhSt §4 Rem. (i)].
\[thm:ftrr\] Assume that $\Gamma$ is piecewise analytic without cusps. If the Bergman polynomials $\{p_n\}_{n=0}^\infty$ satisfy an $(M+1)$-term recurrence relation, with some $M\ge 2$, then $M=2$ and $\Gamma$ is an ellipse.
\[thm:DP\] Let $G$ be a bounded simply-connected domain with Jordan boundary $\Gamma$, which is piecewise analytic without cusps. Assume that there exists a positive integer $M:=M(G)$ with the property that the Dirichlet problem $$\label{eq:DP}
\left\{
\begin{alignedat}{2}
&\Delta u=0\quad &&\text{in} \ \ G, \\
&u=\overline{z}^mz^n\quad&&\text{on}\ \ \Gamma,
\end{alignedat}
\right.$$ has a polynomial solution of degree $\le m(M-1)+n$ in $z$ and of degree $\le n(M-1)+m$ in $\overline{z}$, for all positive integers $m$ and $n$. Then $\Gamma$ is an ellipse and $M=2$.
Theorem \[thm:DP\] confirms a special case of the so-called Khavinson and Shapiro conjecture; see [@KL] for results reporting on the recent progress in this direction. We note that the equivalence between the two properties “the Bergman polynomials of $G$ satisfy a finite-term recurrence relation” and “any Dirichlet problem in $G$, with polynomial data, possesses a polynomial solution” was first established in [@PuSt].
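For orientation, consider the simplest case of an ellipse, the unit circle (an illustration of ours, not taken from the source): on $|z|=1$ we have $\overline{z}=1/z$, so the boundary data reduce to a single harmonic monomial,

```latex
\overline{z}^{\,m} z^{\,n}\big|_{\Gamma} = z^{\,n-m} \quad (n\ge m),
\qquad\text{so}\qquad u(z)=z^{\,n-m},
```

which is harmonic (indeed holomorphic) in $G$, of degree $n-m\le m(M-1)+n$ in $z$ and of degree $0\le n(M-1)+m$ in $\overline{z}$, so the bounds of Theorem \[thm:DP\] hold with $M=2$ (with room to spare); the case $m>n$ is symmetric, with $u(z)=\overline{z}^{\,m-n}$.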
Shape recovery from partial measurements {#subsec:shape-rec}
----------------------------------------
Given a finite $(n+1)\times(n+1)$ section $$\label{eq:trun-mom}
[\mu_{m,k}]_{m,k=0}^n,\quad\mu_{m,k}:=\int_G z^m\overline{z}^k
dA(z),$$ of the infinite complex moment matrix $[\mu_{m,k}]_{m,k=0}^\infty$, associated with a bounded Jordan domain $G$, the *Truncated Moments Problem* consists of computing an approximation $\Gamma_n$ to its boundary $\Gamma$, by using only the data provided by (\[eq:trun-mom\]). Regarding existence and uniqueness, we note a result of Davis and Pollak [@DaPo] stating that the infinite matrix $[\mu_{m,k}]_{m,k=0}^\infty$ defines uniquely the curve $\Gamma$. Corollary \[cor:ratiopn\] and the discussion in Section \[subsec:ArnoldiGS\], regarding the stability of the Arnoldi GS process, suggest the following algorithm:
[Reconstruction from Moments Algorithm]{}
1. Use the Arnoldi GS process to construct the Bergman polynomials $\{p_k\}_{k=0}^n$ from the given complex moments $\mu_{mk}$, $m,k=0,1,\ldots,n$.
2. Compute the coefficients of the Laurent series expansion of the ratio $$\label{LauSerP}
\Phi_n(z):=\sqrt{\frac{n}{n+1}}\frac{p_n(z)}{p_{n-1}(z)}=\gamma^{(n)}z+\gamma_0^{(n)}+\frac{\gamma_1^{(n)}}{z}+
\frac{\gamma_2^{(n)}}{z^2}
+\cdots.$$
3. Revert the series (\[LauSerP\]) using the explicit method described in [@Fa-Ol99 p. 764]. This leads to: $$b^{(n)}:=1/\gamma^{(n)}=\sqrt{\frac{n+1}{n}}\frac{\lambda_{n-1}}{\lambda_n},\quad
b_0^{(n)}:=-b^{(n)}\gamma_0^{(n)}/\gamma^{(n)}$$ and $$\Psi_n(w):=b^{(n)}w+b_0^{(n)}+\frac{b_1^{(n)}}{w}+
\frac{b_2^{(n)}}{w^2}+\frac{b_3^{(n)}}{w^3}+\cdots+\frac{b_n^{(n)}}{w^n},$$ where $-k\, b_k^{(n)}/b^{(n)}$, $k=1,2,\ldots,n$, is the coefficient of $1/z$ in the Laurent series expansion of $\left[\Phi_n(z)/\gamma^{(n)}\right]^k$ about infinity.
4. Approximate $\Gamma$ by $\Gamma_n:=\{z:\,z=\Psi_n(e^{it}),\,t\in[0,2\pi]\,\}$.
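Steps 1 and 3 are easy to prototype. Below is a minimal Python sketch of ours (not code from the source; all function names are ours), run on the unit disk, where the moments $\mu_{m,k}=\delta_{m,k}\,\pi/(m+1)$ are known explicitly; since $\text{cap}(\mathbb{D})=1$, the capacity estimate $b^{(n)}$ should equal $1$ up to rounding.

```python
from math import pi, sqrt

def disk_moments(n):
    # mu[m][k] = int_{|z|<1} z^m * conj(z)^k dA = pi/(m+1) if m == k, else 0
    return [[pi / (m + 1) if m == k else 0.0 for k in range(n + 1)]
            for m in range(n + 1)]

def inner(a, b, mu):
    # area inner product of polynomials given by coefficient lists a, b
    return sum(a[m] * b[k] * mu[m][k]
               for m in range(len(a)) for k in range(len(b)))

def bergman_polynomials(mu, n):
    # Step 1: Arnoldi Gram-Schmidt applied to 1, z*p_0, z*p_1, ...
    ps = [[1.0 / sqrt(mu[0][0])] + [0.0] * n]
    for _ in range(n):
        q = [0.0] + ps[-1][:-1]          # multiply the latest p by z
        for p in ps:                      # orthogonalize against p_0, ..., p_k
            c = inner(q, p, mu)
            q = [qj - c * pj for qj, pj in zip(q, p)]
        norm = sqrt(inner(q, q, mu))
        ps.append([qj / norm for qj in q])
    return ps

n = 8
ps = bergman_polynomials(disk_moments(n), n)
lam = [ps[k][k] for k in range(n + 1)]           # leading coefficients lambda_k
b_n = sqrt((n + 1) / n) * lam[n - 1] / lam[n]    # step 3: capacity estimate
```

For real moment data all coefficients stay real, which is why the Gram-Schmidt step above omits conjugation; for a general domain the moments and coefficients are complex, and `b[k]` must be conjugated in `inner`.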
For applications to the 2D image reconstruction arising from tomographic data we refer to [@GPSS]. Here we highlight the performance of the reconstruction algorithm by applying it to the recovery of three shapes, where the defining curves come from different classes: one analytic, one with corners and one with cusps, for which the theory of Section \[sec:ratio\] does not apply. In each case we start by computing a finite set of complex moments and then follow the four steps of the algorithm. We note that in all three examples the complex moments are known explicitly.
In Figures \[fig:ellipse\]–\[fig:cupsed-hypo\] we depict the computed approximation $\Gamma_n$ against the original curve $\Gamma$. Note that in the first two plots the fitting of the two curves is not far from being perfect. Even in the cusped case, pictured in Figure \[fig:cupsed-hypo\], the fitting is remarkably close, despite the low degree of the moment matrix used.
![Recovery of an ellipse, with $n=3$[]{data-label="fig:ellipse"}](reellipse.eps)
In Figure \[fig:ellipse\] we illustrate the reconstruction of an ellipse by using only the first 16 moments in (\[eq:trun-mom\]), i.e., by taking $n=3$.
![Recovery of a square, with $n=16$.[]{data-label="fig:square"}](resquare.eps){width="0.35\linewidth"}
In Figure \[fig:square\] we reconstruct a square by using the complex moments up to degree $16$. We have chosen $n=16$ so that the result can be compared with the recovery of a square shown on page 1067 of [@GHMP], obtained using the *Exponential Transform Algorithm* of the cited work, another reconstruction algorithm based on moments. Of course, for conclusive results regarding the comparison of the two algorithms, more experiments need to be conducted.
[![Recovery of a 3-cusped hypocycloid, with $n=10$ (left) and $n=20$ (right).[]{data-label="fig:cupsed-hypo"}](rehypocyca.eps "fig:")]{}
[![Recovery of a 3-cusped hypocycloid, with $n=10$ (left) and $n=20$ (right).[]{data-label="fig:cupsed-hypo"}](rehypocyc.eps "fig:")]{}
In order to show that our reconstruction algorithm works equally well for domains where the theory above does not apply, we use it for the recovery of the boundary $H_3$ of the 3-cusped hypocycloid defined by (\[eq:Psi-hypom\]) with $m=3$. The application of the algorithm with $n=10$ and $n=20$ is depicted in Figure \[fig:cupsed-hypo\].
Concluding, we note that the above algorithm is not suited for reconstructing unions of disjoint Jordan domains, in contrast to the *Archipelagos Reconstruction Algorithm* of [@GPSS]. On the other hand, the simplicity of the construction and the proximity of the two curves $\Gamma_n$ and $\Gamma$, shown in the figures, suggest that the proposed algorithm is more efficient when it comes to recovering single Jordan domains.
Coefficient estimates {#subsec:coef-esti}
---------------------
We recall the expansion (\[eq:Psi\]) of the inverse conformal mapping $\Psi:\Delta\to\Omega$ and note that $\Psi(w)/b$ belongs to the well-known class $\Sigma$ of univalent functions; see, e.g., [@Po75] and [@Du83].
The following result settles, in a certain sense, the associated coefficient problem for an important subclass of $\Sigma$. We refer to [@Du83 §4.9] for a comprehensive discussion of the coefficient problem for other subclasses of $\Sigma$.
\[th:bn-decay\] Assume that $\Gamma$ is piecewise analytic without cusps and let $\omega\pi$, $0<\omega<2,$ denote its smallest exterior angle. Then, there holds that $$\label{eq:bn-decay-1}
|b_n|\le c_1(\Gamma)\frac{1}{n^{1+\omega}}, \quad n\in\mathbb{N},$$ and the order $1+\omega$ of $1/n$ is sharp in the sense that for certain $\omega$, there exists a Jordan curve $\Gamma$ of the same class, such that $$\label{eq:bn-decay-2}
|b_n|\ge c_2(\Gamma)\frac{1}{n^{1+\omega}}, \quad\text{for infinitely many } n.$$
The estimate (\[eq:bn-decay-1\]) can be established by means of the tools developed in Section \[proofs-pa\]. More precisely, the following array of equations can be readily verified by using (\[eq:c\_1-2\]) and arguing as in Section \[sec:proof-thm:HnOmgae\]: $$\begin{aligned}
\label{eq:nbn-esti}
-nb_n
&=\frac{1}{2\pi i}\,\int_{L_R}\Phi^{n}(z)dz=\frac{1}{2\pi i}\,\int_{\Gamma^\prime}\Phi^{n}(z)dz\nonumber\\
&=\frac{1}{2\pi i}\sum_j\int_{l_j^1\cup l_j^2}{\Phi^n(\zeta)}\,d\zeta
+\frac{1}{2\pi i}\int_{\tau}{\Phi^n(\zeta)}\,d\zeta \nonumber\\
&=\frac{1}{2\pi i}\sum_j\int_{l_j^1\cup l_j^2}{\Phi^n(\zeta)}\,d\zeta
+O(\rho^n),\end{aligned}$$ for some $\rho:=\rho(\Gamma)<1$.
Hence, we only need to estimate the integral $$I_j^i:=\int_{l_j^i}{\Phi^n(\zeta)}\,d\zeta.$$ This can be done by working as in deriving (\[eq:Iji\]). Indeed, by using the estimate $$|\Phi(\zeta)|\le 1-cs^{1/\omega_j}<\exp(-cs^{1/\omega_j}),\quad \zeta\in l_j^i,$$ we obtain $$|I_j^i|\le c\int_0^\infty \textup{e}^{-cns^{1/\omega_j}}\,ds={c\,\omega_j}\Gamma(\omega_j)\,\frac{1}{n^{\omega_j}},$$ and the required result (\[eq:bn-decay-1\]) follows from (\[eq:nbn-esti\]), with $\omega:=\min_{j}\{\omega_j\}$.
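The last display is a Gamma integral after the substitution $t=ns^{1/\omega_j}$ (with constants absorbed into $c$). As a quick numerical sanity check of ours, the clean form of the identity, $\int_0^\infty \mathrm{e}^{-ns^{1/\omega}}\,ds=\omega\,\Gamma(\omega)/n^{\omega}$, can be verified by quadrature, here for $n=10$ and $\omega=3/2$:

```python
from math import exp, gamma

def tail_integral(n, omega, upper=50.0, steps=400000):
    # trapezoidal rule for int_0^upper exp(-n * s^(1/omega)) ds;
    # the integrand is negligible beyond `upper` for these parameters
    h = upper / steps
    total = 0.5 * (1.0 + exp(-n * upper ** (1.0 / omega)))
    for i in range(1, steps):
        total += exp(-n * (i * h) ** (1.0 / omega))
    return total * h

n, omega = 10, 1.5
numeric = tail_integral(n, omega)
exact = omega * gamma(omega) / n ** omega   # = Gamma(omega + 1) / n^omega
```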
An extremal domain, where (\[eq:bn-decay-2\]) holds true, is provided by the case where $\Gamma$ is the canonical square $\Pi_4$, with vertices at $1$, $i$, $-1$ and $-i$. In this case $\omega=3/2$, and by making use of the rotational symmetry of $\Pi_4$ it is easily seen that the Schwarz-Christoffel formula for the normalized conformal map $\Psi:\Delta\to\Omega$ takes the following expression: $$\begin{aligned}
\Psi(w)&=\text{cap}(\Pi_4)\int\left(1-\frac{1}{w}\right)^{\omega-1}\left(1-\frac{i}{w}\right)^{\omega-1}
\left(1+\frac{1}{w}\right)^{\omega-1}\left(1+\frac{i}{w}\right)^{\omega-1}dw\\
&=\text{cap}(\Pi_4)\int\left(1-\frac{1}{w^4}\right)^{\omega-1}dw,\end{aligned}$$
or, more explicitly, $$\Psi(w)=\text{cap}(\Pi_4)\left\{w+\sum_{k=1}^\infty(-1)^{k+1}\binom{a}{k}\frac{1}{4k-1}\frac{1}{w^{4k-1}}\right\},$$ where $a:=\omega-1=1/2$, and $\binom{a}{k}$ denotes the binomial coefficient. Hence, for $n=4k-l$, $k\in\mathbb{N}$ and $l\in\{0,1,2,3\}$, we have $$\label{eq:bn-square}
b_n= \left\{
\begin{array}{ll}
\text{cap}(\Pi_4)(-1)^{k+1}\binom{a}{k}\frac{1}{n}, &\text{if } l=1,\\
0, & \text{if } l\ne 1.
\end{array}
\right.$$ Now, using the properties of the Gamma function $\Gamma(z)$, it is easy to verify that $$\binom{a}{k}=\frac{(-1)^k}{\Gamma(-a)}\frac{\Gamma(k-a)}{\Gamma(k+1)}
=\frac{(-1)^k}{\Gamma(-a)}\left\{\frac{1}{k^{1+a}}+O\left(\frac{1}{k^{2+a}}\right)\right\},$$ and this, in conjunction with (\[eq:bn-square\]), provides the required behaviour $$\label{eq:bn_equiv}
|b_n|\asymp\frac{1}{n^{1+\omega}},$$ for $n=3,7,11,\ldots$.
Clearly, the above argument applies to any canonical polygon $\Pi_m$, with $m$-sides. In particular, (\[eq:bn\_equiv\]) holds true for any $\Pi_m$, $m\ge 3$, with $\omega=(m+2)/m$ and $n=km-1$, $k\in\mathbb{N}$. Thus, any $\Pi_m$ can serve as an extremal curve for the estimate (\[eq:bn-decay-1\]).
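The rate (\[eq:bn\_equiv\]) is also easy to observe numerically (a check of ours, using the stated value of $\text{cap}(\Pi_4)$): since $|\binom{1/2}{k}|\sim k^{-3/2}/|\Gamma(-1/2)|$, the scaled quantity $|b_n|\,n^{1+\omega}$, with $\omega=3/2$ and $n=4k-1$, should approach $\text{cap}(\Pi_4)\,4^{3/2}/|\Gamma(-1/2)|$:

```python
from math import gamma

cap = 0.834626841674072              # stated value of cap(Pi_4)

def b_abs(k):
    # |b_n| for n = 4k - 1, from eq:bn-square: cap * |C(1/2, k)| / n
    c = 1.0
    for j in range(1, k + 1):
        c *= (0.5 - (j - 1)) / j     # binomial C(1/2, j) by recurrence
    return cap * abs(c) / (4 * k - 1)

# |b_n| * n^(5/2) along n = 4k - 1 should stabilize near `limit`
scaled = [b_abs(k) * (4 * k - 1) ** 2.5 for k in (500, 1000, 2000)]
limit = cap * 4 ** 1.5 / abs(gamma(-0.5))
```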
We note that, since $\Psi(1)=1$, it is not difficult to obtain the following expression for the capacity of $\Pi_4$, using the properties of hypergeometric functions: $$\label{eq:cap-square}
\text{cap}(\Pi_4)=\frac{\sqrt{2}\,\Gamma^2(1/4)}{4\pi^{3/2}}=0.834\,626\,841\,674\,072\cdots.$$
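The stated decimal can be double-checked numerically (a verification of ours) in two independent ways: from the normalization $\Psi(1)=1$, which gives $1/\text{cap}(\Pi_4)=1+\sum_{k\ge 1}(-1)^{k+1}\binom{1/2}{k}/(4k-1)$, and from the classical closed form for the capacity of a square of side length $L$, namely $\Gamma^2(1/4)\,L/(4\pi^{3/2})$, with $L=\sqrt{2}$ for $\Pi_4$:

```python
from math import gamma, pi, sqrt

# 1/cap = 1 + sum_{k>=1} (-1)^(k+1) * C(1/2, k) / (4k - 1), from Psi(1) = 1
c, s = 1.0, 1.0
for k in range(1, 200001):
    c *= (0.5 - (k - 1)) / k              # C(1/2, k) by recurrence
    s += (-1) ** (k + 1) * c / (4 * k - 1)
cap_series = 1.0 / s

# Classical closed form: capacity of a square of side L is Gamma(1/4)^2 * L / (4*pi^(3/2))
cap_formula = sqrt(2) * gamma(0.25) ** 2 / (4 * pi ** 1.5)
```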
In the case where $\Gamma$ is allowed to have cusps we recall, from Section 1, the following estimate of Gaier [@Ga99 §4.1]: $$|b_n|\le c(\Gamma)\frac{1}{n}, \quad n\in\mathbb{N}.$$ This shows that the arguments in the first part of the proof of the theorem can be amended to cover the case of zero exterior angles but not of angles of opening $2\pi$.
A connection with Operator Theory {#subsec:OT}
---------------------------------
In a different reading, Theorem \[th:bn-decay\] brings in a connection with Operator Theory. To see this, we consider the Toeplitz matrix $T_\Psi$ defined by the continuous extension of $\Psi(w)$ to the unit circle $\mathbb{T}:=\{w:|w|=1\}$. By this we mean the matrix $$\label{eq:Tmat}
{T_\Psi}:=\left[\begin{array}{cccccc}
b_{0}& b_{1}& b_{2}& b_{3}& b_{4}&\cdots\\
b& b_{0}& b_{1}& b_{2}& b_{3}&\cdots \\
0 & b& b_{0}& b_{1}& b_{2}&\cdots \\
0 & 0 & b& b_{0}& b_{1}&\cdots \\
0 & 0 & 0 & b& b_{0}&\cdots \\
\vdots& \vdots& \vdots& \ddots& \ddots&\ddots
\end{array}\right],$$ defined by the coefficients of $\Psi(w)$ in its Laurent series expansion (\[eq:Psi\]). If the boundary $\Gamma$ is piecewise analytic without cusps, then Theorem \[th:bn-decay\] implies that $\sum_{n=0}^\infty|b_n|<\infty$, and hence that the symbol $\Psi$ of the Toeplitz matrix $T_\Psi$ belongs to the Wiener algebra; see, e.g., [@Bo-Gr05 §1.2–1.5]. This property leads to very interesting conclusions. For instance, to the conclusion that $T_\Psi$ defines a bounded linear operator on the Hilbert space $l^2$ and that $$\label{eq:essspT}
\sigma_{ess}(T_\Psi)=\Gamma,$$ where we use $\sigma_{ess}(L)$ to denote the *essential spectrum* of a bounded linear operator $L$.
Consider next the multiplication by $z$ operator $\mathcal{M}:f\to zf$ (also known as the *Bergman shift operator*), defined on the Hilbert space $L_a^2(G)$. We note that $\mathcal{M}$ is a bounded linear operator on $L_a^2(G)$, such that $$\sigma_{ess}(\mathcal{M})=\Gamma;$$ see [@AxCoMcDo]. Hence, from (\[eq:essspT\]) it follows that $$\label{eq:esssp=}
\sigma_{ess}(\mathcal{M})=\sigma_{ess}(T_\Psi).$$ In a forthcoming paper [@SaSt2012], we employ results and tools from the present work to show that the connection between the two operators $\mathcal{M}$ and $T_\Psi$ is much more substantial.
In order to emphasize the importance of the Bergman shift operator $\mathcal{M}$ in the theory of orthogonal polynomials, we note that the proof of Theorem \[thm:ftrr\] relies heavily on the properties of $\mathcal{M}$; see [@PuSt] and [@KhSt]. Furthermore, we note that the stable Arnoldi GS process is based on the use of the polynomial $zp_{n-1}$, i.e., on the application of $\mathcal{M}$ to $p_{n-1}$.
The decay of the Bergman polynomials in $G$
-------------------------------------------
Here we refine the following estimate, which was derived in [@MSS p. 530] under the assumption that $\Gamma$ is piecewise analytic without cusps: For any compact subset $B$ of $G$ and for any $n\in\mathbb{N}$, it holds that $$\label{eq:pndecay-MSS}
|p_n(z)|\le\ c_1(\Gamma,B)\,\frac{1}{n^{s}},\quad z\in B,$$ where $$\label{eq:pndecay-MSSs}
s:=\min_{1\le j\le N}\{\omega_j/(2-\omega_j)\}.$$ (We use $c_j(\Gamma,B)$ to denote positive constants that depend only on $\Gamma$ and $B$.) Note that $s\to 0$, if $\omega_j\to 0$, for some $j$, and hence for such cases, (\[eq:pndecay-MSS\]) predicts a very slow decay for $p_n(z)$. The next theorem, however, shows that this decay cannot be slower than $O(1/\sqrt{n})$.
\[lem:pnG\] Assume that $\Gamma$ is piecewise analytic without cusps. Then, for any $n\in\mathbb{N}$, it holds that $$\label{eq:pnG}
|p_n(z)|\le{c_2(\Gamma,B)}\,\frac{1}{n^\sigma},\quad z\in B,$$ where $\sigma:=\max\{1/2,s\}$.
By using Cauchy’s formula for the derivative in (\[eq:FnIntRep\]) and by working as in the proof of Theorem \[th:bn-decay\], it is readily seen that $$\label{eq:GaierFn}
|F_{n+1}^\prime(z)|\le{c_3(\Gamma,B)}\,\frac{1}{n^\omega},\quad z\in B,$$ where $\omega\pi$ ($0<\omega<2$) is the smallest exterior angle of $\Gamma$. This, in view of (\[eq:GnFn+1\]), gives immediately $$\label{eq:Gnz}
|G_n(z)|\le {c_4(\Gamma,B)}\,\frac{1}{n^{1+\omega}},\quad z\in B.$$
Next, since $$|q_{n-1}(z)|\le\frac{\|q_{n-1}\|_{L^2(G)}}{\sqrt{\pi}\,{\operatorname{dist}(z,\Gamma)}},\quad z\in G;$$ see, e.g., [@Gabook87 p. 4], we obtain from Corollary \[cor:qn-1L2\] the estimate $$\label{eq:qnz}
|q_{n-1}(z)|\le {c_5(\Gamma,B)}\,\frac{1}{n},\quad z\in B.$$
Finally, from (\[eq:qndef\]), we have that $$|p_n(z)|\le\frac{\lambda_n}{\gamma^{n+1}}\left\{|G_n(z)|+|q_{n-1}(z)|\right\},$$ and this in view of (\[eq:lam/gam\])–(\[eq:lam/gam-2\]) and (\[eq:Gnz\])–(\[eq:qnz\]) yields $$\label{eq:pn-order-half}
|p_n(z)|\le c_6(\Gamma,B)\,\frac{1}{n^{1/2}},\quad z\in B.$$ The result of the theorem follows by combining (\[eq:pndecay-MSS\]) with (\[eq:pn-order-half\]).
Regarding sharpness of the exponent $\sigma$ of $n$ in (\[eq:pnG\]), we recall the following result of [@MSS p. 531]: If not all interior angles of $\Gamma$ are of the form $\pi/m$, $m\in\mathbb{N}$, and if we disregard in the definition of $s$ in (\[eq:pndecay-MSSs\]) angles of this form, should they exist, then for any $\varepsilon >0$, there is a subsequence $\mathcal{N}_\varepsilon\subset\mathbb{N}$, such that for any $n\in\mathcal{N}_\varepsilon$: $$|p_n(z)|\ge c_7(\Gamma,B)\,\frac{1}{n^{s+1/2+\varepsilon}},\quad z\in B.$$
L. V. Ahlfors, *Quasiconformal reflections*, Acta Math. **109** (1963), 291–301.
, *Lectures on quasiconformal mappings*, Manuscript prepared with the assistance of Clifford J. Earle, Jr. Van Nostrand Mathematical Studies, No. 10, D. Van Nostrand Co., Inc., Toronto, Ont.-New York-London, 1966.
V. V. Andrievskii, V. I. Belyi, and V. K. Dzjadyk, *Conformal invariants in constructive theory of functions of complex variable*, Advanced Series in Mathematical Science and Engineering, vol. 1, World Federation Publishers Company, Atlanta, GA, 1995.
V. V. Andrievskii and H.-P. Blatt, *Discrepancy of signed measures and polynomial approximation*, Springer Monographs in Mathematics, Springer-Verlag, New York, 2002.
W. E. Arnoldi, *The principle of minimized iteration in the solution of the matrix eigenvalue problem*, Quart. Appl. Math. **9** (1951), 17–29.
K. Astala, T. Iwaniec, and G. Martin, *Elliptic partial differential equations and quasiconformal mappings in the plane*, Princeton Mathematical Series, vol. 48, Princeton University Press, Princeton, NJ, 2009.
S. Axler, J. B. Conway, and G. McDonald, *Toeplitz operators on [B]{}ergman spaces*, Canad. J. Math. **34** (1982), no. 2, 466–483.
L. Baratchart, A. Mart[í]{}nez-Finkelshtein, D. Jimenez, D. S. Lubinsky, H. N. Mhaskar, I. Pritsker, M. Putinar, N. Stylianopoulos, V. Totik, P. Varju, and Y. Xu, *Open problems in constructive function theory*, Electron. Trans. Numer. Anal. **25** (2006), 511–525 (electronic).
V. I. Bely[ĭ]{}, *Conformal mappings and approximation of analytic functions in domains with quasiconformal boundary*, Math. USSR Sb. **31** (1977), 289–317.
A. B[ö]{}ttcher and S. M. Grudsky, *Spectral properties of banded [T]{}oeplitz matrices*, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2005.
T. Carleman, *Über die [A]{}pproximation analytisher [F]{}unktionen durch lineare [A]{}ggregate von vorgegebenen [P]{}otenzen*, Ark. Mat., Astr. Fys. **17** (1923), no. 9, 215–244.
J. Clunie, *On schlicht functions*, Ann. of Math. (2) **69** (1959), 511–519.
P. Davis and H. Pollak, *On the analytic continuation of mapping functions*, Trans. Amer. Math. Soc. **87** (1958), 198–225.
P. Deift and X. Zhou, *A steepest descent method for oscillatory [R]{}iemann-[H]{}ilbert problems. [A]{}symptotics for the [MK]{}d[V]{} equation*, Ann. of Math. (2) **137** (1993), no. 2, 295–368.
P. L. Duren, *Theory of [$H\sp{p}$]{} spaces*, Pure and Applied Mathematics, Vol. 38, Academic Press, New York, 1970.
, *Univalent functions*, Grundlehren der Mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\], vol. 259, Springer-Verlag, New York, 1983.
B. R. Fabijonas and F. W. J. Olver, *On the reversion of an asymptotic expansion and the zeros of the [A]{}iry functions*, SIAM Rev. **41** (1999), no. 4, 762–773 (electronic).
A. S. Fokas, A. R. Its, and A. V. Kitaev, *Discrete [P]{}ainlevé equations and their appearance in quantum gravity*, Comm. Math. Phys. **142** (1991), no. 2, 313–344.
, *The isomonodromy approach to matrix models in [$2$]{}[D]{} quantum gravity*, Comm. Math. Phys. **147** (1992), no. 2, 395–430.
D. Gaier, *Lectures on complex approximation*, Birkhäuser Boston Inc., Boston, MA, 1987.
, *The [F]{}aber operator and its boundedness*, J. Approx. Theory **101** (1999), no. 2, 265–277.
, *On the decrease of [F]{}aber polynomials in domains with piecewise analytic boundary*, Analysis (Munich) **21** (2001), no. 2, 219–229.
W. B. Gragg and L. Reichel, *On the application of orthogonal polynomials to the iterative solution of linear systems of equations with indefinite or non-[H]{}ermitian matrices*, Linear Algebra Appl. **88/89** (1987), 349–371.
B. Gustafsson, C. He, P. Milanfar, and M. Putinar, *Reconstructing planar domains from their moments*, Inverse Problems **16** (2000), no. 4, 1053–1070.
B. Gustafsson, M. Putinar, E. Saff, and N. Stylianopoulos, *Bergman polynomials on an archipelago: Estimates, zeros and shape reconstruction*, Advances in Math. **222** (2009), 1405–1460.
P. Henrici, *Applied and computational complex analysis. [V]{}ol. 3*, Pure and Applied Mathematics (New York), John Wiley & Sons Inc., New York, 1986.
E. R. Johnston, *A study in polynomial approximation in the complex domain*, Ph.D. thesis, University of [M]{}innesota, March 1954.
D. Khavinson, *Remarks concerning boundary properties of analytic functions of [$E\sb{p}$]{}-classes*, Indiana Univ. Math. J. **31** (1982), no. 6, 779–787.
D. Khavinson and E. Lundberg, *The search for singularities of solutions to the [D]{}irichlet problem: [R]{}ecent developments*, CRM Proceedings and Lecture Notes **51** (2010), 121–132.
D. Khavinson and N. Stylianopoulos, *Recurrence relations for orthogonal polynomials and algebraicity of solutions of the [D]{}irichlet problem*, Around the research of [V]{}ladimir [M]{}az’ya. [II]{}, Int. Math. Ser. (N. Y.), vol. 12, Springer, New York, 2010, pp. 219–228.
R. S. Lehman, *Development of the mapping function at an analytic corner*, Pacific J. Math. **7** (1957), 1437–1449.
O. Lehto and K. I. Virtanen, *Quasiconformal mappings in the plane*, second ed., Springer-Verlag, New York, 1973.
D. S. Lubinsky, *A new approach to universality limits involving orthogonal polynomials*, Ann. of Math. (2) **170** (2009), no. 2, 915–939.
M. Lytrides and N. S. Stylianopoulos, *Error analysis of the [B]{}ergman kernel method with singular basis functions*, Comput. Methods Funct. Theory **11** (2011), no. 2, 487–526.
V. V. Maymeskul, E. B. Saff, and N. S. Stylianopoulos, *[$L\sp
2$]{}-approximations of power and logarithmic functions with applications to numerical conformal mapping*, Numer. Math. **91** (2002), no. 3, 503–542.
N. Papamichael and M. K. Warby, *Stability and convergence properties of [B]{}ergman kernel methods for numerical conformal mapping*, Numer. Math. **48** (1986), no. 6, 639–669.
Ch. Pommerenke, *Univalent functions*, Vandenhoeck & Ruprecht, Göttingen, 1975, With a chapter on quadratic differentials by Gerd Jensen, Studia Mathematica/Mathematische Lehrb[ü]{}cher, Band XXV.
, *Boundary behaviour of conformal maps*, Fundamental Principles of Mathematical Sciences, vol. 299, Springer-Verlag, Berlin, 1992.
M. Putinar and N. Stylianopoulos, *Finite-term relations for planar orthogonal polynomials*, Complex Anal. Oper. Theory **1** (2007), no. 3, 447–456.
P. C. Rosenbloom and S. E. Warschawski, *Approximation by polynomials*, Lectures on functions of a complex variable, The University of Michigan Press, Ann Arbor, 1955, pp. 287–302.
E. B. Saff, *Orthogonal polynomials from a complex perspective*, Orthogonal polynomials (Columbus, OH, 1989), Kluwer Acad. Publ., Dordrecht, 1990, pp. 363–393.
E. B. Saff and N. S. Stylianopoulos, *Asymptotics for [H]{}essenberg matrices for the [B]{}ergman shift operator on [J]{}ordan regions*, preprint.
E. B. Saff and V. Totik, *Logarithmic potentials with external fields*, Springer-Verlag, Berlin, 1997.
B. Simon, *Orthogonal polynomials on the unit circle. [P]{}art 1*, American Mathematical Society Colloquium Publications, vol. 54, American Mathematical Society, Providence, RI, 2005, Classical theory.
, *Orthogonal polynomials on the unit circle. [P]{}art 2*, American Mathematical Society Colloquium Publications, vol. 54, American Mathematical Society, Providence, RI, 2005, Spectral theory.
H. Stahl and V. Totik, *General orthogonal polynomials*, Cambridge University Press, Cambridge, 1992.
N. Stylianopoulos, *Strong asymptotics for [B]{}ergman polynomials over domains with corners*, C. R. Math. Acad. Sci. Paris **348** (2010), no. 1-2, 21–24.
P. K. Suetin, *Fundamental properties of polynomials orthogonal on a contour*, Uspehi Mat. Nauk **21** (1966), no. 2 (128), 41–88.
, *Polynomials orthogonal over a region and [B]{}ieberbach polynomials*, American Mathematical Society, Providence, R.I., 1974.
G. Szeg[ő]{}, *Orthogonal polynomials*, fourth ed., American Mathematical Society, Providence, R.I., 1975, American Mathematical Society, Colloquium Publications, Vol. XXIII.
J. M. Taylor, *The condition of [G]{}ram matrices and related problems*, Proc. Roy. Soc. Edinburgh Sect. A **80** (1978), no. 1-2, 45–56.
H. Widom, *Polynomials associated with measures in the complex plane*, J. Math. Mech. **16** (1967), 997–1013.
, *Extremal polynomials associated with a system of curves in the complex plane*, Advances in Math. **3** (1969), 127–232 (1969).
---
abstract: 'Containing many classic optimization problems, the family of vertex deletion problems has an important position in algorithm and complexity study. The celebrated result of Lewis and Yannakakis gives a complete dichotomy of their complexity. It however has nothing to say about the case when the input graph is also special. This paper initiates a systematic study of vertex deletion problems from one subclass of chordal graphs to another. We give polynomial-time algorithms or proofs of NP-completeness for most of the problems. In particular, we show that the vertex deletion problem from chordal graphs to interval graphs is NP-complete.'
title: Vertex Deletion Problems on Chordal Graphs
---
Introduction {#sec:intro}
============
Generally speaking, a vertex deletion problem asks to transform an input graph to a graph in a certain class by deleting a minimum number of vertices. Many classic optimization problems belong to the family of vertex deletion problems, and their algorithms and complexity have been intensively studied. For example, the clique problem and the independent set problem are nothing but the vertex deletion problems to complete graphs and to edgeless graphs respectively. Most interesting graph properties are *hereditary*: If a graph satisfies this property, then so does every induced subgraph of it. For all the vertex deletion problems to hereditary graph classes, Lewis and Yannakakis [@lewis-80-node-deletion-np] have settled their complexity once and for all with a dichotomy result: They are either NP-hard or trivial. Thereafter algorithmic efforts were mostly focused on the nontrivial ones, and the major approaches include approximation algorithms [@lund-93-approximation-maximum-subgraph], parameterized algorithms [@cai-96-hereditary-graph-modification], and exact algorithms [@fomin-16-exact-via-monotone-local-search].
Chordal graphs make one of the most important graph classes. Together with many of its subclasses, it has played important roles in the development of structural graph theory. (We defer their definitions to the next section.) Many algorithms have been developed for vertex deletion problems to chordal graphs and its subclasses, most notably (unit) interval graphs, cluster graphs, and split graphs; see, e.g., [@fomin-15-large-induced-subgraphs; @bliznets-16-max-chordal-interval-subgraphs; @cao-16-chordal-editing; @cao-15-interval-deletion; @cao-17-unit-interval-editing; @cao-17-cluster-vertex-deletion; @cygan-13-split-vertex-deletion; @jansen-16-approximation-and-kernelization-chordal-deletion; @agrawal-16-chordal-deletion] for a partial list. After the long progress of algorithmic achievements, some natural questions arise: What is the complexity of transforming a chordal graph to a (unit) interval graph, a cluster graph, a split graph, or a member of some other subclass of chordal graphs? It is quite surprising that this type of problems has not been systematically studied, save for a few concrete results, e.g., the polynomial-time algorithms for the clique problem, the independent set problem, and the feedback vertex set problem (the object class being forests) [@gavril-72-coloring; @yannakakis-87-k-colorable-subgraph].
The same question can be asked for other pairs of source and object graph classes. The most important source classes include planar graphs [@garey-76-simplified-problems; @garey-77-rectilinear-steiner-tree; @fomin-12-f-deletion], bipartite graphs [@yannakakis-81-bipartite-node-deletion], and degree-bounded graphs [@garey-79]. As one may expect, with special properties imposed on input graphs, the problems become easier, and some of them may not remain NP-hard. Unfortunately, a clear-cut answer to them seems very unlikely, since their complexity would depend upon both the source class and the object class. Indeed, some are trivial (e.g., vertex cover on split graphs), some remain NP-hard (e.g., vertex cover on planar graphs), while some others are in P but can only be solved by very nontrivial polynomial-time algorithms (e.g., vertex cover on bipartite graphs).
Throughout the paper we write the names of graph classes in [small capitals]{}; e.g., [chordal]{} and [bipartite]{} stand for the class of chordal graphs and the class of bipartite graphs respectively. We use $\cal C$, commonly with subscripts, to denote an unspecified hereditary graph class, and use ${\cal C}_1 \rightarrow {\cal C}_2$ to denote the vertex deletion problem from class ${\cal C}_1$ to class ${\cal C}_2$:
> Given a graph $G$ in ${\cal C}_1$, one is asked for a minimum set $V_-\subseteq V(G)$ such that $G - V_-$ is in ${\cal C}_2$.
It is worth noting that ${\cal C}_2$ may or may not be a subclass of ${\cal C}_1$, and when it is not, the problem is equivalent to ${\cal C}_1 \rightarrow {\cal C}_1\cap {\cal C}_2$: Since ${\cal C}_1$ is hereditary, $G - V_-$ is necessarily in ${\cal C}_1$. For almost all classes $\cal C$, the complexity of problems [ $\rightarrow$ ]{}[$\cal C$]{} and [ $\rightarrow$ ]{}[$\cal C$]{} has been answered in a systematic manner [@lewis-80-node-deletion-np; @yannakakis-81-bipartite-node-deletion], while for most graph classes $\cal C$, the complexity of problem [ $\rightarrow$ ]{}[$\cal C$]{} has been satisfactorily determined [@garey-79].
Apart from [chordal]{}, we would also consider vertex deletion problems on its subclasses. Therefore, our purpose in this paper is a focused study on the algorithms and complexity of ${\cal C}_1 \rightarrow {\cal C}_2$ with both ${\cal C}_1$ and ${\cal C}_2$ being subclasses of [chordal]{}. Since it is generally acknowledged that the study of chordal graphs motivated the theory of perfect graphs [@hajnal-58-chordal-graphs; @berge-67-some-perfect-graphs], the importance of chordal graphs merits such a study from the aspect of structural graph theory. However, our main motivation is from the recent algorithmic progress in vertex deletion problems. It has come to our attention that to transform a graph to class ${\cal C}_1$, it is frequently convenient to first make it a member of another class ${\cal C}_2$ that contains ${\cal C}_1$ as a proper subclass, followed by an algorithm for the ${\cal C}_2 \rightarrow {\cal C}_1$ problem [@bevern-10-pivd; @cao-15-interval-deletion; @cao-16-almost-interval-recognition; @cao-17-cluster-vertex-deletion].
There being many subclasses of [chordal]{}, the number of problems fitting in our scope is quite prohibitive. The following simple observations will save us a lot of effort.
\[lem:general-polynomial\] Let ${\cal C}_1$ and ${\cal C}_2$ be two graph classes.
(1) If the ${\cal C}_1 \rightarrow {\cal C}_2$ problem can be solved in polynomial time, then so is ${\cal C} \rightarrow {\cal C}_2$ for any subclass ${\cal C}$ of ${\cal C}_1$.
(2) If the ${\cal C}_1 \rightarrow {\cal C}_2$ problem is NP-complete, then so is ${\cal C} \rightarrow {\cal C}_2$ for any superclass ${\cal C}$ of ${\cal C}_1$.
For example, the majority of our hardness results for problems [ $\rightarrow$ ]{}[$\cal C$]{} are obtained by proving the hardness of [ $\rightarrow$ ]{}[$\cal C$]{}. Indeed, this is very natural as in literature, most (NP-)hardness of problems on chordal graphs is proved on split graphs, e.g., dominating set [@bertossi-84-domination-split-bipartite], Hamiltonian path [@muller-96-hamiltonian-chordal-bipartite], and maximum cut [@bodlaender-00-maximum-cut]. The most famous exception is probably the pathwidth problem, which can be solved in polynomial time on split graphs but becomes NP-complete on chordal graphs [@gustedt-93-pathwidth-chordal]. No problem like this surfaces during our study, though we do have the following hardness result proved directly on chordal graphs, for which we have no conclusion on split graphs.
\[thm:biconnected\] Let $F$ be a biconnected chordal graph. If $F$ is not complete, then the [ $\rightarrow$ ]{} problem is NP-complete.
Another simple observation of common use to us is about complement graph classes. The *complement* $\overline G$ of a graph $G$ is defined on the same vertex set $V(G)$, where a pair of distinct vertices $u$ and $v$ is adjacent in $\overline G$ if and only if $u v \not\in E(G)$. It is easy to see that the complement of $\overline G$ is $G$ itself. In Figure \[fig:small-graphs\], for example, the net and the tent are the complements of each other. The *complement* of a graph class $\cal C$, denoted by $\overline{\cal C}$, comprises all graphs whose complements are in $\cal C$; e.g., the complement of is . A graph class $\cal C$ is *self-complementary* if it is its own complement, i.e., $G \in \cal C$ if and only if $\overline G \in \cal C$. For example, both and are self-complementary.[^1] As usual, $n$ denotes the number of vertices in the input graph. Note that we need the $n^2$ term because it takes $O(n^2)$ time to compute the complement of a graph.
\[lem:complement\] Let ${\cal C}_1$ and ${\cal C}_2$ be two graph classes. If the ${\cal C}_1 \rightarrow {\cal C}_2$ problem can be solved in $f(n)$ time, then the $\overline {\cal C}_1 \rightarrow \overline{\cal C}_2$ problem can be solved in $O(f(n) + n^2)$ time.
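To make the $n^2$ term concrete, here is a minimal sketch of the complement construction, under an illustrative encoding of our own (lists of adjacency sets over vertices $0, \ldots, n-1$, not anything prescribed by the paper); it touches all $O(n^2)$ vertex pairs:

```python
def complement(adj):
    """Complement of a simple graph given as a list of adjacency sets;
    the comprehension inspects O(n^2) vertex pairs in total."""
    n = len(adj)
    return [set(range(n)) - adj[v] - {v} for v in range(n)]

# C_4 on vertices 0-1-2-3-0; its complement is 2K_2 (the two edges 0-2 and 1-3).
c4 = [{1, 3}, {0, 2}, {1, 3}, {0, 2}]
assert complement(c4) == [{2}, {3}, {0}, {1}]
# Complementation is an involution: complementing twice returns G itself.
assert complement(complement(c4)) == c4
```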
We are now ready to summarize our results (besides Theorem \[thm:biconnected\]) in Figure \[fig:overview\].
[Figure \[fig:overview\]: the containment relations among the graph classes studied, with each edge labeled P or NPC according to the complexity of the corresponding vertex deletion problem.]
Unfortunately, we have to leave the complexity of some problems open, particularly [ $\rightarrow$ ]{}, [ $\rightarrow$ ]{}, and [ $\rightarrow$ ]{}. Our final remarks are on the approximation algorithms, for which we are concerned with those not shown to be in P. All of them have constant-ratio approximations, which follow from either [@cao-16-almost-interval-recognition; @cao-17-unit-interval-editing] or the general observation of Lund and Yannakakis [@lund-93-approximation-maximum-subgraph]. On the other hand, none of the NP-complete problems admits a polynomial-time approximation scheme.
Preliminaries
=============
All graphs discussed in this paper are undirected and simple. A graph $G$ is given by its vertex set $V(G)$ and edge set $E(G)$, whose cardinalities will be denoted by $n$ and $m$ respectively. For a subset $X\subseteq V(G)$, denote by $G[X]$ the subgraph induced by $X$, and by $G - X$ the subgraph $G[V(G)\setminus X]$; we use $E(X)$ as a shorthand for $E(G[X])$, i.e., all edges among vertices in $X$. For a subset $E_-\subseteq E(G)$ of edges, we use $G - E_-$ to denote the subgraph with vertex set $V(G)$ and edge set $E(G)\setminus E_-$. We write $G - v$ and $G - e$ instead of $G - \{v\}$ and $G - \{e\}$ for $v\in V(G)$ and $e\in E(G)$ respectively.
For $\ell\ge 2$, we use $P_\ell$, $K_\ell$, and $I_\ell$ to denote an induced path, a clique, and an independent set, respectively, on $\ell$ vertices. For $\ell\ge 4$, we use $C_\ell$ to denote an induced cycle on $\ell$ vertices; such a cycle is also called a *hole*. Some small graphs that will be used in this paper are depicted in Figure \[fig:small-graphs\]. Note that $C_4$ and $2 K_2$ are complements of each other, while $P_4$ and $C_5$ are self-complementary.
We say that a graph $G$ *contains* a graph $F$ if $F$ is isomorphic to some induced subgraph of $G$. A graph is [*$F$-free*]{} if it does not contain $F$; for a set $\cal F$ of graphs, a graph $G$ is [*$\cal F$-free*]{} if it is $F$-free for every $F\in \cal F$. Each set $\cal F$ defines a hereditary graph class, and every hereditary graph class can be defined this way; in other words, for any hereditary graph class $\cal C$, there is a (possibly infinite) set $\cal F$ of graphs such that $G\in \cal C$ if and only if $G$ is $\cal F$-free. Each graph $F$ in $\cal F$ is usually assumed to be minimal, in the sense that $F$ is not in $\cal C$ but every proper induced subgraph of $F$ is; such graphs are called the *minimal obstructions* of $\cal C$. One should note that a minimal obstruction of a graph class need not be a minimal obstruction of its subclasses; e.g., the minimal obstruction $C_5$ of is not a minimal obstruction of , because $C_5$ contains the non-threshold graph $P_4$ as a proper induced subgraph.
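The containment relation can be checked by brute force for tiny obstructions. The helper below is a hypothetical illustration of the definition (the paper itself never needs such a routine): it tries every candidate isomorphism and insists that both adjacency and non-adjacency be preserved, which is what *induced* means.

```python
from itertools import combinations, permutations

def contains(G, F):
    """Does G contain F as an induced subgraph?  Brute force over all
    |V(F)|-subsets of V(G) and all bijections onto F; graphs are dicts
    mapping each vertex to its set of neighbours.  Only for tiny F."""
    fv = list(F)
    for subset in combinations(list(G), len(fv)):
        for image in permutations(subset):
            m = dict(zip(fv, image))  # candidate isomorphism V(F) -> subset
            # induced: adjacency AND non-adjacency must both be preserved
            if all((m[u] in G[m[v]]) == (u in F[v])
                   for u, v in combinations(fv, 2)):
                return True
    return False

p3 = {0: {1}, 1: {0, 2}, 2: {1}}
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
k3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
assert contains(c4, p3)        # C_4 contains an induced P_3
assert not contains(k3, p3)    # a clique is P_3-free
```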
The vertex deletion problem with target class $\cal C$ can also be phrased as finding a maximum induced subgraph in the class $\cal C$. For example, both vertex cover and independent set refer to the vertex deletion problem to the class , which is exactly the class of $K_2$-free graphs. Although the two formulations may behave differently with respect to approximation, they are equivalent for our purpose, and we use them interchangeably, depending on which is more convenient in the context. Yet another way to view the vertex deletion problem toward an $\cal F$-free class is as finding a minimum set of vertices that hits all induced subgraphs of the input graph that lie in $\cal F$.
We now define the graph classes we are going to study. For the convenience of the reader, we collect the obstructions of all the graph classes and their containment relationships in Figure \[fig:classes-containment\] of the appendix. Although the containment relationships of these classes can be readily checked from their obstruction characterizations, it is often far more informative to view them through the lens of their definitions and/or geometric representations.
A graph is *chordal* if every cycle of length greater than three has a chord, i.e., an edge between two non-consecutive vertices of the cycle. A graph is an *interval graph* if its vertices can be assigned to intervals on the real line such that two vertices are adjacent if and only if their corresponding intervals intersect, and a *unit interval graph* if all the intervals have the same length. A graph $G$ is a *trivially perfect graph* if for every induced subgraph of $G$, the size of a largest independent set equals the number of maximal cliques [@golumbic-78-trivially-perfect]. Chordal graphs are precisely the intersection graphs of subtrees of a tree, while interval graphs are the intersection graphs of subpaths of a path. Therefore, $\subset$ . A trivially perfect graph can be represented by a set of *non-overlapping* intervals; in other words, if two intervals intersect, then one is contained in the other. Therefore, $\subset$ .
A graph is a *cluster graph* if every component is a clique. A graph is a *block graph* if the deletion of all [cut vertices]{} leaves a cluster graph. It is known that a graph is $\{2 K_2, P_3\}$-free if and only if it is a cluster graph in which at most one clique is nontrivial, i.e., has more than one vertex. It is immediate from the definitions that $\subset$ $\subset$ . Moreover, block graphs are precisely those chordal graphs in which any two maximal cliques share at most one vertex.
A graph is a *split graph* if its vertex set can be partitioned into a clique $C$ and an independent set $I$, and a *complete split graph* if, in addition, every vertex in $C$ is adjacent to all vertices in $I$; we use $C\uplus I$ to denote the split partition. Note that either of the two sets may be empty. A graph $G$ is a *threshold graph* if there are a real number $t$, the so-called *threshold*, and an assignment $f: V(G)\to \mathbb{R}$ such that $u v\in E(G)$ if and only if $f(u) + f(v) \ge t$ [@chvatal-77-inequalities-in-ip]. It is easy to verify that $\subset$ $\subset$ : The first inclusion is witnessed by $t = 1$ and the assignment $f(v) = 1$ if $v\in C$ and $f(v) = 0$ otherwise; the second by the split partition $\{v: f(v) \ge t / 2\}\uplus \{v: f(v) < t / 2\}$. Further, if we order the vertices in the independent set $I$ of a threshold graph such that $$f(v_1) \le \cdots \le f(v_{|I|}) < t/2,$$ then $$N(v_1) \subseteq \cdots \subseteq N(v_{|I|}).$$ Likewise, there is an ordering of the vertices $u_1$, $\ldots$, $u_{|C|}$ in $C$ such that $N[u_1] \subseteq \cdots \subseteq N[u_{|C|}]$.
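The threshold definition and the nested-neighborhood property can be exercised directly; the weights and the threshold below are illustrative choices of ours, not taken from the paper.

```python
def threshold_graph(f, t):
    """Threshold graph on the keys of f (vertex -> real weight):
    u and v are adjacent iff f(u) + f(v) >= t."""
    return {u: {v for v in f if v != u and f[u] + f[v] >= t} for u in f}

f = {'a': 0.9, 'b': 0.7, 'c': 0.3, 'd': 0.1}
G = threshold_graph(f, 1.0)

# Split partition as in the text: C = {v : f(v) >= t/2} is a clique,
# I = {v : f(v) < t/2} is an independent set.
assert 'b' in G['a'] and 'a' in G['b']          # C = {a, b} is a clique
assert 'd' not in G['c'] and 'c' not in G['d']  # I = {c, d} is independent

# Neighbourhoods over I are nested: f(d) <= f(c) implies N(d) is a subset of N(c).
assert G['d'] <= G['c']
```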
The reader may have noticed the striking resemblance between [split]{} graphs and [bipartite]{} graphs. Indeed, if we add edges to turn one side of a bipartite graph into a clique, we end up with a split graph; equivalently, given a split graph $G$ with split partition $C\uplus I$, the subgraph $G - E(C)$ is bipartite. Clearly, $G - E(C)$ is a complete bipartite graph if and only if $G$ is a complete split graph. If $G$ is a threshold graph, then $G - E(C)$ is a *chain graph* [@yannakakis-81-bipartite-node-deletion; @yannakakis-81-minimum-fill-in]. Finally, denotes the complement of .
Recall that Yannakakis [@yannakakis-81-bipartite-node-deletion] has given a dichotomy on the vertex deletion problem from bipartite graphs. Inspired by this and the aforementioned connection between bipartite graphs and split graphs, a natural approach to problems [ $\rightarrow$ ]{}[$\cal C$]{} would be to reduce them to the corresponding problems on bipartite graphs (for algorithms) or the other way around (for hardness results). This approach, however, turns out to be less straightforward than one might expect.
The first trouble is that a split graph can have many different split partitions, and thus can be mapped to many different bipartite graphs. For instance, a naive reduction for the [ $\rightarrow$ ]{} problem is to the [ $\rightarrow$ ]{} problem, which can be solved in polynomial time.[^2] As shown in Figure \[fig:split-complete-split\], however, this reduction may end up with a suboptimal solution. Some remarks on this example are worthwhile. The input graph in Figure \[fig:split-complete-split\] has a unique split partition, but $G - v_3$, which is the unique optimal solution, has four split partitions, of which only one is complete. As we will see in the next section, the problem can still be solved efficiently, by noticing that a split graph has only a polynomial number of different split partitions, all of them very *similar*.
[Figure \[fig:split-complete-split\]: a split graph with clique $\{b, c\}$ and independent vertices $v_1, v_2, v_3$, where $v_1$ and $v_2$ are adjacent to $b$ and $v_3$ is adjacent to $c$; the naive reduction returns a suboptimal solution, while deleting $v_3$ is the unique optimal solution.]
The situation becomes even gloomier when we consider the transformation from bipartite graphs to split graphs. A bipartite graph can have an exponential number of bipartitions, and may thus be mapped to as many distinct split graphs. Consider, for example, an attempt to reduce the [ $\rightarrow$ ]{} problem to the problem [ $\rightarrow$ ]{}[$\cal C$]{} for some subclass $\cal C$ of . A diamond-free split graph admits a split partition $C\uplus I$ in which each vertex of $I$ has degree at most one. A natural candidate for $\cal C$ is the class of disjoint unions of stars, for which the [ $\rightarrow$ ]{}[$\cal C$]{} problem is known to be NP-complete [@yannakakis-81-bipartite-node-deletion]. However, the naive reduction does not work: Given a bipartite graph that is a disjoint union of stars, if we take a wrong bipartition and add edges to make it a split graph, we may introduce many diamonds. As shown in Figure \[fig:bipartite-stars\], even connectedness, which imposes a unique bipartition, would not save us here.
\(b) at (-0.5,0) ; (v1) at (-1,2) ; (u1) at (0,2) ; (v1) – (b) – (u1);
(u2) at (-0.5,0) ; (v1) at (-0.5,2) ; (v2) at (0,2) ; (v1) – (u2) – (v2);
(u3) at (-0.5,0) ; (v1) at (-1,2) ; (v2) at (-0.5,2) ; (v1) – (u3) – (v2);
\(b) at (-0.5,0) ; (u4) at (-1,2) ; (v2) at (0,2) ; (u4) – (b) – (v2);
(v3) at (-0.5, 0) ; (u1) – (v3) – (u2) (u4) – (v3) – (u3);
Algorithmic results
===================
This section gives the polynomial-time algorithms. Our focus is on the use of structural properties; where possible, we present the simplest algorithms without elaborating on implementation details. These problems may well have more efficient algorithms, and with more sophisticated data structures, some of them may even be solvable in linear time. Our first two results are on split graphs, for which we need to put split partitions under scrutiny. Let $C\uplus I$ be a split partition of a split graph $G$. If some vertex $v$ in $I$ is completely adjacent to $C$, then we can move it to $C$ to obtain another split partition $C' = C\cup\{v\}$ and $I' = I\setminus\{v\}$. Such a vertex $v$ need not be unique, but the partitions obtained by moving different such vertices are isomorphic. Moreover, after such a move, no vertex of $I'$ can be completely adjacent to $C'$. The following proposition fully characterizes split graphs with more than one split partition.
\[lem:split-partitions\] Let $G$ be a split graph with at least two split partitions, and let $C\uplus I$ and $C'\uplus I'$ be two different split partitions of $G$.
(i) The difference between $|C|$ and $|C'|$ is at most $1$.
(ii) If $|C| = |C'| + 1$, then $C$ is a maximum clique, and $I'$ is a maximum independent set of $G$; moreover, $C'\subset C$.
(iii) If $|C| = |C'|$, then $G - E(C)$ and $G - E(C')$ are isomorphic.
As a result, a split graph has either one or two essentially distinct split partitions. On the other hand, of all split partitions of a complete split graph, only one, namely the one whose independent set is largest, satisfies the definition of complete split graphs, and we will exclusively refer to it when discussing a complete split graph.
Let $G$ be a split graph with split partition $C\uplus I$ and let $\underline G$ be a $\{2 K_2, P_3\}$-free subgraph of $G$. If $\underline G$ has edges, all of them must be in the same nontrivial clique. At most one vertex of this clique can be from $I$; therefore, all other vertices of $I$ either are deleted or become isolated in $\underline G$. In other words, for each other vertex $v$ in $I$, either $v$ or all its neighbors have to be deleted.
Figure \[fig:split-cluster\]: the algorithm for the [ $\rightarrow$ ]{} problem.

[Input]{}: a split graph $G$ with split partition $C\uplus I$.\
[Output]{}: a minimum set $V_-\subseteq V(G)$ such that $G - V_-$ is [$\{2 K_2, P_3\}$-free]{}.

0. ${\cal S}\leftarrow \emptyset$;\
1. build a bipartite graph $G'$ by removing all edges among $C$ from $G$;\
2. find a minimum vertex cover of $G'$, and add it to ${\cal S}$;\
3. [**for each**]{} $v\in I$ [**do**]{}\
       find a minimum vertex cover $X$ of $G' - (C\setminus N(v)) - v$;\
       add $X \cup (C\setminus N(v))$ to $\cal S$;\
4. [**return**]{} a set in $\cal S$ with the minimum cardinality.
\[thm:split-cluster\] The [ $\rightarrow$ ]{} problem is in P.
Let $G$ be the input graph to the [ $\rightarrow$ ]{} problem and let $C\uplus I$ be a split partition of $G$. We use the algorithm in Figure \[fig:split-cluster\] to find a minimum solution to $G$. To argue its correctness, we show that (i) every set in $\cal S$, added in step 2 or 3, is a solution to $G$, and (ii) at least one of them is minimum. For (i), it is easy to verify that any vertex cover of $G' = G - E(C)$ is a solution: There is no edge between $C$ and $I$ after its deletion. The situation in step 3 is similar; note that $N[v] \uplus (I\setminus \{v\})$ is a split partition of $G - \big( C\setminus N(v) \big)$.
Let $V_-$ be a minimum solution to $G$. In the first case, every vertex $v\in I\setminus V_-$ is isolated in $G - V_-$. In other words, $V_-$ contains a vertex cover of $G' = G - E(C)$, and then the solution found by step 2 is already the minimum. Henceforth we assume that there exists a vertex $v\in I\setminus V_-$ such that $N(v)\not\subseteq V_-$. Since any vertex $u\in N(v)$ and $w\in C\setminus N(v)$ induce a $P_3$ with $v$, in this case all vertices in $C\setminus N(v)$ must be in $V_-$. Note that the vertex $v$ is unique: If two vertices in $I\setminus V_-$ have neighbors in $C\setminus V_-$, then they are in a non-clique component. Therefore, after removing $C\setminus N(v)$ and $v$ from the graph, it reduces to the first case. This justifies step 3.
The algorithm makes $O(n)$ calls to an algorithm for the bipartite vertex cover problem, each taking $O(m\sqrt{n})$ time, and hence the whole algorithm runs in $O(m n \sqrt{n})$ time.
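The two-case algorithm just analyzed can be sketched in code. The sketch below is ours: graphs are encoded by the split partition plus a neighborhood map, and the $O(m\sqrt{n})$ matching-based vertex cover routine is replaced by a brute-force stand-in (`min_vertex_cover`), so it is meant only for tiny examples.

```python
from itertools import combinations

def min_vertex_cover(edges, verts):
    """Brute-force minimum vertex cover; a stand-in for the O(m sqrt(n))
    bipartite matching routine the paper assumes."""
    for k in range(len(verts) + 1):
        for S in combinations(verts, k):
            s = set(S)
            if all(u in s or w in s for u, w in edges):
                return s

def split_to_cluster(C, I, N):
    """Minimum deletion set making a split graph a cluster graph.
    C and I are the sides of the split partition; N maps each v in I
    to its neighbourhood N(v), a subset of C."""
    cross = [(u, v) for v in I for u in N[v]]           # edges of G' = G - E(C)
    best = min_vertex_cover(cross, list(C) + list(I))   # step 2: isolate all of I
    for v in I:                                         # step 3: v keeps neighbours
        keep = N[v]                                     # surviving part of C
        rest = [(u, w) for w in I if w != v for u in N[w] if u in keep]
        X = min_vertex_cover(rest, [x for x in list(keep) + list(I) if x != v])
        cand = X | (C - keep)
        if len(cand) < len(best):
            best = cand
    return best

# P_4 drawn as a split graph: clique {a, b}, independent {x, y}, edges xa and yb.
sol = split_to_cluster({'a', 'b'}, {'x', 'y'}, {'x': {'a'}, 'y': {'b'}})
assert len(sol) == 1 and sol <= {'a', 'b'}   # deleting one clique vertex suffices
```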
Noting that $\cap$ is precisely , we can apply the algorithm of Theorem \[thm:split-cluster\] to the [ $\rightarrow$ ]{} problem. Moreover, since is self-complementary, while the complement of is , it follows from Proposition \[lem:complement\] that the [ $\rightarrow$ ]{} problem is also in P.
\[lem:split-complete-split\] Problems [ $\rightarrow$ ]{} and [ $\rightarrow$ ]{} are in P.
An observation similar to that in the proof of Theorem \[thm:split-cluster\] can be used to solve the [ $\rightarrow$ ]{} problem. We start from a simple property of connected graphs in $\cap$ .
[Figure: a unit interval model of a connected split graph, with the clique intervals between $u_\ell$ and $u_r$ and the independent intervals $v_1$, $v$, $v_2$, as used in the proof of Proposition \[lem:split+uig\].]
\[lem:split+uig\] Let $G$ be a connected split graph and let $C\uplus I$ be a split partition of $G$. If $G$ is a unit interval graph, then $|I| \le 3$, and the equality holds only when there is a vertex $v\in I$ adjacent to all vertices in $C$.
We prove that $|I| \le 2$ when $C$ is a maximum clique of $G$; the proposition then follows from Proposition \[lem:split-partitions\](i). Let $u_\ell$ and $u_r$ be the vertices in $C$ with the leftmost and the rightmost intervals, respectively. Suppose for contradiction that $|I| > 2$. Let $v_1$ and $v_2$ be the vertices in $I$ with the leftmost and the rightmost intervals, respectively. Then ${\ensuremath{{\mathtt{lp}(u_\ell)}}} < {\ensuremath{{\mathtt{rp}(v_1)}}} < {\ensuremath{{\mathtt{lp}(u_r)}}} < {\ensuremath{{\mathtt{rp}(u_\ell)}}} < {\ensuremath{{\mathtt{lp}(v_2)}}} < {\ensuremath{{\mathtt{rp}(u_r)}}}$, where the second and the fourth inequalities follow from the fact that $C$ is a maximum clique, and the others from the choices of the four vertices. Since $G$ is connected, the interval for any other vertex $v$ in $I\setminus \{v_1, v_2\}$, which is nonempty, has to lie in $({\ensuremath{{\mathtt{rp}(v_1)}}}, {\ensuremath{{\mathtt{lp}(v_2)}}})$. But then it has to contain $[{\ensuremath{{\mathtt{lp}(u_r)}}}, {\ensuremath{{\mathtt{rp}(u_\ell)}}}]$, so $\{v\}\cup C$ is a clique, contradicting that $C$ is a maximum clique of $G$.
As in Theorem \[thm:split-cluster\], our algorithm for [ $\rightarrow$ ]{} distinguishes two cases, based on whether there is a vertex of $I\setminus V_-$ adjacent to all vertices in $C\setminus V_-$.
Figure \[fig:split-uig\]: the algorithm for the [ $\rightarrow$ ]{} problem.

[Input]{}: a split graph $G$ with split partition $C\uplus I$.\
[Output]{}: a minimum set $V_-\subseteq V(G)$ such that $G - V_-$ is a unit interval graph.

0. ${\cal S}\leftarrow \emptyset$;\
1. solve the [ $\rightarrow$ ]{} problem on $G$; add the solution to ${\cal S}$;\
[***case 1:***]{}\
2. [**for each**]{} $v\in I$ [**do**]{}\
       find a minimum vertex cover of $G - v - E(C)$, and add it to ${\cal S}$;\
3. [**for each pair**]{} $v_1, v_2\in I$ [**do**]{}\
   3.1. $G' \leftarrow G - \{v_1, v_2\} - E(C)$;\
   3.2. find a minimum vertex cover of $G' - N(v_1)\cap N(v_2)$, and add its union with $N(v_1)\cap N(v_2)$ to ${\cal S}$;\
   3.3. find a minimum vertex cover of $G' - C\setminus \big( N(v_1)\cup N(v_2) \big)$, and add its union with $C\setminus \big( N(v_1)\cup N(v_2) \big)$ to ${\cal S}$;\
[***case 2:***]{}\
4. [**for each**]{} $v\in I$ [**do**]{}\
       $G'' \leftarrow G - \big(C\setminus N(v) \big)$ with split partition $N[v]$ and $I\setminus \{v\}$;\
       solve $G''$ as in case 1, but append $C\setminus N(v)$ to each solution found;\
5. [**return**]{} a set in $\cal S$ with the minimum cardinality.
\[split-uig\] The [ $\rightarrow$ ]{} problem is in P.
Let $G$ be the input graph to the [ $\rightarrow$ ]{} problem and let $C\uplus I$ be a split partition of $G$. We use the algorithm in Figure \[fig:split-uig\] to find a solution. To argue its correctness, we show that all sets put into $\cal S$ in steps 1–4 are solutions to $G$, and at least one of them is minimum. It is clear for step 1. After the deletion of a solution found in step 2, only $v$ in $I$ remains adjacent to the remaining vertices of $C$. In step 3, only $v_1$ and $v_2$ from $I$ can remain adjacent to vertices in $C$. In step 3.2, no vertex in $C$ is adjacent to both $v_1$ and $v_2$; in step 3.3, every vertex in $C$ is adjacent to at least one of $v_1$ and $v_2$. In either case, it is easy to verify that the graph is a unit interval graph by building a unit interval model directly. Step 4 follows from the same argument as above: After the deletion of $C\setminus N(v)$, it reduces to one of the three previous steps.
Let $V_-$ be a minimum solution to $G$. If $G - V_-$ is $\{2K_2, P_3\}$-free, then the solution found by step 1 is the minimum. Henceforth we assume that $G - V_-$ contains a non-clique component $U$; note that such a component contains all vertices in $C\setminus V_-$ and hence is unique.
In the first case, every vertex $v\in U\cap I$ has at least one non-neighbor in $C\setminus V_-$, i.e., $N(v) \setminus V_-\subset C\setminus V_-$. According to Proposition \[lem:split+uig\], $|U\cap I| \le 2$. If $U\cap I = \{v\}$, then $G - (V_- \cup \{v\})$ is $\{2 K_2, P_3\}$-free and the only nontrivial clique $U\setminus\{v\}$ is a subset of $C$; hence step 2 always finds a minimum solution. In the remaining case, $U\cap I$ has two distinct vertices; let them be $v_1$ and $v_2$. Since any $u_1\in N(v_1)\cap N(v_2)$ and $u_2\in C\setminus \big( N(v_1)\cup N(v_2) \big)$ induce a claw with $\{v_1, v_2\}$, at least one of the two sets must be empty or completely contained in $V_-$. Steps 3.2 and 3.3 take care of these two situations separately.
We are now in the second case, where $C\setminus V_-\subseteq N(v)$ for some vertex $v\in I\setminus V_-$; in other words, $V_-$ contains all vertices in $C\setminus N(v)$. There may be two such vertices, in which case we can take $v$ to be either of them. Clearly, $N[v]$ and $I\setminus \{v\}$ then form a split partition of $G'' = G - \big(C\setminus N(v) \big)$, which has the solution $V_-\setminus \big(C\setminus N(v) \big)$. Moreover, under this new split partition, the problem reduces to the first case.
The algorithm makes $O(n^3)$ calls to the algorithm for the bipartite vertex cover problem, each taking $O(m\sqrt{n})$ time, and hence the whole algorithm runs in $O(m n^{3.5})$ time.
We now turn to problems whose inputs are interval graphs, for which we rely on interval models. Recall that an interval model for an interval graph is a set of intervals representing its vertices. In this paper, all intervals are closed. An interval model can be specified by the $2 n$ endpoints of the $n$ intervals, the interval for vertex $v$ being $[{\ensuremath{{\mathtt{lp}(v)}}}, {\ensuremath{{\mathtt{rp}(v)}}}]$.
For the [ $\rightarrow$ ]{} problem, the clique comes from some maximal clique of the input graph $G$ and can be enumerated. On the other hand, according to Proposition \[lem:split+uig\], there are at most three vertices in the independent set, which can be easily found. For interval graphs, however, the situation is more complicated.
[Figure \[fig:interval-cluster\]: an interval model; the bottom marks the endpoints $\alpha = {\ensuremath{{\mathtt{rp}(v_\ell)}}}$ and $\beta = {\ensuremath{{\mathtt{lp}(v_r)}}}$ used in the complete split case, and the top marks the pairwise disjoint windows $[\alpha_i, \beta_i]$, $i = 1, \ldots, 5$, used in the cluster case.]
Problems [ $\rightarrow$ ]{} and [ $\rightarrow$ ]{} are in P.
We solve both problems by finding maximum induced subgraphs, working on interval models. Fix an interval model for the input graph $G$; we may assume without loss of generality that no two distinct intervals share an endpoint.
For the [ $\rightarrow$ ]{} problem, we consider a maximum complete split subgraph $G[U]$. It is trivial if $G[U]$ is a clique; hence we assume otherwise. Let $C\uplus I$ be the split partition of $G[U]$, and let $$\alpha = {\ensuremath{{\mathtt{rp}(v_\ell)}}} = \min_{v\in I}{\ensuremath{{\mathtt{rp}(v)}}}\text{ and }\beta = {\ensuremath{{\mathtt{lp}(v_r)}}} = \max_{v\in I}{\ensuremath{{\mathtt{lp}(v)}}}.$$ Note that $|I| \ge 2$, as otherwise $G[U]$ would be a clique; hence $v_\ell\ne v_r$ and $\alpha < \beta$. See Figure \[fig:interval-cluster\]. It is easy to see that a vertex is in $C$ if and only if its interval fully contains $[\alpha, \beta]$; on the other hand, the maximality of $U$ requires us to take all such vertices. The independent set $I$ then consists of $v_\ell$, $v_r$, and a maximum independent set of the subgraph induced by the intervals satisfying $\alpha < {\ensuremath{{\mathtt{lp}(v)}}} < {\ensuremath{{\mathtt{rp}(v)}}} < \beta$. There are $O(n^2)$ pairs of endpoints to enumerate, and for each pair, both the clique and a maximum independent set can be found in $O(n)$ time; the whole algorithm thus runs in $O(n^3)$ time.
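A minimal sketch of this enumeration, under an interval encoding of our own (pairs `(lp, rp)` with all endpoints distinct): the clique-only case, which the text dismisses as trivial, is folded in as a fallback, and the maximum independent set inside $(\alpha, \beta)$ is found by the classic greedy on right endpoints.

```python
def max_complete_split(intervals):
    """Largest vertex set inducing a complete split graph, via the
    alpha/beta enumeration; intervals is a list of (lp, rp) pairs."""
    # clique-only fallback: a maximum clique = most intervals over one point
    best = max(([u for u in intervals if u[0] <= v[0] <= u[1]]
                for v in intervals), key=len)
    for vl in intervals:                      # v_l realises alpha = rp(v_l)
        for vr in intervals:                  # v_r realises beta = lp(v_r)
            a, b = vl[1], vr[0]
            if a >= b:
                continue
            clique = [u for u in intervals if u[0] < a and u[1] > b]
            inner = sorted((u for u in intervals if a < u[0] and u[1] < b),
                           key=lambda u: u[1])
            indep, last = [vl, vr], a
            for u in inner:                   # greedy maximum independent set
                if u[0] > last:
                    indep.append(u)
                    last = u[1]
            if len(clique) + len(indep) > len(best):
                best = clique + indep
    return best

# one long interval meeting three short pairwise disjoint ones: all 4 survive
assert len(max_complete_split([(0, 10), (1, 2), (3, 4), (5, 6)])) == 4
```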
We now consider the [ $\rightarrow$ ]{} problem. Suppose that $G[U]$ is a maximum cluster subgraph of $G$ and that it has $k$ cliques. For the $i$th clique $B_i$, define the two endpoints $$\alpha_i = \min_{v\in B_i} {\ensuremath{{\mathtt{lp}(v)}}} \text{ and } \beta_i = \max_{v\in B_i} {\ensuremath{{\mathtt{rp}(v)}}}.$$ Then all intervals for vertices in $B_i$ are contained in the interval $[\alpha_i, \beta_i]$. The $k$ intervals defined this way are pairwise disjoint, because there cannot be edges between two cliques of $G[U]$. Therefore, $B_i$ must be a maximum clique of the subgraph induced by $\{v: \alpha_i \le {\ensuremath{{\mathtt{lp}(v)}}} < {\ensuremath{{\mathtt{rp}(v)}}} \le \beta_i\}$, which can be found easily. See Figure \[fig:interval-cluster\]. The problem thus reduces to finding the $k$ pairs of endpoints $\alpha_i$ and $\beta_i$.
We build another weighted interval model as follows. For each ${\ensuremath{{\mathtt{lp}(v_\ell)}}}$ and each ${\ensuremath{{\mathtt{rp}(v_r)}}}$ with ${\ensuremath{{\mathtt{lp}(v_\ell)}}} < {\ensuremath{{\mathtt{rp}(v_r)}}}$, possibly $v_\ell = v_r$, we add an interval $[{\ensuremath{{\mathtt{lp}(v_\ell)}}}, {\ensuremath{{\mathtt{rp}(v_r)}}}]$, whose weight is set to the size of a maximum clique of the subgraph induced by $\{v: {\ensuremath{{\mathtt{lp}(v_\ell)}}} \le {\ensuremath{{\mathtt{lp}(v)}}} < {\ensuremath{{\mathtt{rp}(v)}}} \le {\ensuremath{{\mathtt{rp}(v_r)}}}\}$. We then find a set of pairwise disjoint intervals with the maximum total weight (equivalently, a maximum-weight independent set of the weighted interval graph represented by the new model). All the steps can be done in polynomial time.
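The final step, selecting pairwise disjoint windows of maximum total weight, is the classic weighted interval scheduling problem; a textbook $O(n \log n)$ dynamic program (not spelled out in the paper) suffices:

```python
import bisect

def max_weight_disjoint(windows):
    """Maximum total weight of pairwise disjoint closed intervals.
    windows: list of (lp, rp, weight) triples with distinct endpoints."""
    ivs = sorted(windows, key=lambda t: t[1])     # sort by right endpoint
    rps = [iv[1] for iv in ivs]
    best = [0] * (len(ivs) + 1)                   # best[i] = optimum over first i
    for i, (lp, rp, w) in enumerate(ivs):
        j = bisect.bisect_left(rps, lp)           # windows ending strictly before lp
        best[i + 1] = max(best[i], best[j] + w)   # skip window i, or take it
    return best[-1]

# windows (1,3), (4,6), (7,8) are disjoint with total weight 2+4+1 = 7,
# beating any selection using the overlapping window (2,5) of weight 4.
assert max_weight_disjoint([(1, 3, 2), (2, 5, 4), (4, 6, 4), (7, 8, 1)]) == 7
```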
It is easy to verify that the following greedy algorithm solves the [ $\rightarrow$ ]{} problem. We root the input graph at an arbitrary vertex, and repeatedly work on a leaf at the lowest level: If it has siblings (i.e., its parent has degree larger than $2$), then we delete its parent and put the parent into the solution; otherwise we delete the parent of its parent. As we will see, a similar idea enables us to solve the [ $\rightarrow$ ]{} problem. Recall that a *block* (also known as a biconnected component) of a graph $G$ is a maximal biconnected subgraph of $G$. The *block-cut tree* of a block graph has a vertex for each block and for each cut vertex, and an edge for each pair of a block and a cut vertex that belongs to the block. Note that every block of a block graph is a clique.
\[thm:interval-cluster\] The [ $\rightarrow$ ]{} problem is in P.
We construct the block-cut tree $T$ of the input graph $G$. A cut vertex $v$ of $G$ is denoted by the same label in $T$, while for a block vertex $u$ of $T$, we use $B(u)$ to denote the vertex set of the corresponding block of $G$. We arbitrarily root $T$ at some block vertex. Note that all leaves of $T$ are block vertices and their neighbors are not; this invariant is maintained throughout the algorithm. Until the tree becomes empty, the algorithm picks a leaf vertex $u$ at the lowest level. Let $v$ be its parent. If $v$ has other children, we remove $v$ and its children from $T$ and put $v$ into the solution $V_-$. Otherwise, $u$ is the only child of $v$; let $u'$ be the parent of $v$, and let $v'$ be the parent of $u'$. If at least one vertex in the clique $B(u')$ is not a cut vertex, then we remove $u$ and $v$ from $T$ and put $v$ into $V_-$. Otherwise, we remove the subtree rooted at $v'$ from $T$; we put $B(u')\setminus \{v\}$ into the solution, and for each other child $u_i$ of $v'$ that is not a leaf, we solve the subgraph induced by $B(u_i)$ and its children. The correctness is quite straightforward, so we omit the details.
The last three problems take chordal graphs as input.
\[thm:chordal-co-chain\] The [ $\rightarrow$ ]{} problem is in P.
The vertices of a [co-chain]{} graph can be partitioned into two cliques; on the other hand, any two maximal cliques of a chordal graph together induce a co-chain graph. Therefore, the problem is to find two maximal cliques whose union has the maximum cardinality. Since a chordal graph has at most $n$ maximal cliques, this can be easily done in $O(n^2)$ time.
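Assuming the list of maximal cliques has already been computed (e.g., from a perfect elimination ordering, which we do not reproduce here), the pair enumeration is immediate; the helper below is our illustrative sketch:

```python
from itertools import combinations

def best_clique_pair(maximal_cliques):
    """Pair of maximal cliques (given as sets) whose union is largest; by
    the argument in the text, that union induces a co-chain subgraph."""
    best = max(maximal_cliques, key=len)      # a single clique is also co-chain
    for A, B in combinations(maximal_cliques, 2):
        if len(A | B) > len(best):
            best = A | B
    return best

# maximal cliques of a small chordal graph (illustrative)
cliques = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}]
assert best_clique_pair(cliques) == {1, 2, 3, 4, 5, 6, 7}
```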
For any $p > 1$, the [ $\rightarrow$ ]{} problem is in P.
It is known that a chordal graph is $K_p$-free if and only if it has treewidth at most $p - 2$. Thus the problem is to find an induced subgraph of treewidth at most $p - 2$ with the maximum number of vertices. It is known that such a problem can be solved in polynomial time for chordal graphs [@yannakakis-87-k-colorable-subgraph]: Note that a chordal graph is $K_p$-free if and only if it can be colored by $p-1$ colors.
The [ $\rightarrow$ ]{} problem is in P.
We remark that Theorems \[thm:split-cluster\]–\[thm:interval-cluster\] can be adapted to the weighted versions of the problems.
Hardness {#sec:hardness}
========
We now turn to hardness results. Here the problems should be understood to be their decision versions: The input includes, apart from a graph $G$ from ${\cal C}_1$, a positive integer $k$, and the problem is to decide whether $G$ can be made a graph in ${\cal C}_2$ by deleting at most $k$ vertices. All of them are in NP because all the concerned graph classes can be recognized in polynomial time. Our first hardness result, on [ $\rightarrow$ ]{}, follows easily from the results of Yannakakis [@yannakakis-81-bipartite-node-deletion] on bipartite graphs. Recall that a bipartite graph is not a chain graph if and only if it contains some $2 K_2$, and a split graph is not a threshold graph if and only if it contains some $P_4$.
The [ $\rightarrow$ ]{} problem is NP-complete.
Let $G$ be a bipartite graph with bipartition $C\uplus I$. We add all possible edges among $C$ to make it a clique. Let $G'$ be the resulting graph, which is clearly a split graph, witnessed by the split partition $C\uplus I$. We argue that for every vertex set $U$, the graph $G[U]$ is a chain graph, i.e., $2 K_2$-free, if and only if $G'[U]$ is a threshold graph, i.e., $P_4$-free. Let $X$ be any set of four vertices. If $G[X]$ is a $2 K_2$, then $|X\cap C| = |X\cap I| = 2$, and then $G'[X]$ would be a $P_4$. The other direction can be argued similarly. Since the [ $\rightarrow$ ]{} problem is NP-hard [@yannakakis-81-bipartite-node-deletion], the lemma follows.
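The reduction and the four-vertex equivalence can be checked mechanically; the sketch below (helper names and the small example are ours) verifies that $G[X]$ is a $2K_2$ exactly when $G'[X]$ is a $P_4$:

```python
import networkx as nx
from itertools import combinations

def complete_side(G, C):
    """The reduction: add all edges among C, turning a bipartite graph
    with bipartition C ⊎ I into a split graph with the same partition."""
    H = G.copy()
    H.add_edges_from(combinations(C, 2))
    return H

def is_2k2(H):
    """H on 4 vertices is a 2K_2: two disjoint edges, all degrees 1."""
    return H.number_of_edges() == 2 and all(d == 1 for _, d in H.degree())

def is_p4(H):
    """H on 4 vertices is a P_4: three edges, degree sequence (1,1,2,2)."""
    return H.number_of_edges() == 3 and \
        sorted(d for _, d in H.degree()) == [1, 1, 2, 2]

# A small bipartite example with C = {a, b, c} and I = {x, y, z}.
C = {'a', 'b', 'c'}
G = nx.Graph([('a', 'x'), ('b', 'y'), ('c', 'z'), ('a', 'y')])
Gp = complete_side(G, C)
```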
Recall that every threshold graph is an interval graph, and this can be generalized as follows. Let $G_1$ and $G_2$ be two threshold graphs with split partitions $C\uplus I$ and $C'\uplus I'$ respectively. We let $G_1 \bowtie_{(C, C')} G_2$, or simply $G_1 \bowtie G_2$ as in the rest of the paper the partitions are always clear from context, denote the graph obtained from them by adding all possible edges between $C$ and $C'$—i.e., its vertex set and edge set are $V(G_1)\cup V(G_2)$ and $E(G_1)\cup E(G_2)\cup (C\times C' )$ respectively. This is clearly a split graph with split partition $(C \cup C')\uplus (I \cup I')$. One can verify via their obstructions that $G_1 \bowtie G_2$ is also an interval graph, as follows. A split graph that is not an interval graph has to contain a tent, a net, or a rising sun (see Figure \[fig:small-graphs\]). Each of them has three pairwise nonadjacent vertices, which would have to come from $I\cup I'$, but a quick inspection of these three graphs shows that this is impossible.
\[lem:merge-threshold\] For any two threshold graphs $G_1$ and $G_2$, the graph $G_1 \bowtie G_2$ is an interval graph.
A better way to look at Proposition \[lem:merge-threshold\] is probably through interval models.[^3] Let $G$ be a threshold graph with split partition $C\uplus I$, and let vertices in $I$ be ordered in a way that $N(v_1) \subseteq N(v_2)
\subseteq \cdots \subseteq N(v_{|I|}) $. We can build an interval model for $G$ by setting intervals $$\begin{aligned}
[i, i + 0.5] &&\text{ for every } v_i\in I,
\notag
\\
\label{eq:threshold}
\big [\min \{i : v_i\in N(v)\}, |I| + 2 \big] &&\text{ for every }
v\in N(I), \text{ and}
\\
\notag
\big[|I| + 1, |I| + 2 \big] &&\text{ otherwise }(i.e., v\in C\setminus
N(I)).\end{aligned}$$ See Figure \[fig:threshold-model\] for illustration.
*(Figure \[fig:threshold-model\]: the interval model of a threshold graph given by \[eq:threshold\]; short intervals $[i, i+0.5]$ for the vertices $v_i\in I$, and long intervals reaching $|I|+2$ for the vertices in $C$.)*
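The construction can be sketched as follows (a toy implementation under the stated ordering assumption on $I$; all names are ours), together with a check that the intervals intersect exactly for adjacent pairs:

```python
def threshold_interval_model(C, I, N):
    """Intervals from the displayed construction.  I must be ordered so
    that N(v_1) ⊆ N(v_2) ⊆ ... ⊆ N(v_|I|); N maps each vertex of I to
    its neighborhood."""
    model = {}
    for i, v in enumerate(I, start=1):
        model[v] = (i, i + 0.5)                    # short intervals for I
    for u in C:
        hits = [i for i, v in enumerate(I, start=1) if u in N[v]]
        if hits:                                   # u ∈ N(I)
            model[u] = (min(hits), len(I) + 2)
        else:                                      # u ∈ C \ N(I)
            model[u] = (len(I) + 1, len(I) + 2)
    return model

def intersect(a, b):
    return max(a[0], b[0]) <= min(a[1], b[1])

# Threshold graph with clique C = {c1, c2}, independent set I = [v1, v2],
# and nested neighborhoods N(v1) = {c2} ⊆ N(v2) = {c1, c2}.
model = threshold_interval_model(
    {'c1', 'c2'}, ['v1', 'v2'], {'v1': {'c2'}, 'v2': {'c1', 'c2'}})
```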
An interval model for $G_1\bowtie G_2$ can be built from the interval models for $G_1$ and $G_2$ by (i) keeping the intervals for $G_1$, and (ii) setting the interval to be $\big[ |I|+ |I'| + 3 - {\ensuremath{{\mathtt{rp}(v)}}}, |I|+ |I'| + 3 - {\ensuremath{{\mathtt{lp}(v)}}} \big]$ for each $v\in V(G_2)$. See Figure \[fig:joint-threshold\].
*(Figure \[fig:joint-threshold\]: an interval model for $G_1\bowtie G_2$; the model for $G_1$ is kept, and the model for $G_2$ is mirrored into the range ending at $|I| + |I'| + 2$, so that all clique intervals overlap in the middle.)*
We are now ready to prove the first major theorem of this section.
\[thm:chordal-to-interval\] The [ $\rightarrow$ ]{} problem is NP-complete.
It is clear that the problem is in NP. Let $G$ be a split graph with split partition $C\uplus I$. We take a complete split graph $G'$ with split partition $C'\uplus I'$, where $|C'| = |I'| = |C|$, and let $H = G\bowtie G'$. We argue that ($G, k$) is a yes-instance of the [ $\rightarrow$ ]{} problem if and only if ($H, k$) is a yes-instance of the [ $\rightarrow$ ]{} problem. Since both are trivially yes-instances when $k \ge |C|$, we may assume henceforth that $k < |C|$.
Suppose that $G - V_-$, where $|V_-| \le k$, is a threshold graph. According to Proposition \[lem:merge-threshold\], $( G - V_-) \bowtie G'$ is an interval graph. It is the same graph as $H - V_-$. Therefore, $V_-$ is a solution of ($H, k$). This verifies the only if direction.
Now suppose that $H - V_-$, where $|V_-| \le k$, is an interval graph. Suppose for contradiction that $\underline G = G - \big( V_-\cap V(G) \big)$ is not a threshold graph. Then $\underline G$ must contain some $P_4$; let it be $v_1 u_1 u_2 v_2$. Since $\underline G$ is a split graph, we must have $u_1, u_2\in C$ and $v_1, v_2\in I$. On the other hand, by the assumption $k < |C|$, neither $C'\setminus V_-$ nor $I'\setminus V_-$ can be empty. Let $u\in C'\setminus V_-$ and $v\in I'\setminus V_-$. By the construction, the only edges between $\{u, v\}$ and $\{v_1, u_1, u_2, v_2\}$ are $u u_1$ and $u u_2$, but then these six vertices together induce a net in $H - V_-$, a contradiction.
The [ $\rightarrow$ ]{} problem is NP-complete.
The last result is on the deletion of any biconnected subgraph from chordal graphs. Recall that a vertex $v$ is *simplicial* in $G$ if $N[v]$ is a clique. A graph is chordal if and only if we can make it empty by deleting simplicial vertices in the remaining graph [@dirac-61-chordal-graphs].
\[thm:biconnected-2\] Let $F$ be a biconnected chordal graph. If $F$ is not complete, then the [ $\rightarrow$ ]{} problem is NP-complete. Moreover, if $F$ is a complete split graph with split partition $C\uplus I$, where $|C| = 2$ and $|I| \ge 2$, then the [ $\rightarrow$ ]{} problem is NP-complete.
We use the following reduction from the vertex cover problem. Let $G$ be an input graph to the vertex cover problem; we conduct the following operations.
1. For each edge $uv \in E(G)$, add a distinct copy of $F$ such that each of them uses $u v$ as one of its edges. We say that $u, v$ are the attachments for this copy of $F$.
2. Add all possible edges among $V(G)$ to make it complete.
Let $G'$ be the obtained graph. To see that $G'$ is chordal, we give an explicit way of eliminating simplicial vertices to make $G'$ empty. A chordal graph either is a clique or contains two nonadjacent simplicial vertices; when it is a clique, all its vertices are simplicial. For each copy of $F$, we can find a simplicial vertex in $V(F)\setminus \{u, v\}$. We keep doing this until only vertices in $V(G)$ remain. They have been made a clique, and thus all of them are simplicial.
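The two construction steps can be sketched as follows (a toy instance with $F$ the diamond, i.e., $K_4$ minus an edge, which is biconnected, chordal, and not complete; names are ours), and networkx's chordality test confirms the claim on this small example:

```python
import networkx as nx
from itertools import combinations

def vc_reduction(G, F, e_F):
    """Reduction from Vertex Cover: for every edge uv of G, glue a fresh
    copy of F onto uv along F's edge e_F (step 1), then complete V(G)
    into a clique (step 2)."""
    Gp = nx.Graph()
    Gp.add_edges_from(combinations(G.nodes, 2))    # step 2: clique on V(G)
    a, b = e_F
    for idx, (u, v) in enumerate(G.edges):         # step 1: copies of F
        relabel = {a: u, b: v}
        for w in F.nodes:
            relabel.setdefault(w, (idx, w))        # fresh vertices per copy
        Gp.add_edges_from((relabel[x], relabel[y]) for x, y in F.edges)
    return Gp

# F = the diamond, attached along its middle edge ('a', 'b').
F = nx.Graph([('a', 'b'), ('a', 'p'), ('b', 'p'), ('a', 'q'), ('b', 'q')])
G = nx.path_graph(4)                               # a Vertex Cover instance
Gp = vc_reduction(G, F, ('a', 'b'))
```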
We argue that $G$ has a vertex cover of size $k$ if and only if we can delete $k$ vertices from $G'$ to make it $F$-free. The following fact will be essential. Consider any copy $X$ of $F$ with attachments $u$ and $v$. If we delete $u$ or $v$, then the other becomes a cut vertex, and the vertices of $X\setminus \{u, v\}$ lie in different blocks from the other vertices of $G'$. But any other copy of $F$, if it exists, must be completely contained in a single block, and thus it cannot contain any vertex of $X$.
Suppose that $V_-$ is a vertex cover of size $k$ in $G$. We claim that $\underline G = G' - V_-$ has no copy of $F$. Each copy of $F$ with attachments $u$ and $v$ loses at least one of $u$ and $v$, since $V_-$ covers the edge $uv$; by the fact above, its remaining new vertices cannot participate in any copy of $F$ in $\underline G$. Therefore, a copy of $F$ in $\underline G$, if one exists, has all its vertices from $V(G)$. But this is not possible because $F$ is not a clique.
Suppose now that $V_-$ is a solution to $G'$ of size $k$. We may assume that $V_-$ contains no new vertex: if it contains a vertex from a copy of $F$ with attachments $u$ and $v$, we can replace that vertex by $u$. (Note that the new set remains a solution to $G'$ because of the aforementioned fact.) Since $G' - V_-$ does not contain $F$, each copy of $F$ has at least one of its attachments in $V_-$. Therefore, each edge of $G$ has at least one endpoint in $V_-$, which means that $V_-$ is a vertex cover of $G$.
Problems [ $\rightarrow$ ]{} and [ $\rightarrow$ ]{} are NP-complete.
Appendix {#appendix .unnumbered}
========
The minimal forbidden induced subgraphs for chordal graphs are well known. For all the classes at lower levels, their forbidden induced subgraphs with respect to their immediate super-classes are given on the edges. From these we are able to derive all the minimal forbidden induced subgraphs for each of these classes. For example, the characterization of unit interval graphs follows from the characterization of interval graphs and the fact that we can find a claw in a chordal witness for an *asteroidal triple* (i.e., three vertices such that each pair of them is connected by a path avoiding the neighbors of the third one) that is not a net or a tent. Likewise, the minimal forbidden induced subgraphs of trivially perfect graphs can be derived from those of interval graphs and the fact that all chordal witnesses for asteroidal triples and all holes that are not $C_4$’s contain a $P_4$.
[10]{}
Akanksha Agrawal, Daniel Lokshtanov, Pranabendu Misra, Saket Saurabh, and Meirav Zehavi. Feedback vertex set inspired kernel for chordal vertex deletion. In Klein [@soda-2017], pages 1383–1398. [](http://dx.doi.org/10.1137/1.9781611974782.90).
Claude Berge. Some classes of perfect graphs. In Frank Harary, editor, [*Graph Theory and Theoretical Physics*]{}, pages 155–166. Academic Press, New York, 1967.
Alan A. Bertossi. Dominating sets for split and bipartite graphs. , 19(1):37–40, 1984. [](http://dx.doi.org/10.1016/0020-0190(84)90126-1).
Ivan Bliznets, Fedor V. Fomin, Michal Pilipczuk, and Yngve Villanger. Largest chordal and interval subgraphs faster than [$2^n$]{}. , 76(2):569–594, 2016. A preliminary version appeared in ESA 2013. [](http://dx.doi.org/10.1007/s00453-015-0054-2).
Hans L. Bodlaender and Klaus Jansen. On the complexity of the maximum cut problem. , 7(1):14–31, 2000.
Leizhen Cai. Fixed-parameter tractability of graph modification problems for hereditary properties. , 58(4):171–176, 1996. [](http://dx.doi.org/10.1016/0020-0190(96)00050-6).
Yixin Cao. Linear recognition of almost interval graphs. In Robert Krauthgamer, editor, [*Proceedings of the 27th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*]{}, pages 1096–1115. [SIAM]{}, 2016. Full version available at arXiv:1403.1515. [](http://dx.doi.org/10.1137/1.9781611974331.ch77).
Yixin Cao. Unit interval editing is fixed-parameter tractable. , 253:109–126, 2017. A preliminary version appeared in ICALP 2015. [](http://dx.doi.org/10.1016/j.ic.2017.01.008).
Yixin Cao and D[á]{}niel Marx. Interval deletion is fixed-parameter tractable. , 11(3):21:1–21:35, 2015. A preliminary version appeared in SODA 2014. [](http://dx.doi.org/10.1145/2629595).
Yixin Cao and D[á]{}niel Marx. Chordal editing is fixed-parameter tractable. , 75(1):118–137, 2016. A preliminary version appeared in STACS 2014. [](http://dx.doi.org/10.1007/s00453-015-0014-x).
V[á]{}clav Chv[á]{}tal and Peter L. Hammer. Aggregation of inequalities in integer programming. In Peter L. Hammer, Ellis L. Johnson, Bernhard H. Korte, and George L. Nemhauser, editors, [*Studies in Integer Programming*]{}, volume 1 of [*Annals of Discrete Mathematics*]{}, pages 145–162. Elsevier, 1977. [](http://dx.doi.org/10.1016/S0167-5060(08)70731-3).
Marek Cygan and Marcin Pilipczuk. Split vertex deletion meets vertex cover: New fixed-parameter and exact exponential-time algorithms. , 113(5-6):179–182, 2013. [](http://dx.doi.org/10.1016/j.ipl.2013.01.001).
Gabriel A. Dirac. On rigid circuit graphs. , 25(1):71–76, 1961. [](http://dx.doi.org/10.1007/BF02992776).
Tinaz Ekim and Dominique de Werra. On split-coloring problems. , 10(3):211–225, 2005. [](http://dx.doi.org/10.1007/s10878-005-4103-7).
Fedor V. Fomin, Serge Gaspers, Daniel Lokshtanov, and Saket Saurabh. Exact algorithms via monotone local search. In Daniel Wichs and Yishay Mansour, editors, [*Proceedings of the 48th Annual [ACM]{} Symposium on Theory of Computing (STOC)*]{}, pages 764–775. [ACM]{}, 2016. Full version available at arXiv:1512.01621. [](http://dx.doi.org/10.1145/2897518.2897551).
Fedor V. Fomin, Daniel Lokshtanov, Neeldhara Misra, and Saket Saurabh. Planar [F]{}-deletion: Approximation and optimal [FPT]{} algorithms. In [*Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science*]{}, pages 470–479. IEEE Computer Society, 2012. [](http://dx.doi.org/10.1109/FOCS.2012.62).
Fedor V. Fomin, Ioan Todinca, and Yngve Villanger. Large induced subgraphs via triangulations and [CMSO]{}. , 44(1):54–87, 2015. A preliminary version appeared in SODA 2014. [](http://dx.doi.org/10.1137/140964801).
Michael Garey and David S. Johnson. The rectilinear [S]{}teiner tree problem is [NP]{}-complete. , 32(4):826–834, 1977. [](http://dx.doi.org/10.1137/0132071).
Michael R. Garey and David S. Johnson. . Freeman, San Francisco, 1979.
Michael R. Garey, David S. Johnson, and Larry J. Stockmeyer. Some simplified [NP]{}-complete graph problems. , 1(3):237–267, 1976. A preliminary version appeared in STOC 1974. [](http://dx.doi.org/10.1016/0304-3975(76)90059-1).
Fănică Gavril. Algorithms for minimum coloring, maximum clique, minimum covering by cliques, and maximum independent set of a chordal graph. , 1(2):180–187, 1972. [](http://dx.doi.org/10.1137/0201013).
Martin Charles Golumbic. Trivially perfect graphs. , 24(1):105–107, 1978. [](http://dx.doi.org/10.1016/0012-365X(78)90178-4).
Jens Gustedt. On the pathwidth of chordal graphs. , 45(3):233–248, 1993. [](http://dx.doi.org/10.1016/0166-218X(93)90012-D).
András Hajnal and János Surányi. Über die Auflösung von Graphen in vollständige Teilgraphen. , 1:113–121, 1958.
Bart M. P. Jansen and Marcin Pilipczuk. Approximation and kernelization for chordal vertex deletion. In Klein [@soda-2017], pages 1399–1418. [](http://dx.doi.org/10.1137/1.9781611974782.91).
Philip N. Klein, editor. . [SIAM]{}, 2017. [](http://dx.doi.org/10.1137/1.9781611974782).
John M. Lewis and Mihalis Yannakakis. The node-deletion problem for hereditary properties is [NP]{}-complete. , 20(2):219–230, 1980. Preliminary versions independently presented in STOC 1978. [](http://dx.doi.org/10.1016/0022-0000(80)90060-4).
Carsten Lund and Mihalis Yannakakis. The approximation of maximum subgraph problems. In Andrzej Lingas, Rolf G. Karlsson, and Svante Carlsson, editors, [*Automata, Languages and Programming (ICALP)*]{}, volume 700 of [*LNCS*]{}, pages 40–51. Springer, 1993. [](http://dx.doi.org/10.1007/3-540-56939-1_60).
Haiko M[ü]{}ller. Hamiltonian circuits in chordal bipartite graphs. , 156:291–298, 1996. [](http://dx.doi.org/10.1016/0012-365X(95)00057-4).
Ren[é]{} van Bevern, Christian Komusiewicz, Hannes Moser, and Rolf Niedermeier. Measuring indifference: Unit interval vertex deletion. In Dimitrios M. Thilikos, editor, [*Graph-Theoretic Concepts in Computer Science (WG)*]{}, volume 6410 of [*LNCS*]{}, pages 232–243. Springer, 2010. [](http://dx.doi.org/10.1007/978-3-642-16926-7_22).
Mihalis Yannakakis. Computing the minimum fill-in is [NP]{}-complete. , 2(1):77–79, 1981. [](http://dx.doi.org/10.1137/0602010).
Mihalis Yannakakis. Node-deletion problems on bipartite graphs. , 10(2):310–327, 1981. [](http://dx.doi.org/10.1137/0210022).
Mihalis Yannakakis and Fănică Gavril. The maximum $k$-colorable subgraph problem for chordal graphs. , 24(2):133–137, 1987. [](http://dx.doi.org/10.1016/0020-0190(87)90107-4).
Jie You, Jianxin Wang, and Yixin Cao. Approximate association via dissociation. , 219:202–209, 2017. A preliminary version appeared in WG 2016. [](http://dx.doi.org/10.1016/j.dam.2016.11.007).
[^1]: We should not confuse the self-complementary property of graph classes with the self-complementary property of graphs—a graph is *self-complementary* if it is isomorphic to its complement. For example, the statement “*threshold graphs are self-complementary*” is incorrect, because most threshold graphs are not isomorphic to their complements, though the latter are necessarily threshold graphs.
[^2]: We can find a maximum complete bipartite subgraph from a bipartite graph as follows. We find a maximum independent set of $G$ and a maximum independent set of its [bipartite complement]{} (i.e., after taking its complement, we discard all edges among the two parts, so the resulting graph remains bipartite with the same partition), and then return the larger of them [@yannakakis-81-bipartite-node-deletion].
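The routine in this footnote can be sketched with networkx (a sketch under the assumption that maximum independent sets in bipartite graphs are computed via König's theorem from a maximum matching; function names are ours):

```python
import networkx as nx

def max_independent_set_bipartite(G, top):
    """König's theorem: a maximum independent set is the complement of a
    minimum vertex cover, obtained here from a maximum matching."""
    matching = nx.bipartite.maximum_matching(G, top_nodes=top)
    cover = nx.bipartite.to_vertex_cover(G, matching, top_nodes=top)
    return set(G) - cover

def bipartite_complement(G, top):
    """Complement only across the bipartition, keeping both sides
    independent, as described in the footnote."""
    H = nx.Graph()
    H.add_nodes_from(G)
    bottom = set(G) - set(top)
    H.add_edges_from((u, v) for u in top for v in bottom
                     if not G.has_edge(u, v))
    return H

# C_4 as a bipartite graph: its maximum complete bipartite subgraph is
# the whole K_{2,2}, an independent set of the bipartite complement.
G = nx.cycle_graph(4)            # bipartition {0, 2} ⊎ {1, 3}
top = {0, 2}
biclique = max_independent_set_bipartite(bipartite_complement(G, top), top)
```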
[^3]: The following two paragraphs and two figures are for illustration purposes. They convey the intuition behind our reduction, but are not directly used in the arguments to follow. The reader may safely skip them.
---
abstract: 'We have entered the era of big data astronomy. Sky surveys such as the LSST, Euclid, and WFIRST will produce more imaging data than humans can ever analyze by eye. The challenges of designing such surveys are no longer merely instrumental; they also demand powerful data analysis and classification tools that can identify astronomical objects autonomously. To gradually prepare for the era of autonomous astronomy, we present LensFlow, a machine learning classification algorithm for identifying strong gravitational lenses from wide-area surveys using convolutional neural networks. We train and test the algorithm using a wide variety of strong gravitational lens configurations from simulations of lensing events. Images are processed through multiple convolutional layers which extract the feature maps necessary to assign a lens probability to each image. LensFlow provides a ranking scheme for all sources, which can be used to identify potential gravitational lens candidates while significantly reducing the number of images that have to be visually inspected. We further apply our algorithm to the *HST*/ACS i-band observations of the COSMOS field and present our sample of identified lensing candidates. The developed machine learning algorithm is much more computationally efficient than classical lens identification algorithms and is ideal for discovering such events across wide areas in current and future surveys such as LSST and WFIRST.'
author:
- Milad Pourrahmani
- Hooshang Nayyeri
- Asantha Cooray
bibliography:
- 'refs.bib'
title: '[[[L]{}]{}: A Convolutional Neural Network in Search of Strong Gravitational Lenses]{}'
---
Introduction {#intro}
============
Gravitational lensing, a prediction of Einstein’s general theory of relativity, is a very powerful tool in cosmological studies. It has been used extensively to understand various aspects of galaxy formation and evolution (e.g. [@refsdal1964gravitational; @blandford1992cosmological; @nayyeri2016candidate; @postman2012cluster; @atek2015new]). This involves accurate cosmological parameter estimation [@treu2010strong], studies of dark matter distribution from weak gravitational lensing events [@kaiser1993mapping; @velander2014cfhtlens], black-hole physics [@peng2006probing] and searches for the most distant galaxies [@coe2012clash; @oesch2015first], among others.
One of the main goals of observational cosmology is to constrain the main cosmological parameters that dictate the evolution of the Universe [@tegmark2004cosmological; @komatsu2009five; @weinberg2013observational]. Strong gravitational lensing has been utilized over the past few years to estimate and constrain these cosmological parameters [@broadhurst2005strong; @suyu2013two; @suyu2014cosmology; @goobar2016; @angnello2017; @more2017]. This is achieved through accurate lens modeling of such events and comparing the model predictions with observations (such as observations of lensing-induced time delays [@eigenbrod2005; @treu2010strong; @suyu2014cosmology; @rodney2016; @treu2016]). In a recent study, for example, @suyu2013two used combined WMAP, Keck and [*HST*]{} data on gravitational time delays in two lensed sources to constrain the Hubble constant to within $4\%$ in a $\rm \Lambda CDM$ cosmological framework.
One of the key aspects of gravitational lensing is its use as a natural telescope, boosting the observed signal and increasing the spatial resolution [@treu2010strong]. This is quite advantageous in searches for distant and/or faint objects at moderate observing costs and has been utilized extensively in various surveys in searches for such objects, the identification of which would not have been possible otherwise [@bolton2006sloan; @heymans2012cfhtlens]. Provided that better lens-finding algorithms sufficiently raise the number of identified lenses for different classes of galaxies, modeling the lenses and using spectra-stacking techniques will allow us to better understand the physical and chemical composition of farther and fainter galaxies, which in turn would advance our understanding of galaxy evolution [@wilson2017; @timmons2016]. In the past few years, deep diffraction-limited observations have also taken advantage of gravitational lensing to extend the faint end of the luminosity function of galaxies by a few orders of magnitude [@atek2015new] and to produce the deepest images of the sky ever taken across multiple bands. Strong gravitational lensing events have been observed extensively in such surveys, either as galaxy-galaxy lensing in field surveys such as the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) [@grogin2011candels; @koekemoer2011candels] and the Cosmological Evolution Survey (COSMOS) [@scoville2007cosmos; @capak2007], or as cluster lensing from observations of nearby massive clusters [@postman2012cluster; @treu2015grism; @lotz2017frontier] with the [*Hubble*]{} Space Telescope. These yield identifications of the first generations of galaxies at $z>3$ (and out to $z\sim11$; [@oesch2015first]) to study galaxy formation and evolution at the epoch of reionization. This was, in fact, one of the main motivations behind Hubble cluster lensing studies such as CLASH and Frontier Fields [@postman2012cluster; @lotz2017frontier].
The magnification provided by strong lensing can raise the observed flux by as much as a factor of a hundred (depending on the configuration). The power of gravitational lensing can also be used to detect low surface brightness emission from extended objects, such as far-infrared and radio emission from dust and molecular gas at $z\sim2-3$. This is indeed one of the main techniques for observing these systems, even in the era of powerful mm/radio telescopes such as ALMA.
Strongly lensed galaxies are normally targeted and identified from dedicated surveys [@bolton2006sloan]. Traditionally, these lens identifications are either catalog-based, in which lensing events are identified by looking for objects in a lensing configuration, or pixel-based, with the search starting from a set of pixels. These lensing searches are normally computationally challenging, in that individual pixels are constantly compared with adjacent ones, and they can be biased towards a given population and/or brighter objects. Recent far-infrared wide-area observations (such as with *Herschel*) significantly advanced these searches for lensed galaxies by adopting a simple, efficient selection of lensed candidates through observations of excessive flux in the far-infrared (an indication of strong lensing supported by number count distributions; [@nayyeri2016candidate; @wardlow2012hermes]). However, such surveys are also biased towards populations of red dusty star-forming galaxies (missing any blue lenses) and are not always available across the full sky (the targeted *Herschel* surveys had $\rm \sim0.2-0.4\,deg^{-2}$ lensing events, much lower than expected from optical surveys). Given that tests of cosmological models require a simple, unbiased selection function, it is important to have a complete, unbiased catalog of lensing events.
![Schematic representation of an artificial neuron. The weighted sum of the neurons in the previous layer (green circles), plus the internal bias of the neuron, are mapped as the output of the neuron by an activation function. This model is captured by Equation \[eq:activation\]. During the learning process, weights and biases of the neurons will be adjusted to achieve the desired network output.[]{data-label="fig:neuron"}](single_neuron.png){width="50.00000%"}
We have entered the era of big data astronomy. Sky surveys such as the LSST, Euclid, and WFIRST will produce more imaging data than humans can ever analyze by eye. The challenges of designing such surveys are no longer merely instrumental; they also demand powerful data analysis and classification tools that can identify astronomical objects autonomously. Fortunately, computer vision has drastically improved in the last couple of years, making autonomous astronomy possible. These have been the most exciting years in the field of machine learning (ML). Researchers from both the public and the private sectors have achieved landmarks in developing image recognition/classification techniques. One of the most exciting recent events in the ML community was the release of [TensorFlow]{} by Google, a parallel processing platform designed for the development of fast deep learning algorithms [@abadi2016tensorflow]. Packages like [TensorFlow]{}, [Caffe]{}, and others have enabled researchers to develop very complex and fast classification algorithms. Among these deep learning programs, ConvNets have deservedly received a lot of attention in many fields of science and industry in the past few years [@krizhevsky2012imagenet]. Complex ConvNets such as [GoogleNet]{} and [AlexNet]{}, which are publicly available, have achieved superhuman performance on the task of image classification. Google’s [TensorFlow]{} has made it possible to easily develop parallelized deep learning algorithms which, if integrated with Google’s Tensor Processing Units (TPUs), could address the data mining challenges in the field of astronomy. The field of astronomy and observational astrophysics should take advantage of these new image classification algorithms. For instance, our team has been working on developing LensFlow, a ConvNet that can be used to search for strong gravitational lenses. Our work will be made publicly available on GitHub after publication.
Non-machine-learning computer algorithms have been previously used for finding gravitational lenses [@alard2006; @arc_finder2012]. For instance, @arc_finder2012 use two algorithms called [RingFinder]{} and [ArcFinder]{}. The former uses color information and the latter detects arc-like pixels. Initially, [ArcFinder]{} polishes the images by convolving them with a smoothing kernel. For each pixel, an estimator of elongation is calculated by taking the ratio between the sum of the flux of a few pixels along the horizontal line and the maximum value of a few nearby pixels along the vertical line passing through the pixel at hand. This process is repeated for all pixels, and those below a specified elongation threshold are set to zero to create a sharp arc map. An arc map that satisfies thresholds on arc properties such as size and surface brightness is selected as an arc candidate for further visual inspection. Unlike deep learning, such classical algorithms are not easily parallelized, and they can be computationally more expensive, depending on the depth of the ConvNets used for comparison. They also require threshold tuning, which may cause insensitivity to smaller arcs. However, they do not require a massive training dataset. Even though it is more challenging, creating a large dataset can eliminate biases toward certain arc-lens morphologies. As we will discuss in Section \[cosmos\], these algorithms do suffer from the same lens contaminants as ours. The hope is that a machine has the capacity to improve its performance with better training datasets and improved architectures while remaining computationally efficient, whereas such improvements are not trivial for classical algorithms.
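A toy re-implementation of this elongation estimator (the box smoothing and all parameter values are our stand-ins, not the actual [ArcFinder]{} settings) illustrates how a horizontal arc-like feature survives the thresholding while a point source does not:

```python
import numpy as np

def arc_map(image, half_len=3, threshold=4.0):
    """Toy version of the ArcFinder-style estimator described above.
    Smooth with a 3x3 box filter, then for each pixel take the ratio of
    the summed flux along a short horizontal segment to the peak flux
    along the vertical one, and zero out pixels below the threshold."""
    p = np.pad(image, 1)
    sm = sum(p[dy:dy + image.shape[0], dx:dx + image.shape[1]]
             for dy in range(3) for dx in range(3)) / 9.0
    h, w = sm.shape
    elong = np.zeros_like(sm)
    for y in range(h):
        for x in range(half_len, w - half_len):
            horiz = sm[y, x - half_len:x + half_len + 1].sum()
            vert = sm[max(0, y - half_len):y + half_len + 1, x].max()
            if vert > 0:
                elong[y, x] = horiz / vert
    return np.where(elong >= threshold, elong, 0.0)

# A horizontal bar (arc stand-in) survives; an isolated point does not.
img = np.zeros((11, 11))
img[5, 2:9] = 1.0        # "arc"
img[2, 2] = 1.0          # point source
amap = arc_map(img)
```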
Other researchers [@1st; @2nd; @3rd] also find deep learning a suitable solution for finding gravitational lenses. @3rd use residual ConvNets with 46 layers. Residual ConvNets are modified ConvNets that do not suffer from layer saturation as ordinary ConvNets do: after adding more than 50 layers, the accuracy of ordinary ConvNets no longer improves and training becomes more challenging. @he2016deep were able to overcome this issue by providing residual maps in between layers, an approach employed by @3rd. They have simulated LSST mock observations in a single band and have trained and tested their network on these images. @2nd have trained their ConvNet using multiple color bands and have applied it to the Canada-France-Hawaii Telescope Legacy Survey. @1st have searched for lenses in the Kilo-Degree Survey by training their ConvNet on cataloged luminous red galaxies.
In our independently developed work, we focus on the morphology of the lenses and only rely on one color band, similar to @1st and @3rd. Our lens simulation method is very similar to that of @1st: we both merge simulated arcs with real images of galaxies to preserve the complexity of the physical data. In contrast to others, we do not discriminate against different sources found in the COSMOS field. Artifacts, stars, and other sources have been included in our training dataset, so LensFlow can be directly applied to fields without the need for a catalog with galaxy type information. The depth of our ConvNet is comparable to those of @1st and @2nd but shallower than that of @3rd. As mentioned in @2nd, the morphology of lenses is much simpler than the morphology of everyday objects and human faces, for which extremely deep ConvNets are developed. However, the cost-to-performance ratio of ConvNets of varying depth has not been studied yet. The effectiveness of deeper ConvNets cannot be compared between ours (and [@1st]) and @3rd, since the latter have not applied their algorithm to observational data. However, they have studied the change in the performance of their ConvNet by varying the Einstein radii and signal-to-noise ratios of their lenses.
This paper is organized as follows. In Section \[intro to ML\], we explain the principal concepts underlying neural networks, supervised learning, and ConvNets. A supervised learning algorithm requires a large sample of labeled images, known as the training dataset. In Section \[datasets\], we discuss the procedure we have taken to create our training and testing datasets. Before feeding the images to a ConvNet, they must be normalized and enhanced. The details of these methods are discussed in Section \[normalization\]. Section \[convnet arch\] lays out the architecture of LensFlow, and Section \[acc\] illustrates LensFlow’s performance on the testing dataset. We conclude by sharing and analyzing the scan results of the COSMOS field in Section \[cosmos\]. Throughout this paper, we assume a standard cosmology with $H_0=70\,\text{km}\,\text{s}^{-1}\text{Mpc}^{-1}$, $\Omega_m=0.3$ and $\Omega_\Lambda=0.7$. Magnitudes are in the AB system, where $\text{m}_{\rm AB}=23.9-2.5\times\text{log}(f_{\nu}/1\mu \text{Jy})$ [@oke1983secondary].
![Different filters used in the first convolutional layer. The pixels in each box represent the weights of a convolving neuron, which are connected to a $5 \times 5$ region of the input image. As these filters convolve over the entire input image, they generate 16 feature maps. Red pixels have a positive contribution and blue pixels have a negative contribution toward the activation of the convolving neuron. These filters are helpful for edge and texture recognition.[]{data-label="fig:filters1"}](filters1.png){width="50.00000%"}
{width="50.00000%"}
\[fig:feature\_maps\]
Deep Learning Algorithms {#intro to ML}
========================
Artificial neural networks are inspired by biological neurons. Just like biological neurons, artificial neurons receive input signals and send an output signal to other neurons (see Figure \[fig:neuron\]). The synaptic connections between neurons are known as weights and the output of a neuron is known as its activation. To reduce the computational time and simplify neural network models, neurons are placed in consecutive layers rather than having a connection with every other neuron. Neurons in one layer are not connected to each other or to neurons in arbitrary layers; they may only send their signal to the neurons in the succeeding layer. A neuron receives the weighted sum of the activations of all the neurons in the previous layer, adds an internal parameter known as the bias, and maps this sum to a value computed by an activation function (e.g. sigmoid, hyperbolic tangent, rectilinear, softmax). This model can be stated mathematically by the following equation: $$\label{eq:activation}
a_{i}^{l} = f(\sum_j{a_{j}^{l-1} w_{j \rightarrow i}^{l} } + b_{i}^{l} ).$$ Here, $a_{i}^{l}$ is the activation of the neuron at hand (i.e. the $i$’th neuron in the $l$’th layer), $f$ is the activation function of this neuron, $a^{l-1}_j$ is the activation of neuron $j$ in layer $l-1$ (the previous layer), $w_{j \rightarrow i}^{l}$ is the synaptic weight connecting the $i$’th neuron in layer $l$ to the $j$’th neuron in layer $l-1$, and $b_{i}^{l}$ is the bias of the neuron, which adjusts its activation sensitivity. The first layer, i.e. the input layer, in a deep learning neural net acts as a sensory layer, analogous to the retina. As it gets analyzed, the information from the input layer travels through multiple layers until it reaches the final layer, called the classification layer. Each class of images corresponds to a classifying neuron; in our case, one neuron corresponds to unlensed and another to lensed images. The neuron with the highest output determines which class an input image is placed in.
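The forward pass of Equation \[eq:activation\] for an entire layer can be written as a single matrix operation. The following NumPy sketch uses made-up layer sizes, random weights, and toy inputs, not LensFlow parameters:

```python
import numpy as np

def layer_forward(a_prev, W, b, f=np.tanh):
    """a^l = f(W^T a^{l-1} + b): each neuron i receives the weighted sum
    of the previous layer's activations plus its own bias b_i."""
    return f(a_prev @ W + b)

rng = np.random.default_rng(0)
a0 = rng.normal(size=4)          # toy activations of layer l-1
W = rng.normal(size=(4, 3))      # weights w_{j -> i}: 4 inputs feeding 3 neurons
b = np.zeros(3)                  # biases b_i
a1 = layer_forward(a0, W, b)     # activations of layer l
```

With the hyperbolic tangent as `f`, every activation is bounded between $-1$ and $1$, which keeps signals well scaled as they travel through consecutive layers.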
A neural net learns how to classify images by adjusting the weights between its neurons and the biases within them, with one goal in mind: minimizing the loss function $C(\boldsymbol{x}, \boldsymbol{y})$. The loss function, sometimes called the cost function, can take many forms, but it has to capture the misfiring of the classification neurons, i.e. the deviation of the predicted class from the target class. This is why such algorithms are known as supervised learning algorithms, in contrast to unsupervised techniques. A common choice for the loss function is the cross-entropy loss function with the following form [@nielsen]: $$\label{eq:cost1}
\resizebox{.85\hsize}{!}{$ C(\boldsymbol{x}, \boldsymbol{y}) = -\sum_{j \in \{\text{unlensed},\, \text{lensed}\}}{y_j \ln a_{j}^{L} + (1-y_j) \ln (1 - a_{j}^{L})} $}.$$ $a_{j}^{L}$ is the activation of neurons in the final (classifying) layer. $\boldsymbol{x}$ is the input data in vector form and $\boldsymbol{y}$ represents the desired activations of the two classifying neurons. Of course, this function depends on the architecture of the neural net, the weights, and the biases, but these have not been expressed explicitly. As an example, if an image is a lens, its target output has to be $(0.0, 1.0)$, meaning the activation of the unlensed neuron should be zero and the activation of the lensed neuron should be unity. During the training process of a neural net, images from a training dataset are presented to the network and the weights and biases are adjusted to minimize the loss function for those images. Two challenges arise: the parameter space is massive, and a change in one parameter of a neuron affects the activations of a series of neurons in other layers. The first challenge is addressed by minimization algorithms such as stochastic gradient descent (SGD) and the second via back-propagation. We will not go into the details of these two techniques, but it is worth emphasizing the stochasticity of SGD. Stochasticity refers to randomly selecting images from the training set and bundling them in one batch. The loss function for the batch is the average of the loss functions of the individual images. It is very important to use a batch rather than training the neural net with one image at a time. Loosely speaking, using a batch drastically improves the classification accuracy of the network since the neurons learn the features in images rather than memorizing examples.
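A minimal sketch of this batch-averaged cross-entropy, written with the conventional overall minus sign so that smaller is better (all activations below are toy values, not network outputs):

```python
import numpy as np

def cross_entropy(a, y, eps=1e-12):
    """Batch-averaged cross-entropy loss.

    a : activations of the two classifying neurons, shape (batch, 2)
    y : target activations, e.g. (0, 1) for a lensed image
    """
    a = np.clip(a, eps, 1.0 - eps)   # avoid log(0)
    per_image = -np.sum(y * np.log(a) + (1 - y) * np.log(1 - a), axis=1)
    return per_image.mean()          # batch loss = average over the images

y = np.array([[0.0, 1.0]])                              # target: a lensed image
confident = cross_entropy(np.array([[0.01, 0.99]]), y)  # nearly perfect prediction
wrong = cross_entropy(np.array([[0.60, 0.40]]), y)      # misclassification
```

A confidently correct prediction contributes a loss near zero, while a misclassification is penalized heavily, which is exactly the gradient signal SGD needs.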
ConvNets are a class of neural networks with multiple convolutional layers. A convolutional layer consists of a set of convolving neurons (on the order of 10 neurons), each of which is connected to a small rectangular region of an image. The set of weights of a convolving neuron is known as a filter and is subject to change as the ConvNet learns. A filter scans an entire image by striding (convolving with specified steps) over the image and assembling its outputs into an image known as a feature map. Feature maps contain information such as texture and edges. See Figure \[fig:filters1\] for an example of a set of filters in a LensFlow convolutional layer. Two extracted feature maps of a physical lens (not simulated) are shown in Figure \[fig:feature\_maps\].
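The stride-and-convolve operation can be sketched directly. This toy implementation uses stride 1, a "valid" scan region, and a made-up hand-written filter rather than one of LensFlow's learned filters:

```python
import numpy as np

def convolve(image, filt, stride=1, f=np.tanh, bias=0.0):
    """Slide one filter across the image; each placement of the filter
    yields one pixel of the resulting feature map."""
    k = filt.shape[0]
    h = (image.shape[0] - k) // stride + 1
    w = (image.shape[1] - k) // stride + 1
    fmap = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = image[i*stride:i*stride + k, j*stride:j*stride + k]
            fmap[i, j] = f(np.sum(patch * filt) + bias)
    return fmap

image = np.random.default_rng(1).normal(size=(50, 50))
filt = np.zeros((5, 5))
filt[2, :] = 1.0                  # a crude horizontal-edge-style filter
fmap = convolve(image, filt)      # one feature map; a layer of 16 filters yields 16
```

A $5 \times 5$ filter over a $50 \times 50$ image with stride 1 produces a $46 \times 46$ feature map; in practice convolutional layers often pad the borders to preserve the image size.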
![Anti-Gaussian image enhancement technique. The bright source at the center of the original (bottom) image has been dimmed by an anti-Gaussian function (middle), which attenuates central pixel values (see Equation \[eqn:antigaussian\]). In addition, gamma correction has been employed to adjust the contrast, resulting in an enhanced (top) image where the arcs stand out. Such enhanced and normalized images are the inputs of the ConvNet.[]{data-label="fig:antigaussian"}](enh_eg.png){width="50.00000%"}
Methodology
===========
Training and Testing Datasets {#datasets}
-----------------------------
Neural networks learn to classify data from examples, referred to as the training dataset. A training dataset usually contains a few thousand pre-classified images. Our training dataset contains $15,000$ unlensed sources and $15,000$ lensed sources. To make this dataset, we used [SExtractor]{} to obtain a catalog of sources from a few high-quality central tiles as well as a few low-quality edge tiles, so as to include noisy images and artifacts. A cutout of $200 \times 200$ pixels was made around each identified source and stored individually, labeled as unlensed. After training and finalizing the architecture of the ConvNet, source-extraction software is no longer necessary, since an entire tile can be divided into smaller tiles and scanned to identify locations with high lensing probability.
Creating lenses is more challenging and more involved. To create these lenses, we had two options: we could either artificially boost the number of known lenses or simulate them. We tried both methods. For the first method, we used 17 out of the 18 lenses discussed in the @cosmos2011 paper.[^1] Since the number of known lenses is very limited, we formed $200 \times 200$-pixel cutouts by applying all combinations of the following transformations: (a) rotating from 0 to 360 degrees in steps of 30 degrees, (b) shifting the center by $0$, $\pm10$, and $\pm 25$ pixels in the horizontal and vertical directions, (c) scaling by 0.8, 1, and 1.1, and (d) adding a small amount of Gaussian noise after the transformation. This combination results in 900 transformed images per lens, boosting our training dataset to about fifteen thousand.
\[fig:convnet\]
------------------------------------------------------- -------------------------------------------------------
{width="50.00000%"}\[fig:acc\] {width="50.00000%"}\[fig:roc\]
------------------------------------------------------- -------------------------------------------------------
\[fig:roc\]
![Normalized ranking of tracer lenses by LensFlow. LensFlow ranks images based on their lensing probabilities: the highest probability has a rank of 1, the second highest a rank of 2, etc. The normalized rank (i.e. absolute rank divided by the total number of unlensed and lensed samples) of the top 10 (out of 15) identified COSMOS lenses that were added to 3488 COSMOS sources is plotted. The blue lines show the result of scanning the images once and the orange lines represent scanning the 8 symmetry-group transformations of the square applied to each image to artificially increase the number of data points. No significant improvement is detected. This plot indicates that LensFlow can place the majority of the physical lenses in the top $8\%$.[]{data-label="fig:top_ranks"}](top_ranks.png){width="45.00000%"}
However, the training dataset built with the transformation method does not contain a wide range of lenses, and lenses with different structures might be missed. For this reason, we have also created a simulated training dataset using [LensTool]{} [@lenstool1; @lenstool2; @lenstool3]. [LensTool]{} receives the input parameters of the lensing model and generates a lensed image of the background galaxy without a foreground lensing source. We will refer to these as arcs. Lens model parameters such as the redshifts of the background and foreground galaxies, their ellipticities, orientations, and their relative positions were randomly varied to create a wide range of arcs, from complete Einstein rings with different radii to arcs with different orientations and sizes. The code for generating these arcs, which includes the range of model parameters, will be included in our online distribution. After generating a wide range of arcs, they were merged with raw images. Both sets of images, the raw sources and the simulated arcs, were normalized, and the pixels of the arc images were multiplied by a random number between $0.3$ and $0.8$ so that a range of relative arc-to-foreground luminosities would be captured in the training set. For a limited number of generated lenses, other ranges were used to create extremely faint and extremely bright lenses. $85\%$ of the lenses were generated by selecting elliptical sources while the rest were sources randomly selected from tile 55. A bias like this is necessary since the foregrounds of the known COSMOS lenses are all elliptical. After examining the simulated lenses by eye and eliminating lenses with unnatural geometry, an eightfold transformation was performed to increase the number of lenses to fifteen thousand. These transformations are the 8 elements of the symmetry group of the square, namely: $0^\circ$, $90^\circ$, $180^\circ$ and $270^\circ$ rotations, and horizontal, vertical, diagonal and anti-diagonal mirroring.
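The eightfold transformation can be sketched with NumPy array operations; this is a minimal illustration of the symmetry group of the square, not LensFlow's distributed augmentation code:

```python
import numpy as np

def eightfold(img):
    """The 8 symmetry-group transformations of the square: rotations by
    0/90/180/270 degrees, plus horizontal, vertical, diagonal (transpose)
    and anti-diagonal mirroring."""
    return [np.rot90(img, k) for k in range(4)] + [
        np.fliplr(img),        # horizontal mirror
        np.flipud(img),        # vertical mirror
        img.T,                 # diagonal mirror
        np.rot90(img, 2).T,    # anti-diagonal mirror (rotate 180, then transpose)
    ]

views = eightfold(np.arange(9).reshape(3, 3))   # 8 distinct views of a toy array
```

Because a gravitational lens remains a valid lens under every one of these transformations, each simulated cutout yields eight labeled training images at essentially no cost.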
![Examples of common lens contaminants. Because their visual geometries are similar to those of lenses, ring galaxies (1-2) and spiral galaxies (3) are often false-positive classifications of LensFlow. Galaxies with satellite or tidal interactions (4 and 6) and artifacts from bright sources (5) also contaminate the lens candidates.[]{data-label="fig:contaminants"}](contaminant.jpg){width="50.00000%"}
At the end, 500 lenses and 500 raw images were removed from the training set and placed in a separate set, called the testing dataset, to measure the performance of the ConvNet as the network is trained on the training dataset.
Image Normalization and Enhancement {#normalization}
-----------------------------------
Before inputting the images to our neural net, we normalize and enhance them. For deep learning purposes, there are different methods of image normalization to choose from, but skipping normalization is not an option: raw images come in a wide range of values, and a limited number of neurons cannot handle such variations. Moreover, the goal of a ConvNet is to learn geometric features rather than the statistical properties of the pixels, which could be unintentionally introduced while constructing the training dataset. To address these two issues, all input images must be normalized. Our method of normalization consists of two steps. In the first step, pixel values are shifted by their average so that their new mean is zero (Equation \[eqn:norm1\]). In the second step, pixel values in an image are divided by the square root of the sum of their squares, which after the first step is proportional to their standard deviation (Equation \[eqn:norm2\]). $$\label{eqn:norm1}
{p_{ij}} \leftarrow {p_{ij}} - \frac{\sum_{i j}{p_{ij}} }{i_{max} j_{max}}$$ $$\label{eqn:norm2}
{p_{ij}} \leftarrow \frac{p_{ij}}{\sqrt{ \sum_{i j}{p_{ij}^2}}}$$
$p_{ij}$ stands for the value of the pixel at row $i$ and column $j$. $i_{max}$ and $j_{max}$ indicate the total number of rows and columns, respectively. In order to improve our classification accuracy, we make the arcs stand out by applying an anti-Gaussian function (Equation \[eqn:antigaussian\]) to each image. This function attenuates the central pixels, which belong to the foreground galaxy (see Figure \[fig:antigaussian\]). The width of this function is about a quarter of the cutout width.
$$\label{eqn:antigaussian}
{p_{ij}} \leftarrow (1-e^{-((i-100)^2 +(j-100)^2)/1000}) {p_{ij}}$$
At this step, the images are ready for contrast adjustment, which is done using gamma correction. The pixels must first be shifted up in order to eliminate zero or negative pixel values.
$$\label{eqn:gamma}
{p_{ij}} \leftarrow (p_{ij} - 1.01 \min(\boldsymbol{p}) )^{0.15}$$
The normalization steps are applied again and, at the end, the negative pixels are dropped: $$\label{eq:gamma}
{p_{ij}} \leftarrow \max(p_{ij}, 0)$$ This improves the image quality by removing low-intensity pixels, which mostly contain noise. The images may now be input to the neural net for training or for classification of new data. See Figure \[fig:antigaussian\] for an example of image normalization and enhancement.
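Assuming $200 \times 200$ cutouts centred at pixel $(100, 100)$, the whole chain — normalization, anti-Gaussian attenuation, gamma correction, re-normalization, and clipping of negative pixels — can be sketched as follows. The positive shift before the gamma step is implemented here by subtracting $1.01 \min(\boldsymbol{p})$, which guarantees strictly positive values; the input below is toy random data:

```python
import numpy as np

def normalize(p):
    p = p - p.mean()                        # Equation (norm1): zero mean
    return p / np.sqrt((p ** 2).sum())      # Equation (norm2)

def enhance(p, gamma=0.15):
    """Normalization and enhancement pipeline for a 200x200 cutout."""
    p = normalize(p)
    i, j = np.indices(p.shape)
    antigauss = 1 - np.exp(-((i - 100) ** 2 + (j - 100) ** 2) / 1000)
    p = antigauss * p                       # attenuate the central (foreground) pixels
    p = (p - 1.01 * p.min()) ** gamma       # shift strictly positive, gamma-correct
    p = normalize(p)
    return np.maximum(p, 0)                 # drop the (noisy) negative pixels

out = enhance(np.random.default_rng(2).normal(size=(200, 200)))
```

Applying the identical deterministic pipeline to every cutout, in training and in scanning alike, is what keeps the learned filters comparable across tiles.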
---------------------- ----------------- ----------------- ------------- ----------------- -----------------------
Lens Object ID Right Ascension Declination Einstein Radius Magnitude$^{\dagger}$
(deg) (deg) (arcsec) (AB)
$1^*$ COSMOS0108+4029 150.2849 2.6749 1.51 19.08
$2 $ COSMOS5844+3753 149.6860 1.6315 1.53 21.65
$3^*$ COSMOS5831+4334 149.6298 1.7263 1.17 20.76
$4 $ COSMOS0142+5447 150.4285 1.9133 1.49 18.67
$5^*$ COSMOS0124+5121 150.3522 1.8559 0.79 22.31
$6^*$ COSMOS0047+5023 150.1986 1.8398 1.71 20.64
$7 $ COSMOS0027+0513 150.1140 2.0871 1.80 18.87
$8^*$ COSMOS0208+1422 150.5355 2.2396 1.54 20.13
$9^*$ COSMOS0056+1226 150.2366 2.2072 2.04 18.78
$10 $ COSMOS0237+2652 150.6560 2.4478 1.88 20.81
$11^*$ COSMOS0012+2015 150.0526 2.3377 0.92 19.37
$12 $ COSMOS5918+1911 149.8272 2.3198 1.89 19.86
$13 $ COSMOS0211+2955 150.5486 2.4986 1.91 20.82
$14 $ COSMOS0157+3510 150.4879 2.5863 1.20 20.68
$15^*$ COSMOS0018+3845 150.0770 2.6460 1.26 23.62
$16 $ COSMOS5844+3753 149.6860 1.6315 1.45 21.65
$17 $ COSMOS0247+5601 150.6974 1.9337 0.82 20.24
$18 $ COSMOS0229+1441 150.6260 2.2448 0.51 26.67
$19 $ COSMOS5954+1128 149.9791 2.1913 0.97 21.19
$20^*$ COSMOS0038+4133 150.1595 2.6927 0.55 20.51
$21^*$ COSMOS5921+0638 149.8407 2.1106 0.72 20.43
\[tab:lens\_coords\]
---------------------- ----------------- ----------------- ------------- ----------------- -----------------------
[Notes. ${}^*$: also identified in @cosmos_paper. $^{\dagger}$: from ACS i-band.]{}
Architecture of LensFlow ConvNet {#convnet arch}
--------------------------------
The architecture of the data determines the dimensionality of the ConvNet layers. We use $200 \times 200 \times 1$ images, where $1$ indicates the number of color channels[^2]. In our code, we have localized these parameters as input variables to ease the transition from one to multiple color bands. Classifying lenses with multiple color bands will be easier and more accurate since foreground and background sources have a color contrast. However, we have chosen to use one color channel so our algorithm can be sensitive to geometry rather than color contrast, in order to expand its applicability to a wider range of bands and to eliminate its need for multi-band images when they are unavailable. As can be seen in Figure \[fig:convnet\], after normalizing and enhancing these single-channeled images, LensFlow applies an average-pooling of kernel $4 \times 4$ and stride of $4$. This means that the image is divided into $4 \times 4$ segments and the average of each segment becomes a pixel of the down-sampled output image. These down-sampled images are then fed to the first convolutional layer, which consists of sixteen $5 \times 5$ filters. The hyperbolic tangent function is chosen as the activation function for the neurons in this layer as well as the upcoming convolutional layers. This layer outputs sixteen feature maps, which should be interpreted as one image with 16 feature channels, i.e. a $50 \times 50 \times 16$ image using our notation above. These feature maps are down-sampled using a max-pooling layer of size $2 \times 2$ and stride of $2$, resulting in a $25 \times 25 \times 16$ reduced feature map. This means the input of the max-pooling layer is divided into $2 \times 2$ tiles whose maximum values are selected to form the down-sampled feature maps.
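The two pooling operations described above differ only in the reduction applied to each tile. A minimal NumPy sketch, on toy random input:

```python
import numpy as np

def pool(img, k, reduce=np.mean):
    """Divide img into k x k tiles and reduce each tile to a single pixel
    (np.mean gives average-pooling, np.max gives max-pooling)."""
    h, w = img.shape[0] // k, img.shape[1] // k
    tiles = img[:h * k, :w * k].reshape(h, k, w, k)
    return reduce(tiles, axis=(1, 3))

img = np.random.default_rng(3).normal(size=(200, 200))
down = pool(img, 4)                  # 4x4 average-pooling: 200x200 -> 50x50
reduced = pool(down, 2, np.max)      # 2x2 max-pooling: 50x50 -> 25x25
```

Average-pooling at the input smooths pixel noise while shrinking the image sixteenfold; max-pooling after a convolutional layer instead keeps the strongest filter response in each tile.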
This $25 \times 25 \times 16$ feature map is inputted to the second convolutional layer with twenty-five $5 \times 5$ filters, where each filter scans all channels by combining them linearly, hence outputting a $25 \times 25 \times 25$ feature map to be down-sampled to $13 \times 13 \times 25$. Another convolutional layer with thirty-six $4 \times 4$ filters paired with a max-pooling layer follows, reducing the data size to a $7 \times 7 \times 36$ feature map. This information is flattened into a 1-D array, which is fully connected to 128 sigmoid neurons. The output of these neurons is inputted to the classifying layer of 2 neurons with the softmax activation function: $$\sigma_c(\mathbf{Z}) = \frac{e^{Z_c}}{e^{Z_{\text{unlensed}}} + e^{Z_{\text{lensed}}}} \, .$$ Here, each component of $\mathbf{Z}$ is the total weighted input plus the bias of the corresponding classifying neuron, $Z_c = \sum_j{a_{j}^{L-1} w_{j \rightarrow c}^{L} } + b_{c}^{L}$, where $c$ specifies whether we are referring to the unlensed or the lensed neuron. This function enhances the training process and assigns a pseudo-probabilistic number to each class (since the probabilities across all classes add up to 1). The higher $\sigma_{c = \text{lensed}}$ is for an image, the higher its probability of being a lens, which can be used to rank the images by lensing probability.
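The softmax mapping from the weighted inputs $\mathbf{Z}$ to pseudo-probabilities can be sketched as follows, with toy input values in place of real network outputs:

```python
import numpy as np

def softmax(z):
    """sigma_c = exp(z_c) / sum over classes of exp(z); shifting by max(z)
    leaves the result unchanged but avoids numerical overflow."""
    e = np.exp(z - z.max())
    return e / e.sum()

sigma = softmax(np.array([0.3, 2.1]))   # toy [unlensed, lensed] weighted inputs
```

Because the two outputs always sum to one, the lensed-neuron output alone suffices to rank all scanned images by lensing probability.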
Measuring Performance {#acc}
---------------------
To optimize our ConvNet, we have chosen a cross-entropy function as our loss function, which we minimize using the Adam optimizer. During the training phase, 64 unlensed and 64 lensed images were placed in a batch of 128 images. This combination technique prevents under- or over-representation of classes even if the classes contain different numbers of training examples. This process was iterated one thousand times. An example of accuracy as a function of the number of iterations is graphed in Figure \[fig:acc\], where the accuracy plateaus at $91.5\%$. To obtain this accuracy, any image with a lensing probability lower/higher than $0.5$ was labeled unlensed/lensed. These predicted labels were compared against the true label of each image. Thus, accuracy is defined as the number of correct classifications over the number of images in the testing set. $$\label{eq:acc}
\text{accuracy} \equiv \frac{N(\text{lensed}\,|\,p_\text{lens} > 0.5) + N(\text{unlensed}\,|\,p_\text{lens} \leq 0.5)}{ N(\text{lensed}) + N(\text{unlensed})}$$ This result was achieved with “ReLU”[^3] neurons in the convolutional layers. After switching to the hyperbolic tangent, our results improved to $96\%$, meaning that for every 100 test images, 4 were misclassified. If the testing dataset contains simulated lenses, the reader should not interpret these accuracies as a measure of physical lens classification performance, neither for our paper nor for others. Unlike physical lenses, simulated lenses may contain statistical artifacts and are limited in shape. Hence, the true performance of a classifier should be measured by its ability to locate physical lenses in a sufficiently large field.
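A minimal sketch of this accuracy measure, counting correct classifications in both classes as in the verbal definition above (the outputs and labels below are toy values):

```python
import numpy as np

def accuracy(p_lens, is_lens, threshold=0.5):
    """Fraction of images whose predicted label (lens-neuron output above
    the threshold) matches the true label."""
    predicted = p_lens > threshold
    return float(np.mean(predicted == is_lens))

p_lens = np.array([0.9, 0.2, 0.7, 0.4])        # toy lens-neuron outputs
truth = np.array([True, False, False, True])   # toy true labels
acc = accuracy(p_lens, truth)                  # 2 of 4 correct
```

On a balanced testing set such as ours, this plain accuracy is informative; on a heavily imbalanced field scan, ranking by `p_lens` (as done in the next section) is the more meaningful diagnostic.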
To optimize computational time while addressing this issue, we mixed 15 known lenses from the COSMOS field [@cosmos2011] with about 3,500 images from the same field and assigned to each image the lensing probability obtained from the output of the lens-class neuron. These images were ranked based on their lensing probabilities. The relative ranking of these lenses, i.e. their rank divided by the total number of scanned images, is plotted in Figure \[fig:top\_ranks\]. We also tried eightfold scanning, where the probabilities of the eight transformations discussed above (Section \[datasets\]) are summed to improve the accuracy. This technique did not improve the ranking of the lenses. The majority of the lenses fall within the top $8\%$, where they can be further examined by eye. In the next section, we will discuss the images that contaminate the high-rank candidates. We will also discuss remedies to further separate lenses from other sources.
Results & Discussion {#cosmos}
====================
A catalog of the strong gravitational lenses in the COSMOS field has been generated previously [@cosmos_paper] by looking at early-type bright galaxies in the redshift range $0.2 \leq z \leq 1.0$ and visually inspecting and cataloging sixty high- and low-quality lens candidates. In contrast, we have examined all sources in the [*HST*]{}/ACS i-band of the COSMOS field that are more extended than 200 pixels and are $1.5 \sigma$ brighter than the background, i.e. 230,000 images. After scanning these images with LensFlow, we inspected the top outputs and selected 21 as good-quality lens candidates, which are presented in Figure \[fig:cosmos\_lenses\]. Their coordinates and other physical parameters are also listed in Table \[tab:lens\_coords\]. The lens candidates previously identified by @cosmos_paper are marked in Table 1. Among the sixty lens candidates presented in @cosmos_paper, 25 were more extended than our image cutout size of $200 \times 200$ pixels, which translates to larger than $3 \arcsec \times 3 \arcsec$ (e.g. COSMOS0208+1422, COSMOS0009+2455), while some others were considered lens contaminants or low-quality lenses rather than good lens candidates during our visual selection process (e.g. COSMOS0055+3821). To resolve the former issue, a secondary scan can be performed by down-sampling the field to half its size to include larger lenses.
The main contaminants of our high-lensing-probability images fall into three main categories: spiral galaxies, whose spiral arms and/or ring-like structures mimic arcs; tidally interacting galaxies and satellite/nearby galaxies in a lens-like configuration; and image artifacts. Examples of these contaminants are shown in Figure \[fig:contaminants\]. Given that these contaminants are similar to many of our training examples, and that separating them by eye is time-consuming, we created another training dataset designed to eliminate such contaminants by looking only for perfect Einstein rings. After another scan of the field, we were able to identify the Einstein ring shown in the last two panels of Figure \[fig:cosmos\_lenses\].
These results suggest that, rather than increasing the number of layers in series, we should aggregate parallel ConvNets, each trained to extract lenses from a single class of contaminants. Furthermore, reducing the down-sampling factor would increase the resolution of the input images, which might widen the gap between spirals and lenses at a greater computational cost. It may also be more efficient to have separate training datasets for visually distinct lenses, as tried above, rather than one large training dataset with a variety of lenses. We will study these algorithms in future publications.
Summary
=======
In this paper, we have emphasized the importance of gravitational lenses in the field of cosmology and have presented an introduction to neural networks, including ConvNets. Furthermore, we have laid out the procedure for constructing simulated images for training and testing LensFlow. The importance and details of our normalization and enhancement methods have been discussed. We have then discussed the architecture of LensFlow, its accuracy on test data, and its performance on real data. The latter was assessed by scanning *HST*/ACS i-band images of the COSMOS field and listing the lens candidates that were visually selected from the images assigned a high lensing probability by LensFlow.
Acknowledgement {#acknowledgement .unnumbered}
===============
Financial support for this paper was provided by NSF grant AST-1213319 and GAANN P200A150121. Figures \[fig:neuron\] and \[fig:convnet\] were generated using [www.draw.io](www.draw.io). The backbone of our algorithm was initially inspired by Hvass Laboratories on GitHub [@HvassLabs]; Figure \[fig:filters1\] was plotted using their code as well. We would like to thank Pierre Baldi for valuable feedback. We would also like to thank Noah Ghotbi and Aroosa Ansari for their assistance.
Remarks on Present and Future of LensFlow Code {#appendix1}
==============================================
LensFlow is written in Python 3.5.2 and, to enhance the user interface, it was developed in a Jupyter Notebook environment, which enables the user to easily modify the code, plot, and read the documentation alongside the code. LensFlow relies on Keras, a high-level neural-network API written in Python and capable of running on top of [TensorFlow]{} [@Chollet2015]. Our product consists of three main notebooks: the Development notebook, the Arc Maker notebook, and the LensFlow notebook.
The Development notebook contains the necessary code for file management, automated source extraction, merging raw images with simulated arcs, creating training and testing datasets, and automated handling of the 81 COSMOS tiles, and it includes other helpful functions. Currently, LensFlow is in its development stage and requires a list of $200 \times 200$-pixel FITS cutouts. In future versions, we will eliminate this need, and users will be able to provide a large FITS tile with or without a catalog of identified sources, which will require less or more time, respectively, to scan the data. This can be done by convolving the ConvNet itself over a FITS tile and identifying spots with high lensing probabilities. To make this fully functional, the training dataset must include off-centered and cropped sources. However, for the purposes of training and optimizing, isolated cutouts are more useful, since sources from different tiles can be mixed without facing memory shortage and since sources have to be accessed repeatedly as one changes the architecture of the ConvNet.
The Arc Maker notebook automates the arc-creation process and uses [LensTool]{}. The LensFlow notebook contains the ConvNet and can be used to specify the architecture of the ConvNet, to train or test the ConvNet, and to search through new data for lens candidates. This notebook also contains the functions that we have used to generate some of the plots in this paper, as well as a feature to view multiple top lens candidates on one screen using DS9. This feature helps the user quickly examine the output of LensFlow. Due to restrictions imposed by [TensorFlow]{} and [LensTool]{}, LensFlow currently functions only on Linux operating systems.
[^1]: One of the lens candidates from @cosmos2011 is more similar to a tidal interaction.
[^2]: In this paper and in our code, we have adapted the $N \times H \times W \times C$ format from [TensorFlow]{} where $N$, $H$, $W$, and $C$ stand for number of input images in a batch, height, width, and number of color (or feature) channels.
[^3]: A rectified linear unit is a neuron with the following common activation function: $f(x) = \max(0, x)$.
---
abstract: 'We give a physical interpretation of the entries of the reflection $K$-matrices of Baxter’s eight-vertex model in terms of an Ising interaction at an open boundary. Although the model still defies an exact solution we nevertheless obtain the exact surface free energy from a crossing-unitarity relation. The singular part of the surface energy is described by the critical exponents $\alpha_s = 2 - \frac{\pi}{2\mu}$ and $\alpha_1 = 1 - \frac{\pi}{\mu}$, where $\mu$ controls the strength of the four-spin interaction. These values reduce to the known Ising exponents at the decoupling point $\mu=\pi/2$ and confirm the scaling relations $\alpha_s = \alpha_b + \nu$ and $\alpha_1 = \alpha_b -1$.'
address: 'Department of Mathematics, School of Mathematical Sciences, Australian National University, Canberra ACT 0200, Australia'
author:
- 'M. T. Batchelor and Y. K. Zhou[@byline]'
date: 'September 1, 1995'
title: |
Surface Critical Phenomena and Scaling\
in the Eight-Vertex Model
---
Our understanding of phase transitions and critical phenomena has been greatly enhanced by the study of exactly solved lattice models in statistical mechanics [@bbook]. Chief among these models is Baxter’s eight-vertex model, which exhibits continuously varying critical exponents [@b1]. Such exact results provide valuable insights into the key theoretical developments of universality, renormalisation and scaling. The eight-vertex model is equivalent (see Figs. 1 and 2) to two Ising models coupled together by four-spin interactions [@w; @kw]. From [@bbook; @b1; @w; @kw] the singular part of the bulk free energy $f_b$ scales as $f_b \sim |t|^{\pi/\mu}$ as $t \rightarrow 0$. Here $t$ vanishes linearly with $T - T_c$, where $T_c$ is the critical temperature. The variable $\mu$ measures the strength of the four-spin interaction $M$ via $$\begin{aligned}
\exp(2 M) = \tan (\mu/2).\end{aligned}$$ When $\mu = \pi/m$, where $m$ is an even integer, the critical behaviour is modified to $f_b \sim |t|^{\pi/\mu} \log |t|$. This is the case in the Ising limit, where $\mu = \pi/2$. The critical exponent describing the divergence of the bulk specific heat, $C_b\sim |t|^{-\alpha_b}$ as $t \rightarrow 0$, is given by $
\alpha_b = 2 - \pi/\mu,
$ with $\alpha_b = 0$ (logarithmic) for the Ising model.
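As a quick numerical illustration of these relations (a sketch, not part of the original derivations): at the decoupling point $\mu = \pi/2$, the relation $\exp(2M) = \tan(\mu/2)$ gives $M = \frac{1}{2}\ln\tan(\pi/4) = 0$, and the exponent $\alpha_b = 2 - \pi/\mu$ vanishes, recovering the logarithmic Ising specific heat:

```python
import math

def four_spin_coupling(mu):
    """Invert exp(2M) = tan(mu/2) for the four-spin interaction M."""
    return 0.5 * math.log(math.tan(mu / 2))

def alpha_b(mu):
    """Bulk specific-heat exponent alpha_b = 2 - pi/mu."""
    return 2 - math.pi / mu

M_ising = four_spin_coupling(math.pi / 2)   # decoupling point: M = 0
```

For $\mu > \pi/2$ the coupling $M$ is positive and $\alpha_b$ grows toward its Ising-violating values, which is the continuously varying behaviour the eight-vertex model is known for.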
A significant test of the scaling relations between critical exponents was given by Johnson, Krinsky and McCoy [@jkm], who derived the correlation length exponent $\nu = \frac{\pi}{2 \mu}$ for the eight-vertex model. Together with Baxter’s result for $\alpha_b$ this confirmed the validity of the bulk scaling law [@bbook; @bin; @diehl] $
\alpha_b = 2 - 2 \nu.
$ However, the situation is not so satisfactory for the [*surface*]{} critical behaviour [@bin; @diehl], as the eight-vertex model has not been solved for [*open*]{} boundary conditions as in Fig.2. Whereas integrability in the bulk is governed by solutions of the Yang-Baxter equation [@mcg; @y; @b1], integrability in the presence of boundaries is governed by solutions of both the Yang-Baxter and reflection equations [@c; @s]. $K$-matrices satisfying the reflection equations have been found for the eight-vertex model [@hy; @cg; @vg; @ik], but the diagonalisation of the transfer matrix remains a formidable problem. Here we nevertheless derive two surface critical exponents of the eight-vertex model, allowing a direct test of the proposed scaling relations between bulk and surface critical exponents [@bin; @diehl; @ws].
The relation between the bulk Boltzmann weights $a,b,c,d$ of the eight-vertex model and the Ising couplings $K,L,M$ is depicted in Fig. 1. These weights are given by [@bbook] $$\begin{aligned}
a(u)&=&\rho_0\, \theta_4(\lambda) \theta_4(u) \theta_1(\lambda-u),
\nonumber\\
b(u)&=&\rho_0\, \theta_4(\lambda) \theta_1(u) \theta_4(\lambda-u),
\nonumber\\
c(u)&=&\rho_0\, \theta_1(\lambda) \theta_4(u) \theta_4(\lambda-u),
\nonumber\\
d(u)&=&\rho_0\, \theta_1(\lambda) \theta_1(u) \theta_1(\lambda-u),
\label{abcd}\end{aligned}$$ where $\rho_0$ is a normalization factor. Here $\theta_1(u)$ and $\theta_4(u)$ are the elliptic theta functions, $$\begin{aligned}
\theta_1(u)&=& 2 \, q^{1/4} \sinh \frac{\pi u}{2 I}
\prod_{n=1}^{\infty} \left(
1-2 q^{2n} \cosh \frac{\pi u}{I}+q^{4n} \right) (1-q^{2n}), \nonumber \\
\theta_4(u)&=&\prod_{n=1}^{\infty} \left(
1-2 q^{2n-1} \cosh \frac{\pi u}{I} +q^{4n-2}\right) (1-q^{2n}),\nonumber\end{aligned}$$ of nome $q=\exp(-\pi I'/I)$, where $I$ and $I'$ are the half-period magnitudes. The principal regime is $0 < u < \lambda$, with $0 < \lambda < I'$ and $0 < q < 1$; criticality corresponds to $q \rightarrow 1$. In terms of the vertex weights, the bulk Ising couplings are given by $$\begin{aligned}
\exp(4 K) &=& \frac{a c}{b d} =
\left[\frac{\theta_4(u)}{\theta_1(u)}\right]^2, \label{defk}\\
\exp(4 L) &=& \frac{a d}{b c} =
\left[\frac{\theta_1(\lambda-u)}{\theta_4(\lambda-u)}\right]^2, \\
\exp(4 M) &=& \frac{a b}{c d} =
\left[\frac{\theta_4(\lambda)}{\theta_1(\lambda)}\right]^2.\end{aligned}$$ In the Ising limit, $M=0$ when $\lambda = \frac{1}{2} I'$, with the spectral variable $u$ controlling the anisotropy of the Ising couplings $K$ and $L$.
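The coupling relations above follow from cancellations among the products (\[abcd\]) alone, so they can be verified numerically. The following sketch evaluates the stated product formulas for $\theta_1$ and $\theta_4$ (truncated; the choice $I=1$ and the sample values of $q$, $\lambda$, $u$ in the principal regime are arbitrary):

```python
import math

# Truncated evaluation of the theta-function products quoted in the text.
def theta1(v, q, I=1.0, nmax=60):
    val = 2.0 * q**0.25 * math.sinh(math.pi * v / (2 * I))
    for n in range(1, nmax + 1):
        val *= (1 - 2 * q**(2 * n) * math.cosh(math.pi * v / I) + q**(4 * n))
        val *= (1 - q**(2 * n))
    return val

def theta4(v, q, I=1.0, nmax=60):
    val = 1.0
    for n in range(1, nmax + 1):
        val *= (1 - 2 * q**(2 * n - 1) * math.cosh(math.pi * v / I) + q**(4 * n - 2))
        val *= (1 - q**(2 * n))
    return val

q, lam, u = 0.2, 0.25, 0.1          # sample point in the principal regime

# Vertex weights (abcd), with the normalisation rho_0 set to 1
a = theta4(lam, q) * theta4(u, q) * theta1(lam - u, q)
b = theta4(lam, q) * theta1(u, q) * theta4(lam - u, q)
c = theta1(lam, q) * theta4(u, q) * theta4(lam - u, q)
d = theta1(lam, q) * theta1(u, q) * theta1(lam - u, q)

# exp(4K), exp(4L), exp(4M) as ratios of weights
assert abs(a * c / (b * d) - (theta4(u, q) / theta1(u, q))**2) < 1e-9
assert abs(a * d / (b * c) - (theta1(lam - u, q) / theta4(lam - u, q))**2) < 1e-9
assert abs(a * b / (c * d) - (theta4(lam, q) / theta1(lam, q))**2) < 1e-9
```

The normalisation $\rho_0$ drops out of every ratio, which is why it can be set to 1 here.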
For the lattice orientation of Fig. 2, the integrable boundary vertex weights can be written down from the entries of the $K$-matrix. Now for the eight-vertex model this reflection matrix is of the general form $K^-(u) = K(u;\xi_-,\eta_-,\tau_-)$, with $$\begin{aligned}
K(u;\xi,\eta,\tau)&=& \mbox{\small $\pmatrix{K_{11}(u)&K_{12}(u) \cr
K_{21}(u)&K_{22}(u)}$}\end{aligned}$$ where [@ik] $$\begin{aligned}
K_{11} &=& \rho_s\frac{\theta_1(\xi-u)}{\theta_4(\xi-u)},\\
K_{22} &=& \rho_s\frac{\theta_1(\xi+u)}{\theta_4(\xi+u)},\\
K_{12} &=&\rho_s\; \eta\; \theta_4^2(\xi) \frac{\theta_1(2u)}{\theta_4(2u)}
\frac{\left\{ \tau \left[ \theta_4^2(u) + \theta_1^2(u) \right] -
\theta_1^2(u) + \theta_4^2(u) \right\}}
{\theta_4^2(\xi) \theta_4^2(u) - \theta_1^2(\xi) \theta_1^2(u)},\\
K_{21} &=&\rho_s\; \eta\; \theta_4^2(\xi) \frac{\theta_1(2u)}{\theta_4(2u)}
\frac{\left\{ \tau \left[ \theta_4^2(u) + \theta_1^2(u) \right] +
\theta_1^2(u) - \theta_4^2(u) \right\}}
{\theta_4^2(\xi) \theta_4^2(u) - \theta_1^2(\xi) \theta_1^2(u)}.
\label{kel}\end{aligned}$$ Here $\rho_s$ is a normalization factor and $\xi, \eta, \tau$ are arbitrary parameters. In principle, these three parameters are related to the surface couplings. We argue that the variable $\xi$ controls the strength of the Ising surface coupling $K_s$. Similar to the bulk case, we see from Fig. 1 that $\exp(4K_s)$ is given by a ratio of the boundary weights $r_{ij}$, which in turn are related to the $K$-matrix elements [@yb], $$\begin{aligned}
\exp(4K_s) = \frac{r_{11} r_{22}}{r_{12} r_{21}} =
\frac{K_{11}(u/2) K_{22}(u/2)} {K_{12}(u/2) K_{21}(u/2)}.\label{Ks}\end{aligned}$$ The particular choice $\tau =0$ and $\xi = \frac{1}{2} I'$ simplifies to $$\begin{aligned}
\exp(4K_s) = - \frac{1}{\eta^2}
\left[\frac{\theta_4(u)}{\theta_1(u)}\right]^2.\end{aligned}$$ Comparison with the bulk parametrisation (\[defk\]) implies that the further choice $\eta^2 = -1$ leads to $K=K_s$, i.e. equal bulk and surface couplings in the Ising spin formulation. These particular values of $\eta$ and $\tau$ can be chosen for all $\xi$, since the surface coupling $K_s$ is independent of $\eta$ and $\tau$, as can be seen from the product $r_{11} r_{22}$.
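The weight-ratio definitions of the couplings used above can be checked directly from the Ising spin parametrisation of the weights shown in Fig. 1. A minimal sketch, with arbitrary sample values for the couplings and the normalisations $A$, $B$:

```python
import math

# Spin parametrisation of Fig. 1: a = A e^{K+L+M}, ...,
# r_11 = B e^{K_s+M^1_s+M^2_s}, ...; sample values are arbitrary.
A, B = 1.3, 0.7
K, L, M = 0.21, -0.34, 0.15
Ks, M1, M2 = 0.42, 0.11, -0.27

a = A * math.exp(K + L + M)
b = A * math.exp(-K - L + M)
c = A * math.exp(K - L - M)
d = A * math.exp(-K + L - M)
r11 = B * math.exp(Ks + M1 + M2)
r22 = B * math.exp(Ks - M1 - M2)
r12 = B * math.exp(-Ks + M1 - M2)
r21 = B * math.exp(-Ks - M1 + M2)

# The bulk ratios isolate one coupling each, and the boundary ratio
# isolates K_s; the normalisations A, B cancel.
assert abs(a * c / (b * d) - math.exp(4 * K)) < 1e-12
assert abs(a * d / (b * c) - math.exp(4 * L)) < 1e-12
assert abs(a * b / (c * d) - math.exp(4 * M)) < 1e-12
assert abs(r11 * r22 / (r12 * r21) - math.exp(4 * Ks)) < 1e-12
```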
The surface free energy can be obtained by applying the inversion relation method, which is known to give the correct bulk free energy of the eight-vertex model (see, e.g. §13.7 of Ref. [@bbook]). By using the fusion procedure, the transfer matrix of the eight-vertex model with boundaries described by $K^\pm$-matrices has recently been found to satisfy a group of functional relations [@z1; @z2]. Ignoring the finite-size corrections, which are of no relevance here, the relations give the desired crossing-unitarity relation for the transfer matrix eigenvalues [@foot1], $$\begin{aligned}
\Lambda(u) \Lambda(u+\lambda)
&=&{\omega_+(u)\omega_-(u) }\rho^{2N}(u).
\label{inv-E}\end{aligned}$$ The factor $$\begin{aligned}
\rho(u)=\displaystyle{\theta_1(\lambda-u)\theta_1(\lambda+u)\over
\theta_1(\lambda)\theta_1(\lambda)}\end{aligned}$$ is a bulk contribution whereas the product $\omega_+(u)\omega_-(u)$ is a surface contribution, with [@z1; @z2] $$\begin{aligned}
\omega_+(u)&=& K^+_{11}(u)b(-2u+\lambda)K^+_{22}(u+\lambda)
+K^+_{12}(u)d(-2u+\lambda)K^+_{12}(u+\lambda) \nonumber \\
&& -K^+_{21}(u)a(-2u+\lambda)K^+_{12}(u+\lambda)
-K^+_{22}(u)c(-2u+\lambda)K^+_{22}(u+\lambda),\\
\omega_-(u)&=&K^-_{21}(u+\lambda)d(2u+\lambda)K^-_{21}(u)
+K^-_{22}(u+\lambda)b(2u+\lambda)K^-_{11}(u)\nonumber \\
&&-K^-_{11}(u+\lambda)c(2u+\lambda)K^-_{11}(u)
-K^-_{12}(u+\lambda)a(2u+\lambda)K^-_{21}(u).\end{aligned}$$ Here $K^+(u)$ is the transpose of $K^-(-u+\lambda)$ with $\xi_-$ replaced by $\xi_+$, etc.
The bulk and surface free energies must both satisfy the crossing-unitarity relation (\[inv-E\]), and the surface contribution can be separated from the bulk contribution. As we are predominantly interested here in the surface critical behaviour rather than the precise form of the surface free energy, we consider only the diagonal elements of the $K$-matrix. These terms are sufficient to extract the critical exponents, and physically we do not anticipate any change in the critical behaviour arising from the off-diagonal terms [@foot2]. Defining $\Lambda_b = \kappa_b^{2 N}$ and $\Lambda_s = \kappa_s$, the bulk and surface free energies per site are given by $f_b(u)=-\log \kappa_b(u)$ and $f_s(u) = -\log \kappa_s(u)$. From (13)-(16) we have $$\begin{aligned}
\kappa_b(u)\kappa_b(u+\lambda)&=&\rho(u)
\label{inv-b}\end{aligned}$$ for the bulk and $$\begin{aligned}
\kappa_s(u)\kappa_s(u+\lambda)&=&{
\theta_1(\xi_--u)\theta_1(\xi_-+u)
\theta_1(\xi_+-u)\theta_1(\xi_++u)\over \theta_1^4(\lambda)}
{\theta_1(2\lambda-2u)\theta_1(2\lambda+2u)
\over \theta_1^2(2\lambda)}\label{inv-s}\end{aligned}$$ for the surface.
We obtain the solution of (\[inv-s\]) for $\kappa_s(u)$ by applying the inversion relation method [@bbook]. Let us first recall the derivation of $\kappa_b(u)$ from (\[inv-b\]). It is convenient to introduce the variables $$\begin{aligned}
x=\exp\left(-\pi\lambda/2 I\right)
\quad\mbox{and}\quad w=\exp\left(-\pi u/I\right).\end{aligned}$$ To obtain $f_b(w)$ one assumes that $\kappa_b(w)$ is analytic and nonzero in the annulus $x^2\le w\le 1$, allowing the Laurent expansion of $f_b(w)$, $$\begin{aligned}
\log \kappa_b(w)=\sum_{n=-\infty}^{\infty} c_n w^n\;.\end{aligned}$$ Inserting this expansion into the logarithm of both sides of (\[inv-b\]) and equating the coefficients of powers of $w$ then gives $$\begin{aligned}
f_b(w)=-\sum_{n=1}^{\infty}
{(x^{2n}+q^{2n}x^{-2n})(1-w^n)(1-x^{2n}w^{-n})\over
n(1+x^{2n})(1-q^{2n})}\;. \label{bfree}\end{aligned}$$ This is the desired result, from which the critical behaviour in the limit $q \rightarrow 1$ is extracted by use of the Poisson summation formula [@bbook]. In terms of the variable $\mu = \pi \lambda/I'$, where $I' \rightarrow \pi/2$ as $q \rightarrow 1$, it follows that $f_b \sim p^{\pi/\mu}$ as $p \rightarrow 0$, with $f_b \sim p^{\pi/\mu}\log p$ if ${\pi/\mu}$ is an even integer. Here the conjugate nome $p=\exp(-2 \pi I/I')$ vanishes linearly with the deviation from criticality variable $t$ [@bbook].
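The solution (\[bfree\]) can also be confirmed numerically against the inversion relation (\[inv-b\]): a sketch, assuming $I=1$, a truncated series, and sample values of $q$, $\lambda$, $u$ in the principal regime.

```python
import math

# theta_1 from the stated product formula (truncated)
def theta1(v, q, I=1.0, nmax=60):
    val = 2.0 * q**0.25 * math.sinh(math.pi * v / (2 * I))
    for n in range(1, nmax + 1):
        val *= (1 - 2 * q**(2 * n) * math.cosh(math.pi * v / I) + q**(4 * n))
        val *= (1 - q**(2 * n))
    return val

# The series (bfree): f_b(w) = -s, so log kappa_b(w) = +s.
def log_kappa_b(u, lam, q, I=1.0, nmax=400):
    w = math.exp(-math.pi * u / I)
    x = math.exp(-math.pi * lam / (2 * I))
    s = 0.0
    for n in range(1, nmax + 1):
        s += ((x**(2 * n) + q**(2 * n) * x**(-2 * n))
              * (1 - w**n) * (1 - x**(2 * n) * w**(-n))
              / (n * (1 + x**(2 * n)) * (1 - q**(2 * n))))
    return s

q, lam, u = 0.2, 0.25, 0.1
lhs = math.exp(log_kappa_b(u, lam, q) + log_kappa_b(u + lam, lam, q))
rhs = theta1(lam - u, q) * theta1(lam + u, q) / theta1(lam, q)**2
# kappa_b(u) kappa_b(u+lambda) = rho(u)
assert abs(lhs / rhs - 1) < 1e-8
```

The agreement is exact (to floating-point precision) because the Laurent coefficients of the series are fixed term by term by the expansion of $\log\rho$.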
We obtain the surface free energy by solving (\[inv-s\]) under the [*same*]{} analyticity assumptions as for the bulk case, together with the further assumption that $\kappa_s(w)$ is analytic and nonzero in the annulus $x<y_\pm<1$, where we have defined $
y_\pm=\exp\left(-\pi \xi_\pm/ 2 I\right).
$ In this way we arrive at the result $$\begin{aligned}
f_s(w,y_\pm)
&=&\sum_{n=1}^\infty
{(y_+^{2n}+q^{2n}y_+^{-2n}+y_-^{2n}+q^{2n}y_-^{-2n})(w^n+x^{2n}w^{-n})\over
n(1+x^{2n})(1-q^{2n})} \nonumber \\
&&-\sum_{n=1}^\infty{(x^{4n}+q^{2n}x^{-4n})
(1-w^{2n})(1-x^{4n}w^{-2n})\over n(1+x^{4n})(1-q^{2n})}.
\label{sfree}\end{aligned}$$ Applying the Poisson summation formula leads to a series for $f_s$ in powers of the nome $p$.
The phenomenology of critical behaviour at a surface is well developed [@bin; @diehl; @ws]. In this case two surface critical exponents can be obtained from the surface free energy; one from the surface specific heat, $C_s \sim |t|^{-\alpha_s}$, and the other from the “local" specific heat in the boundary layer, $C_1 \sim |t|^{-\alpha_1}$. Here the corresponding surface internal energy is given by $$\begin{aligned}
e_s(p)\sim {\partial f_s(u,\xi_\pm)\over\partial p} +e_1(p),\end{aligned}$$ where $e_1(p)$ is the first layer internal energy, $$\begin{aligned}
e_1(p)\sim{\partial f_s(u,\xi_\pm)\over\partial{\xi_\pm}}.\end{aligned}$$ The related specific heats follow as $$\begin{aligned}
C_s \sim {\partial e_s\over\partial p}
\quad \mbox{and} \quad
C_1 \sim {\partial e_1\over\partial p}.\end{aligned}$$ These definitions follow from [@bin; @diehl; @ws] with the identifications $p\sim t$ and $\xi_\pm\sim K_s$. From (\[sfree\]) we find that as $p\to 0$ $$\begin{aligned}
e_s(p)\sim p^{{\pi\over 2\mu}-1} \quad \mbox{and} \quad
e_1(p)\sim p^{\pi/\mu}.\end{aligned}$$ As for the bulk case, a logarithmic factor appears if ${\pi/\mu}$ is an even integer, with $$\begin{aligned}
e_s(p)\sim p^{{\pi\over 2\mu}-1}\log p \quad \mbox{and} \quad
e_1(p)\sim p^{\pi/\mu}\log p.\end{aligned}$$ This behaviour is to be compared at $\mu = \pi/2$ with the known Ising results where the logarithmic factor is observed [@mw; @ff; @ay; @r].
In summary, we have derived the exact critical surface exponents $$\begin{aligned}
\alpha_s=2-{\pi\over 2\mu} \quad \mbox{and} \quad
\alpha_1=1-{\pi\over\mu}\end{aligned}$$ for the eight-vertex model. At $\mu = \pi/2$, $\alpha_s=1$ (log) and $\alpha_1=-1$ (log), in agreement with the Ising results [@mw; @ff; @ay; @r]. Recalling the bulk exponents $\alpha_b = 2 - {\pi\over\mu}$ [@b1] and $\nu = {\pi\over 2\mu}$ [@jkm] we are thus able to provide a significant confirmation of the scaling relations [@bin; @diehl; @ws] $$\begin{aligned}
\alpha_s = \alpha_b + \nu \quad \mbox{and} \quad \alpha_1 = \alpha_b -1\end{aligned}$$ between bulk and surface critical exponents. The derivation of other surface exponents awaits the diagonalisation of the transfer matrix, which remains a formidable open problem.
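The exponent formulas summarised above can be checked with a trivial numerical sweep over $\mu$; all three scaling laws hold identically, and the Ising point reproduces the known values.

```python
import math

# alpha_b = 2 - pi/mu, nu = pi/(2 mu), alpha_s = 2 - pi/(2 mu), alpha_1 = 1 - pi/mu
for mu in [math.pi / 2, math.pi / 3, 1.0, 2.0, 2.5]:
    alpha_b = 2 - math.pi / mu
    nu = math.pi / (2 * mu)
    alpha_s = 2 - math.pi / (2 * mu)
    alpha_1 = 1 - math.pi / mu
    assert abs(alpha_b - (2 - 2 * nu)) < 1e-12        # bulk scaling law
    assert abs(alpha_s - (alpha_b + nu)) < 1e-12      # surface scaling law
    assert abs(alpha_1 - (alpha_b - 1)) < 1e-12       # "local" scaling law

# Ising point mu = pi/2: alpha_b = 0, alpha_s = 1, alpha_1 = -1 (all with logs)
mu = math.pi / 2
assert abs(2 - math.pi / mu) < 1e-12
assert abs((2 - math.pi / (2 * mu)) - 1) < 1e-12
assert abs((1 - math.pi / mu) + 1) < 1e-12
```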
We have found that the surface free energy scales as $f_s \sim p^{\pi/2\mu}$ as $p\rightarrow 0$. It is interesting to observe that this is in agreement with the scaling behaviour of the interfacial tension [@bint]. However, these two quantities differ away from criticality.
Finally we note that, in the same spirit as this work, the critical magnetic surface exponent $\delta_s$ of the two-dimensional Ising model in a magnetic field [@bin; @diehl] should also be obtainable from the dilute $A_3$ model [@wns; @roche], which is known to lie in the same universality class as the Ising model in a magnetic field [@wns]. However, unlike for the eight-vertex model, where we have been able to disentangle the critical behaviour from the known solution of the reflection equations, the reflection matrices for the dilute $A_3$ model are yet to be determined.
The authors thank Professor R. J. Baxter for his interest and encouragement in this work and the Australian Research Council for support.
On leave of absence from Institute of Modern Physics, Northwest University, Xian 710069, China. R. J. Baxter, [*Exactly Solved Models in Statistical Mechanics*]{} (Academic, London, 1982). R. J. Baxter, Phys. Rev. Lett. [**26**]{}, 832 (1971); Ann. Phys. (N.Y.) [**70**]{}, 193 (1972). F. Y. Wu, Phys. Rev. B [**4**]{}, 2312 (1971). L. P. Kadanoff and F. J. Wegner, Phys. Rev. B [**4**]{}, 3989 (1971). J. D. Johnson, S. Krinsky and B. M. McCoy, Phys. Rev. Lett. [**29**]{}, 492 (1972); Phys. Rev. A [**8**]{}, 2526 (1973). K. Binder, in [*Phase Transitions and Critical Phenomena*]{}, edited by C. Domb and J. L. Lebowitz, (Academic, London, 1983), Vol. 8, p 1. H. W. Diehl, in [*Phase Transitions and Critical Phenomena*]{}, edited by C. Domb and J. L. Lebowitz, (Academic, London, 1986), Vol. 10, p 75. J. B. McGuire, J. Math. Phys. [**5**]{}, 622 (1964). C. N. Yang, Phys. Rev. Lett. [**19**]{}, 1312 (1967); Phys. Rev. [**168**]{}, 1920 (1968). I. V. Cherednik, Theor. Math. Phys. [**61**]{}, 977 (1984). E. K. Sklyanin, J. Phys. A [**21**]{}, 2375 (1988). B. Y. Hou and R. H. Yue, Phys. Lett. A [**183**]{}, 169 (1993); B. Y. Hou, K. J. Shi, H. Fan and Z. X. Yang, Commun. Theor. Phys. [**23**]{}, 163 (1995). R. Cuerno and A. González-Ruiz, J. Phys. A [**26**]{}, L605 (1993). H. J. de Vega and A. González-Ruiz, J. Phys. A [**27**]{}, 6129 (1994). T. Inami and H. Konno, J. Phys. A [**27**]{}, L913 (1994). M. Wortis and N. M. ${\check {\rm S}}$vraki[$\acute {\rm c}$]{}, IEEE Trans. Mag. [**18**]{}, 721 (1982). C. M. Yung and M. T. Batchelor, Nucl. Phys. B [**435**]{}, 430 (1995). Y. K. Zhou, Nucl. Phys. B [**453**]{}, 619 (1995). Y. K. Zhou, [*Row transfer matrix functional relations for Baxter’s eight-vertex and six-vertex models with open boundaries via more general reflection matrices*]{}, hep-th/9510095, Nucl. Phys. B (in press). Crossing and unitarity relations for boundary $K$-matrices have already been argued in terms of field theory with a boundary, see S. Ghoshal and A. Zamolodchikov, Int. J. Mod. Phys. A [**9**]{}, 3841 (1994). However, it is not clear how to write down the boundary crossing and unitarity relations for an off-critical model according to this theory. The transfer matrix fusion procedure gives a consistent way of finding the crossing-unitarity relation. The precise form of the surface free energy for the geometry depicted in Fig. 2 can be obtained by considering all terms and introducing alternating inhomogeneities as in Ref. [@yb]. Full details of this calculation will be given elsewhere. B. M. McCoy and T. T. Wu, Phys. Rev. [**162**]{}, 436 (1967). M. E. Fisher and A. E. Ferdinand, Phys. Rev. Lett. [**19**]{}, 169 (1967). H. Au-Yang, J. Math. Phys. [**14**]{}, 937 (1973). P. Reed, J. Phys. A [**11**]{}, 137 (1978). R. J. Baxter, J. Stat. Phys. [**8**]{}, 25 (1973). S. O. Warnaar, B. Nienhuis and K. A. Seaton, Phys. Rev. Lett. [**69**]{}, 710 (1992); Int. J. Mod. Phys. [**7**]{}, 3727 (1993). Ph. Roche, Phys. Lett. B [**285**]{}, 49 (1992).
Fig. 1. The vertex weights of the eight-vertex model and the boundary weights in the Ising spin formulation: $a=A\,e^{K+L+M}$, $b=A\,e^{-K-L+M}$, $c=A\,e^{K-L-M}$, $d=A\,e^{-K+L-M}$; $r_{11}=B\,e^{K_s+M^1_s+M^2_s}$, $r_{22}=B\,e^{K_s-M^1_s-M^2_s}$, $r_{12}=B\,e^{-K_s+M^1_s-M^2_s}$, $r_{21}=B\,e^{-K_s-M^1_s+M^2_s}$.

Fig. 2. The eight-vertex model lattice with an open boundary, showing the bulk Ising couplings $K$, $L$ and the surface coupling $K_s$.
---
abstract: 'Examples of the Ricci-flat metrics associated with the Navier-Stokes equations are constructed and their properties are investigated.'
author:
- |
Valerii Dryuma[^1]\
[*Institute of Mathematics and Informatics, AS RM,*]{}\
[*5 Academiei Street, 2028 Kishinev, Moldova*]{},\
[*e-mail: valery@dryuma.com; valdryum@gmail.com; cainar@mail.md*]{}
date:
-
-
title: ' ON THE RICCI-FLAT METRIC FOR THE NAVIER-STOKES EQUATIONS'
---
Introduction
============
Properties of solutions of the Navier-Stokes equations for an incompressible fluid can be studied by geometric methods \[1-3\].
For this purpose we rewrite the $NS$-equations in the equivalent form of conservation laws $$\label{law}
U_t+(U^2-\mu U_x+P)_x+(UV-\mu U_y)_y+(UW-\mu U_z)_z=0,$$$$
V_t+(UV-\mu V_x)_x+(V^2-\mu V_y+P)_y+(VW-\mu V_z)_z=0,$$$$
W_t+(UW-\mu W_x)_x+(VW-\mu W_y)_y+(W^2-\mu W_z+P)_z=0,$$$$
U_x+V_y+W_z=0,$$ where $U,V,W$ and $P$ are components of the velocity and the pressure of the fluid.
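The equivalence of this conservation-law form with the standard advective form of the Navier-Stokes equations can be verified symbolically. A sketch (assuming SymPy is available): the divergence form of the first equation expands to the advective form $U_t + UU_x + VU_y + WU_z + P_x - \mu\Delta U$ plus $U$ times the divergence $U_x+V_y+W_z$, which vanishes by the last equation of (\[law\]).

```python
from sympy import Function, symbols, diff, expand

x, y, z, t, mu = symbols('x y z t mu')
U, V, W, P = (Function(f)(x, y, z, t) for f in ('U', 'V', 'W', 'P'))

# Conservation-law form of the first equation of (law)
conservation = (diff(U, t)
                + diff(U**2 - mu * diff(U, x) + P, x)
                + diff(U * V - mu * diff(U, y), y)
                + diff(U * W - mu * diff(U, z), z))

# Advective form plus U times the divergence (the latter vanishes
# when the incompressibility condition U_x + V_y + W_z = 0 holds)
advective = (diff(U, t) + U * diff(U, x) + V * diff(U, y) + W * diff(U, z)
             + diff(P, x)
             - mu * (diff(U, x, 2) + diff(U, y, 2) + diff(U, z, 2))
             + U * (diff(U, x) + diff(V, y) + diff(W, z)))

# The two forms agree identically, before imposing incompressibility
assert expand(conservation - advective) == 0
```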
The system of equations (\[law\]) can be considered as the conditions for the Ricci tensor of the 14-dimensional space $D^{14}$ to vanish, in local coordinates\
$X=(x,y,z,t,\eta,\rho,m,u,v,w,p,\xi,\chi,n)=(\vec
x,t,\eta,\rho,m,\Psi_l)$ , $l=1...7$ endowed with the Riemann metric $$\label{metr}
^{14}ds^2=-2\Gamma^i_{jk}(\vec x,t)\Psi_idx^jdx^k+2 d\Psi_ldx^l.$$
The metric (\[metr\]) is the metric of the Riemann extension \[4\] of a seven-dimensional space $D^7$ of affine connection in local coordinates $(x,y,z,t,\eta,\rho,m)$, with connection components $\Gamma^i_{jk}(\vec x,t)$.
In explicit form it looks as follows $$\label{metr1}
^{14} {{\it ds}}^{2}=2\,{\it dx}\,{\it du}+2\,{\it dy}\,{\it
dv}+2\,{\it dz} \,{\it dw}+\left (-V(\vec x,t)v-W(\vec
x,t)w-U(\vec x,t)u\right ){{\it dt
}}^{2}\!+\!$$$$\!+\!\left(\!-\!u\left (U(\vec x,t)\right
)^{2}\!+\!uP (\vec x,t)\!+\!u\mu\,U_x(\vec x,t)\!-\!vU(\vec x,t)V
(\vec x,t)\!-\!U(\vec x,t)p \right
)d{\eta}^{2}\!$$$$+\left(\!v\mu\,U_y(\vec x,t)\!-\! wU(\vec
x,t)W(\vec x,t)\!+\!w\mu\,U_z(\vec x,t) \right
)d{\eta}^{2}\!+\!2\,d\eta\,d\xi\!+\!$$$$\!+\!\left (-uU(\vec
x,t)V(\vec x,t)\!+\! vP(\vec x,t)\!-\!V(\vec
x,t)p\!+\!u\mu\,V_x(\vec x,t )\!-\!wV(\vec x,t)W(\vec
x,t)\right)d{\rho}^{2}+$$$$+\left(\!v\mu\,V_y(\vec x,t
)+\!w\mu\,V_z(\vec x,t)\!-\!v\left (V(\vec x,t) \right )^{2}\right
)d{\rho}^{2}\!+\!2\,d\rho\,d\chi\!+\!$$$$\!+\!\left (wP(\vec
x,t)\!-\!W(\vec x,t)p\!-\!w\left (W(\vec x,t)\right
)^{2}\!+\!w\mu\,W_z(\vec x,t)\!+\!v\mu\,W_y(\vec x,t)\right){{\it
dm}}^{2}\!+$$$$+\!\left(-vV(\vec x,t)W(\vec
x,t)\!+\!u\mu\,W_x(\vec x,t)\!-\!uU(\vec x,t)W(\vec x,t)\right
){{\it dm}}^{2}\!+\!2\,{\it dm}\,{\it dn}+\!2\,{\it dt}\,{\it
dp}\!.$$
The main property of the space with the metric (\[metr1\]) is that it is Ricci-flat if the functions $U,V,W$ and $P$ satisfy the NS-equations (\[law\]).
Despite the fact that all scalar invariants of the space $D^{14}$ are equal to zero, its geometric properties can be studied with the help of the equations of geodesics and the corresponding invariant differential operators.
Geodesics
=========
The complete system of geodesics of the metric (\[metr\]) consists of two parts $$\ddot x^k+\Gamma^k_{ij}\dot x^i \dot x^j=0,\quad
\frac{\delta^2 \Psi_k}{ds^2}+R^l_{k j
i}\dot x^j \dot x^i \Psi_l=0,$$ where $$\frac{\delta \Psi_k}{ds}=\dot \Psi_k-\Gamma^l_{jk}\Psi_l \dot x^j.$$
In the considered case the first group of equations is $$\label{geod0}
\ddot x\!+\!1/2\,U(\vec x,t)\dot t^{2}\!+\!1/2\,\dot \eta^{2} U(\vec x,t)^{2}\!-\!1/2\,\dot \eta^{2}\mu \,U_x(\vec x,t)\!-\!1/2\,\dot \eta^{2}P(\vec x,t)+$$$$+1/2\,\dot \rho^{2}U(\vec x,t)V(\vec x,t)\!-\!1/2\,\dot \rho
^{2}\mu\,V_x(\vec
x,t)\!+\!1/2\,\dot m^{2}U(\vec
x,t)W(\vec x,t)\!-$$$$-\!1/2\,\dot m^{2}\mu\,W_x(\vec x,t)\!=0,
$$
$$ \ddot y+1/2\,V(\vec x,t)\dot t^{2}+1/2\,\dot \eta^{2}U(\vec x,t) V(\vec x,t)-1/2\,\dot \eta^{2}\mu\,U_y(\vec x,t)+1/2\,\dot \rho^{2}V(\vec x,t)^{2}-$$$$-1/2\,\dot \rho^{2}\mu\,V_y(\vec x,t)-1/2 \,\dot \rho^{2}P(\vec x,t)+1/2\,\dot m^{2}V(\vec x,t)W(\vec x,t)-$$$$-1/2\,\dot m^{2}\mu\,W_y(\vec x,t)=0,
$$
$$\ddot z\!+\!1/2\,W(\vec x,t)\dot t^{2}\!+\!1/2\,\dot \eta^{2}U(\vec x,t) W(\vec x,t)\!-\!1/2\,\dot \eta^{2}\mu\,U_z(\vec x,t)\!+\!1/2\,\dot \rho^{2}V(\vec x,t)W(\vec x,t)\!-\!$$$$\!-\!1/2\,\dot \rho^{2}\mu\,V_z(\vec x,t)\!+\!1/2\,\dot m^{2}W(\vec x,t)^{2}\!-\!1/2\,\dot m^{2}\mu\,W_z(\vec x,t)\!-\!1/2\,\dot m^{2}P(\vec x,t) =0,
$$
$$ \ddot t+1/2\,U(x,y,z,t)\dot \eta^{2}+1/2\,V(x,y,z,t)\dot \rho^{2}+1/2\,W(x,y,z,t)\dot m^{2}=0,
$$
$$
\ddot \eta=0,~~ \ddot \rho=0,~~
\ddot m=0.$$
In the particular case of $2D$-potential flow $$U=\phi_y,~ V=-\phi_x,~ W=0, ~P=Q(x,y,t)$$ the system (\[geod0\]) takes the form $$2\,\ddot x+\phi_y \dot t^{2}+{{\it \alpha_1}}^{2}\phi_y^
{2}-{{\it \alpha_1}}^{2}\mu\,\phi_{xy}-{{\it \alpha_1}}^{2}Q-{{\it \alpha_2}}^{2}\phi_y\phi_x+{{\it \alpha_2}}^{2}\mu\,\phi_{xx}=0,$$ $$2\,\ddot y-\phi_x \dot t^{2}-{{\it
\alpha_1}}^{2}\phi_y\phi_x-{{\it \alpha_1}}^{2}\mu\,\phi_{yy}+{{\it \alpha_2}}^{2}
\phi_x^{2}+{{\it
\alpha_2}}^{2}\mu\,\phi_{xy}-{{\it \alpha_2}}^{2}Q=0,$$$$2\,\ddot z-Q{{\it \alpha_3}}^{2}=0,~~
2\,\ddot t+\phi_y{{\it \alpha_1}}^{2}-\phi_x{{\it \alpha_2}}^{2}=0,$$ $$\eta(s)=\alpha_1s,~\rho(s)=\alpha_2s,~ m(s)=\alpha_3s.$$ [**Remark.**]{} The coefficients of the system (\[geod0\]) are the components $\Gamma^k_{ij}$ of affine connection of the seven-dimensional manifold in the local coordinates $(\vec
x,t,\eta,\rho,m)$.
It is a Ricci-flat $$R_{ij}=\partial_k
\Gamma^k_{ij}-\partial_i
\Gamma^k_{kj}+\Gamma^k_{kl}\Gamma^l_{ij}-\Gamma^k_{im}\Gamma^m_{kj}=0$$ on solutions of the NS-equations (\[law\]) and its properties can be studied independently of the enclosing $14$-dimensional Riemann space with the metric (\[metr1\]).
The linear part of the geodesic system has the form of a linear system of equations with variable coefficients $$\ddot \Psi_i=A^k_i \dot \Psi_k+B^k_i \Psi_k,$$ where $\Psi_k=[u,v,w,p,\xi,\chi,n]$ is a vector function and $ A^k_i= A^k_i(\vec x,t)$ and $ B^k_i=B^k_i(\vec x,t)$ are matrix functions depending on the coordinates $X^a=(\vec x,t)$.
Properties of the seven-dimensional space of affine connection can be studied with the help of solutions of a system of linear partial differential equations with respect to the components of the vector of motions $\omega^k (\vec x, t,\eta,\rho, m)$ of the space $$\partial^2_{bc}\omega^a+\omega^k\partial_k \Gamma^a_{bc}+\partial_b\omega^k \Gamma^a_{kc}+\partial_c\omega^k\Gamma^a_{bk}-\partial_k\omega^a\Gamma^k_{bc}=0.$$
Parameters of Beltrami
======================
The functions of the coordinates $\psi(x^k)$ defined by the formulas $$\label{Lap-Bel2}
\Delta_2\psi=g^{ij}\left(\frac{\partial^2 \psi}{\partial x^i
\partial x^j}-\Gamma^k_{ij}\frac{\partial\psi}{\partial
x^k}\right),$$ and $$\label{Lap-Bel1}
\Delta_1\psi=g^{ij}\frac{\partial \psi}{\partial x^i}
\frac{\partial \psi}{\partial x^j}$$ are the invariants of the space.
Solutions of the equations $$\Delta_2\phi=0,~ \Delta_1 \phi=0$$ can be used to study the properties of solutions of the $NS$-equations.
As an example, consider the two-dimensional potential flow of a fluid.
The metric of the associated Riemann space in this case has the form $$\label{poten}
{{\it ds}}^{2}=2\,{\it dx}\,{\it du}+2\,{\it dy}\,{\it dv}+2\,{\it dz}
\,{\it dw}+\left (-\phi_y
u+\phi_x v
\right ){{\it dt}}^{2}+2\,{\it dt}\,{\it dp}+$$$$+\left (-u\phi_y^{2}+u\mu\,\phi_{xy}+uQ(x,y,t)+v\phi_y \phi_x+v\mu\,\phi_{yy}-\phi_y p\right )
{{\it d\eta}}^{2}+$$$$+2\,{\it d\eta}\,{\it d\xi}+\left (u\phi_y \phi_x-u\mu\,\phi_{xx}-v\phi_x^{2}-v
\mu\,\phi_{xy}+vQ(x,y,t
)+\phi_x p\right ){{
\it d\rho}}^{2}+$$$$+2\,{\it d\rho}\,{\it d\chi}+wQ(x,y,t){{\it dm}}^{2}+2\,{
\it dm}\,{\it dn}.$$
The components of the Ricci tensor of the metric (\[poten\]) are equal to zero, $R_{\eta \eta}=0$, $R_{\rho \rho}=0$, on solutions of the $2D$ $NS$-equations $$\phi_y \phi_{xy}-\mu\,\phi_{xxy}-Q_x-\phi_{yy}\phi_x-\mu\,\phi_{yyy}+
\phi_{yt}=0,$$ $$-\phi_y \phi_{xx}+\mu\,\phi_{xxx}+\mu\,\phi_{xyy}-Q_y-\phi_{xt}+\phi_{xy}\phi_x=0,$$ where the function $\phi(x,y,t)$ satisfies the condition of compatibility $$\label{flow}
(\phi_{xx}+\phi_{yy})_t+\phi_y(\phi_{xx}+\phi_{yy})_x-\phi_x(\phi_{xx}+\phi_{yy})_y-\mu \Delta(\phi_{xx}+\phi_{yy})=0.$$
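The compatibility condition (\[flow\]) is the vorticity transport equation for the stream function $\phi$: cross-differentiating the two $2D$ $NS$-equations eliminates the pressure $Q$. A symbolic sketch (assuming SymPy is available) verifying that $\partial_y$ of the first equation minus $\partial_x$ of the second yields exactly (\[flow\]):

```python
from sympy import Function, symbols, diff, expand

x, y, t, mu = symbols('x y t mu')
phi = Function('phi')(x, y, t)
Q = Function('Q')(x, y, t)

# The two 2D NS-equations written above
eq1 = (diff(phi, y) * diff(phi, x, y) - mu * diff(phi, x, x, y) - diff(Q, x)
       - diff(phi, y, y) * diff(phi, x) - mu * diff(phi, y, 3) + diff(phi, y, t))
eq2 = (-diff(phi, y) * diff(phi, x, x) + mu * diff(phi, x, 3)
       + mu * diff(phi, x, y, y) - diff(Q, y) - diff(phi, x, t)
       + diff(phi, x, y) * diff(phi, x))

omega = diff(phi, x, 2) + diff(phi, y, 2)          # vorticity
flow = (diff(omega, t) + diff(phi, y) * diff(omega, x)
        - diff(phi, x) * diff(omega, y)
        - mu * (diff(omega, x, 2) + diff(omega, y, 2)))

# d/dy(eq1) - d/dx(eq2) removes Q (through the mixed derivative Q_xy)
# and reproduces the compatibility condition (flow) identically.
assert expand(diff(eq1, y) - diff(eq2, x) - flow) == 0
```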
Here is an example of a solution of the equation $$\label{Lap-Bel3}
g^{ij}\frac{\partial \psi}{\partial x^i}
\frac{\partial \psi}{\partial x^j}-1=0.$$
As is known \[5\], geodesics of the metric of an arbitrary Riemann space can be studied with the help of solutions of equation (\[Lap-Bel3\]).
For the metric (\[poten\]) the equation (\[Lap-Bel3\]) takes the form $$\label{Roma1}
2\,\psi_x\psi_u+2\,\psi_y\psi_v+2
\,\psi_z\psi_w+2\,\psi_t\psi_p+2\,\psi_\eta\psi_\xi+2\,\psi_\rho\psi_\chi+
2\,\psi_m\psi_n+\psi_p^{2}\phi_yu-$$$$-\psi_p^{2}\phi_x v+\psi_\xi^{2}u\phi_y
^{2}-\psi_{\xi}^{2}u\mu\,\phi_{xy}-\psi_{\xi}^{2}uQ-\psi_\xi^{2}v\phi_y\phi_x-\psi_\xi^{2}v\mu\,\phi_{yy}+\psi_\xi^{2}\phi_yp-$$
$$-\psi_\chi^{2}u\phi_y\phi_x+\psi_\chi^{2}u\mu\,\phi_{xx}+\psi_\chi^{2}v\phi_x^{2}+\psi_\chi^{2}v\mu\,\phi_{xy}-\psi_\chi^{2}vQ-\psi_\chi^{2
}\phi_xp-wQ\psi_n^{2}-1=0.$$
After separation of variables in the equation (\[Roma1\]) we find $$\psi(x,y,z,t,\eta,\rho,m,u,v,w,p,\xi,\chi,n)={\it c}_{{3}}z+{\it c
}_{{5}}\eta+{\it c}_{{6}}\rho+{\it c}_{{7}}m+{\it c}_{{12}}\xi+$$$$+{
\it c}_{{13}}\chi+{\it c}_{{14}}n+F(x,y,t,u,v,w,p)$$ where the function $F(x,y,t,u,v,w,p)$ satisfies the equation $$\label{Roma2}
2\,F_x F_u+2\,F_yF_v+2\,{
\it c}_{{3}}F_w+2\,F_tF_p+2\,{
\it c}_{{5}}{\it c}_{{12}}+2\,{\it c}_{{6}}{\it c}_{{13}}+2\,{
\it c}_{{7}}{\it c}_{{14}}+$$$$+F_p^{2}\phi_yu-F_p^{2}\phi_xv+{{\it c}_{{12}}}^{2}u\phi_y^{2}-{{\it c}_{{12}}}^{2}
u\mu\,\phi_{xy}-{{\it
c}_{{12}}}^{2}uQ-{{\it c}_{{12}}}^{2}v\phi_y\phi_x-$$$$-{{\it c}_{{12}}}^{2}v\mu\,\phi_{yy}+{{\it c}_{{12}}}^{2}\phi_yp-{{\it c}_{{13}}}^{2}u
\phi_y\phi_x+{{\it c}_{{13}}}^{2}u\mu\,\phi_{xx}+{{\it c}_{{13}}}^{2}v
\phi_x^{2}+$$$$+{{\it
c}_{{13}}}^{2}v\mu\,
\phi_{xy}-{{\it c}_{{13}}}^{2}vQ-{{\it c}_{{13}}}^{2}
\phi_xp-wQ{{
\it c}_{{14}}}^{2}-1=0.$$
Representing the function $F(x,y,t,u,v,w,p)$ in the form $$F(x,y,t,u,v,w,p)=A(x,y,t)p+uB(x,y,t),
{\it c}_{{14}}=0,
{\it c}_{{13}}=0,
{\it c}_{{12}}=1/2\,{{\it c}_{{5}}}^{-1}$$ we get an overdetermined system of equations for the functions $A(x,y,t)$, $B(x,y,t)$ $$\label{Roma3}
8\,B{{\it c}_{{5}}}^{2}B_x
+8\,A{{\it c}_{{5}}}^{2}B_t+4\,A^{2}\phi_y{{\it c}_{{5}}}^{2}+\phi_y^{2}-\mu\,\phi_{xy}-Q=0,
$$
$$
8\,B{{\it c}_{{5}}}^{2}A_x+8\,A{{\it c}_{{5}}}^{2}A_t+\phi_y=0,~~
-4\,A^{2}
\phi_x{{\it c}_{{5}}}^{2}-\phi_y\phi_x-\mu\,\phi_{yy}=0.$$
From the conditions of compatibility we find that the system (\[Roma3\]) has solutions if the functions $\phi(x,y,t)$, $Q(x,y,t)$ satisfy the relation $$\label{euler}
H(\phi,\phi_x,\phi_y,\phi_t,...Q)=0$$ containing $275$ summands.
In particular, for the Euler system of equations ($\mu=0$), the relation (\[euler\]) takes the form $$\phi_{yt}\phi_{xy}^{2}\phi_y-4\,\phi_{yt}\phi_y\phi_{xy}
\phi_{xyt}+2\,\phi_{y}\phi_{xxy}\phi_{yt}^{2}-4
\,\phi_{yt}\phi_y^{2}\phi_{xxy}-$$$$-3\,
\phi_{xy}^{2}\phi_y^{2}+4\,
\phi_y^{2}\phi_{xy}\phi_{xyt}+2\,\phi_y^{3}\phi_{xxy}+2\,\phi_y\phi_{xy}^{2}\phi_{ytt}-Q\phi_{xy}^{3}
=0.$$
[99]{} V.Dryuma: Applications of Riemann geometry in theory of the Navier-Stokes equations, VI-th International Conference “Solitons, Collapses and Turbulence: Achievements, Developments and Perspectives” Russia, Novosibirsk, Akademgorodok, 4-8 June 2012, Abstract, p. 51–52. V.Dryuma: On spaces related to the Navier-Stokes equation. *Buletinul Academiei de Stiinte a Republicii Moldova*, **3** (2010) 107–110. V.Dryuma: Four-dimensional Ricci-flat space defined by the KP-equation. *Buletinul Academiei de Stiinte a Republicii Moldova*, **3** (2008) 108–111. V.Dryuma: The Riemann extensions in theory of differential equations and their applications. *Matematicheskaya fizika, analiz, geometriya*, **10** (2003) 307–325. Dino Boccaletti, Francesco Catoni, Roberto Cannata, Paolo Zampetti: Integrating the geodesic equations in the Schwarzschild and Kerr space-times using Beltrami geometrical method. *arXiv:gr-qc/0502051 v2 3 Apr*, **11** (2008) 1–12.
[^1]: Work supported in part by Grant RFFI, Russia-Moldova
---
abstract: 'Double electron capture is a rare nuclear decay process in which two orbital electrons are captured simultaneously in the same nucleus. Measurement of its two-neutrino mode would provide a new reference for the calculation of nuclear matrix elements, whereas observation of its neutrinoless mode would demonstrate lepton number violation. A search for two-neutrino double electron capture on $^{124}$Xe is performed using 165.9 days of data collected with the XMASS-I liquid xenon detector. No significant excess above background was observed and we set a lower limit on the half-life of $4.7 \times 10^{21}$ years at 90% confidence level. The obtained limit has ruled out parts of some theoretical expectations. We obtain a lower limit on the $^{126}$Xe two-neutrino double electron capture half-life of $4.3 \times 10^{21}$ years at 90% confidence level as well.'
address:
- 'XMASS Collaboration$^*$'
- 'Kamioka Observatory, Institute for Cosmic Ray Research, the University of Tokyo, Higashi-Mozumi, Kamioka, Hida, Gifu, 506-1205, Japan'
- 'Center of Underground Physics, Institute for Basic Science, 70 Yuseong-daero 1689-gil, Yuseong-gu, Daejeon, 305-811, South Korea'
- 'Information and multimedia center, Gifu University, Gifu 501-1193, Japan'
- 'Kavli Institute for the Physics and Mathematics of the Universe (WPI), the University of Tokyo, Kashiwa, Chiba, 277-8582, Japan'
- 'Kobayashi-Maskawa Institute for the Origin of Particles and the Universe, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8602, Japan'
- 'Department of Physics, Kobe University, Kobe, Hyogo 657-8501, Japan'
- 'Korea Research Institute of Standards and Science, Daejeon 305-340, South Korea'
- 'Department of Physics, Miyagi University of Education, Sendai, Miyagi 980-0845, Japan'
- 'Solar Terrestrial Environment Laboratory, Nagoya University, Nagoya, Aichi 464-8602, Japan'
- 'Department of Physics, Tokai University, Hiratsuka, Kanagawa 259-1292, Japan'
- 'Department of Physics, Faculty of Engineering, Yokohama National University, Yokohama, Kanagawa 240-8501, Japan'
author:
- 'K. Abe'
- 'K. Hiraide'
- 'K. Ichimura'
- 'Y. Kishimoto'
- 'K. Kobayashi'
- 'M. Kobayashi'
- 'S. Moriyama'
- 'K. Nakagawa'
- 'M. Nakahata'
- 'T. Norita'
- 'H. Ogawa'
- 'H. Sekiya'
- 'O. Takachio'
- 'A. Takeda'
- 'M. Yamashita'
- 'B. S. Yang'
- 'N. Y. Kim'
- 'Y. D. Kim'
- 'S. Tasaka'
- 'J. Liu'
- 'K. Martens'
- 'Y. Suzuki'
- 'R. Fujita'
- 'K. Hosokawa'
- 'K. Miuchi'
- 'N. Oka'
- 'Y. Onishi'
- 'Y. Takeuchi'
- 'Y. H. Kim'
- 'J. S. Lee'
- 'K. B. Lee'
- 'M. K. Lee'
- 'Y. Fukuda'
- 'Y. Itow'
- 'R. Kegasa'
- 'K. Kobayashi'
- 'K. Masuda'
- 'H. Takiya'
- 'H. Uchida'
- 'K. Nishijima'
- 'K. Fujii'
- 'I. Murayama'
- 'S. Nakamura'
title: 'Search for two-neutrino double electron capture on $^{124}$Xe with the XMASS-I detector'
---
Double electron capture, Neutrino, Liquid xenon
Introduction
============
The observed baryon asymmetry in the Universe still proves to be a fundamental challenge, calling for physics beyond the standard model of particle physics. Lepton number violation involving Majorana neutrinos is one way to address this challenge in the context of leptogenesis [@Buchmuller:2005eh]. The most sensitive probe for lepton number violation is neutrinoless double beta decay ($0\nu \beta^{-} \beta^{-}$) $$(Z,A) \to (Z+2,A) +2e^{-} \ ,$$ where $Z$ and $A$ are the atomic number and atomic mass number of a given nucleus, respectively. Its inverse, neutrinoless double electron capture ($0\nu$ECEC), is also a lepton number violating process $$(Z,A) +2e^{-} \to (Z-2,A) \ ,$$ where two orbital electrons are captured simultaneously. This process is expected to have a longer lifetime and to be accompanied by a photon that carries away the decay energy. However, a possible enhancement of the capture rate by a factor as large as $10^{6}$ can occur if the masses of the initial and final (excited) nucleus are degenerate [@Bernabeu:1983yb], and hence this nuclear decay process is attracting attention both theoretically [@Sujkowski:2003mb; @Frekers:2005ze; @Krivoruchenko:2010ng; @Kotila:2014zya] and experimentally [@Barabash:2006qx; @Barabash:2009ja; @Belli:2013qja; @Belli:2014map; @Gavrilyuk:2013yqa]. Moreover, neutrinoless positron-emitting electron capture ($0\nu \beta^{+}$EC) and neutrinoless double beta plus decay ($0\nu \beta^{+} \beta^{+}$) may occur in the same nucleus, depending on the mass difference between the initial and final nuclei. Detection of these nuclear decay modes could help to determine the effective neutrino mass and the parameters of a possible right-handed weak current [@Haxton:1985am; @Hirsch:1994es].
On the other hand, two-neutrino double beta decay (2$\nu \beta^{-} \beta^{-}$) and two-neutrino double electron capture (2$\nu$ECEC) processes are allowed within the standard model. Although 2$\nu \beta^{-} \beta^{-}$ has been observed in more than ten isotopes, there exist only a few positive experimental results for 2$\nu$ECEC so far: a geochemical measurement for $^{130}$Ba with a half-life of $(2.2\pm 0.5)\times 10^{21}$ years [@Meshik:2001ra] and a direct measurement for $^{78}$Kr with a half-life of $(9.2^{+5.5}_{-2.6}({\rm stat})\pm 1.3 ({\rm sys}))\times 10^{21}$ years [@Gavrilyuk:2013yqa]. If the nucleus is in the ground state after the 2$\nu$ECEC process, the observable energy comes from atomic de-excitation and nuclear recoil; depending on the nucleus, the energy deposited by the nuclear recoil may be negligible, leading to a well-defined energy deposit dominated by the atomic de-excitation, i.e. a line spectrum. Nevertheless, little attention has been paid to direct detection of this process because of the small natural abundances of the relevant isotopes and the energy thresholds of large volume detectors. Any measurement of 2$\nu$ECEC will provide a new reference for the calculation of nuclear matrix elements from the proton-rich side of the mass parabola of even-even isobars [@Suhonen:1998ck]. Although the matrix element for the two-neutrino mode is different from that for the neutrinoless mode, it gives constraints on the relevant parameters within a chosen model [@Bilenky:2014uka].
The XMASS detector uses liquid xenon in its natural isotopic abundance as its active target material. Among other isotopes, it contains the double electron capture nuclei $^{124}$Xe (0.095%) and $^{126}$Xe (0.089%), as well as the double beta decay nuclei $^{136}$Xe (8.9%) and $^{134}$Xe (10.4%). It has been pointed out that large volume dark matter detectors with natural xenon as targets have the potential to measure 2$\nu$ECEC on $^{124}$Xe [@Mei:2013cla; @Barros:2014exa]. Among the different models for calculating the corresponding nuclear matrix element, there is a wide spread of calculated half-lives for this process: between $10^{20}$ and $10^{24}$ years, as summarized in Table \[table:calculated\_halflives\].
Model                   $T_{1/2}^{2\nu{\rm ECEC}}$ ($\times 10^{21}$ yr)   Reference
----------------------- -------------------------------------------------- -----------------------
QRPA                    0.4-8.8                                            [@Suhonen:2013rca]
QRPA                    2.9-7.3                                            [@Hirsch:1994es]
SU(4)$_{\sigma \tau}$   7.0-18                                             [@Rumyantsev:1998uy]
PHFB                    7.1-18                                             [@Singh:2007jh]
PHFB                    61-160                                             [@Shukla:2007ju]
MCM                     390-980                                            [@Aunola:1996ui]
: Calculated half-lives for 2$\nu$ECEC on $^{124}$Xe. The lower and upper values are calculated for the axial-vector coupling constant $g_A=1.26$ and 1.0, respectively.[]{data-label="table:calculated_halflives"}
A previous experiment used enriched xenon: a gas proportional counter containing 58.6 g of $^{124}$Xe (enriched to 23%) searched for the simultaneous capture of two $K$-shell electrons on that isotope and published the most recent lower bound on the half-life, $T_{1/2}^{2\nu 2K} > 2.0\times 10^{21}$ years [@Gavrilyuk:2014dqa; @Gavrilyuk:2015ada].
In this paper, we present the result from a search for 2$\nu$ECEC on $^{124}$Xe using the XMASS-I liquid xenon detector.
The XMASS-I Detector
====================
XMASS-I is a large single phase liquid xenon detector [@Abe:2013tc] located underground (2700 m water equivalent) at the Kamioka Observatory in Japan. An active target of 835 kg of liquid xenon is held inside a pentakis-dodecahedral copper structure that holds 642 inward-looking photomultiplier tubes (PMTs) on its approximately spherical inner surface. The detector is calibrated regularly with $^{57}$Co and $^{241}$Am sources [@Kim:2015rsa] inserted along the central vertical axis of the detector. Measuring with the $^{57}$Co source from the center of the detector volume, the photoelectron yield is determined to be 13.9 photoelectrons (PEs)/keV [@FN1]. This large photoelectron yield is realized by a large inner surface photocathode coverage of $>$62% and the large PMT quantum efficiency of approximately 30%. The non-linear response in scintillation light yield for electron-mediated events in the detector was calibrated with $^{55}$Fe, $^{57}$Co, $^{109}$Cd, and $^{241}$Am sources. When a PMT signal exceeds the discriminator threshold equivalent to 0.2 PE, a “hit” is registered on the channel. Data acquisition is triggered if ten or more hits are asserted within 200 ns. Each PMT signal is digitized with charge and timing resolution of 0.05 PE and 0.4 ns, respectively [@Fukuda:2002uc]. The liquid xenon detector is located at the center of a cylindrical water Cherenkov veto counter and shield, which is 11 m high with a 10 m diameter. The veto counter is equipped with 72 20-inch PMTs. Data acquisition for the veto counter is triggered if eight or more of its PMTs register a signal within 200 ns. XMASS-I is the first direct detection dark matter experiment equipped with such an active water Cherenkov shield.
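The hit-coincidence trigger just described (data acquisition fires when ten or more hits fall within 200 ns) can be sketched as a sliding-window check over sorted hit times. This is an illustrative reconstruction, not the actual DAQ logic; the function name and example hit times are ours.

```python
def triggers(hit_times_ns, n_required=10, window_ns=200.0):
    """Return True if any time window of length window_ns contains
    at least n_required hits (the inner-detector trigger condition)."""
    hits = sorted(hit_times_ns)
    for i in range(len(hits) - n_required + 1):
        # for sorted hits, the tightest window holding n_required
        # consecutive hits spans hits[i] .. hits[i + n_required - 1]
        if hits[i + n_required - 1] - hits[i] <= window_ns:
            return True
    return False

print(triggers([10.0 * k for k in range(10)]))   # 10 hits within 90 ns -> True
print(triggers([100.0 * k for k in range(10)]))  # 10 hits spread over 900 ns -> False
```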
Expected Signal and Detector Simulation
=======================================
The process of 2$\nu$ECEC on $^{124}$Xe is $${}^{124}{\rm Xe} + 2e^- \to {}^{124}{\rm Te} + 2\nu_e$$ with a $Q$-value of 2864 keV. In the case that two $K$-shell electrons in the $^{124}$Xe atom are captured simultaneously, a daughter atom of $^{124}$Te is formed with two vacancies in the $K$-shell and de-excites by emitting atomic X-rays and/or Auger electrons. The total energy deposition in the detector is $2K_{b} = 63.63$ keV, where $K_{b}$ is the binding energy of a $K$-shell electron in a tellurium atom. The energy deposition from the recoil of the daughter nucleus is $\sim$30 eV at most, which is negligible. Although $^{126}$Xe can also undergo 2$\nu$ECEC, this reaction is expected to be much slower than that on $^{124}$Xe since its $Q$-value of 920 keV is smaller. The $Q$-values are taken from the AME2012 atomic mass evaluation [@AME2012].
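The quoted total energy deposit follows directly from the tellurium $K$-shell binding energy. In this quick check, the input $K_b = 31.814$ keV is our assumption (the standard Te $K$-edge value), not a number stated in the text.

```python
# 2nu2K capture on 124Xe leaves two K-shell vacancies in the 124Te daughter,
# so the de-excitation energy is twice the single-vacancy binding energy.
K_b = 31.814                 # keV, Te K-shell binding energy (our input)
total_2Kb = 2 * K_b
print(round(total_2Kb, 2))   # 63.63 keV, as quoted in the text

# The dedicated double-hole calculation quoted above gives 64.46 keV,
# i.e. only ~0.8 keV more, which is why the difference is negligible here:
print(round(64.46 - total_2Kb, 2))
```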
The Monte Carlo (MC) generation of the atomic de-excitation signal is based on the atomic relaxation package in Geant4 [@2007ITNS...54..585G]. While the X-ray and Auger electron tables refer to emission from singly charged ions, 2$\nu$ECEC produces a doubly charged ion. The energy of the double electron hole in the $K$-shell of $^{124}$Te is calculated to be 64.46 keV [@Nesterenko:2012xp], which differs by only 0.8 keV from the sum of the single-vacancy $K$-shell binding energies. Therefore, this difference is negligible in this analysis. Simulated de-excitation events are generated uniformly throughout the detector volume. The MC simulation includes the nonlinearity of the scintillation response [@Abe:2013tc] as well as corrections derived from detector calibrations. The absolute energy scale of the MC is adjusted at 122 keV. The systematic difference of the energy scale between data and MC due to imperfect modeling of the nonlinearity in the MC is estimated as 3.5% by comparing $^{241}$Am data to MC. The decay constants of the scintillation light and the timing response of the PMTs are modeled to reproduce the time distributions observed with the $^{57}$Co (122 keV) and $^{241}$Am (60 keV) gamma ray sources [@Uchida:2014cnn]. The group velocity of the scintillation light in liquid xenon is calculated from its refractive index ($\sim$11 cm/ns for 175 nm) [@LXeSpeed].
Data Sample and Event Selection
===============================
The data used in the present analysis were collected between December 24, 2010 and May 10, 2012. Since we took extensive calibration data and various special runs with modified detector conditions to understand the general detector response and the background, we select periods of operation under what we call normal data taking conditions, with a stable temperature ($174\pm 1.2$ K) and pressure (0.160-0.164 MPa absolute). After further removing periods of operation with excessive PMT noise, unstable pedestal levels, or abnormal trigger rates, the total live time is 165.9 days.
Event selection proceeds in four stages: pre-selection, fiducial volume cut, timing balance cut, and band-like pattern cut. The pre-selection requires that no outer detector trigger is associated with the event, that the event is separated in time from the nearest event by at least 10 ms, and that the RMS spread of the inner detector hit timings of the event is less than 100 ns. This pre-selection reduces the total effective live time to 132.0 days in the final sample.
In order to select events occurring in the fiducial volume, an event vertex is reconstructed based on a maximum likelihood evaluation of the observed light distribution in the detector [@Abe:2013tc]. We select events satisfying that the radial distance of their reconstructed vertex from the center of the detector is smaller than 15 cm. The fiducial mass of natural xenon in that volume is 41 kg, containing 39 g of $^{124}$Xe.
During the data-taking period, a major background in the relevant energy range comes from radioactive contaminants in the aluminum seal of the PMTs. These background events often occur at a blind corner of the nearest PMT and are mis-reconstructed in the inner volume of the detector. The remaining two cuts deal with these mis-reconstructed events. The timing balance cut uses the time difference between the first hit in an event and the mean of the timings of the second half of all the time-ordered hits in the event. Events with smaller time difference are less likely to be events from the detector’s inner surface that were wrongly reconstructed and are kept. The timing balance cut reduces the data by a factor of 5.9 in the signal energy window defined later, while it keeps 80% of signal events remaining after the fiducial volume cut. The band-like pattern cut eliminates events that reflect their origin within grooves or crevices in the inner detector surface through a particular illumination pattern: The rims of the groove or crevice act as an aperture that is projected as a “band” of higher photon counts onto the inner detector surface. This band is characterized by the ratio of the maximum PEs in the band of width 15 cm to the total PEs in the event [@Uchida:2014cnn]. Events with smaller ratio are less likely to originate from crevices and are selected. The band pattern cut reduces the data by a factor of 24.6 while it keeps 70% of signal events remaining after the fiducial volume and timing balance cuts.
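The quoted cut factors combine straightforwardly; plain arithmetic on the numbers given in the text:

```python
# Timing balance cut: 5.9x data reduction, keeps 80% of signal.
# Band-like pattern cut: further 24.6x reduction, keeps 70% of remaining signal.
data_reduction = 5.9 * 24.6
signal_kept = 0.80 * 0.70
print(f"combined background reduction: {data_reduction:.0f}x")  # ~145x
print(f"signal kept after both cuts: {signal_kept:.0%}")        # 56%
```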
![Energy spectra of the simulated events after each reduction step. From top to bottom, the simulated energy spectrum after pre-selection and radius cut (black solid), timing balance cut (red dashed), and band-like pattern cut (blue filled) are shown. The vertical dashed lines indicate the 56-72 keV signal window.[]{data-label="fig:mc-reduction"}](Fig1.eps){height="60mm"}
The fiducial volume, timing balance, and band pattern cut values are optimized to maximize sensitivity to a monoenergetic peak in the 60 keV region. For the fiducial volume cut, the range of the cut value was restricted in the optimization process to be larger than 15 cm in order to avoid too small an acceptance, and this restriction turns out to determine the optimal value [@Abe:2014zcd].
In the present analysis, the total energy deposition of events is reconstructed from the observed number of photoelectrons, correcting for the non-linear response of the scintillation light yield. The correction is performed assuming the light originates from two X-rays with equal energy. Finally, the signal window is defined such that it contains 90% of the simulated signal with equal 5% tails to either side after all the above cuts are applied, which results in a 56$-$72 keV window. Fig. \[fig:mc-reduction\] shows energy spectra of the simulated events after each reduction step. From the simulation, the signal detection efficiency is estimated to be 59.7%.
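The window construction (central 90% of the simulated signal with equal 5% tails) can be illustrated on a toy spectrum by taking the 5% and 95% quantiles. The Gaussian stand-in below (mean 64 keV, width 4.86 keV) is our assumption, tuned only so the result resembles the paper's 56-72 keV window; the real analysis uses the full detector simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy stand-in for the simulated signal spectrum (assumption, see above)
sim_energies = rng.normal(64.0, 4.86, 100_000)

# keep the central 90%, i.e. cut equal 5% tails on either side
lo, hi = np.quantile(sim_energies, [0.05, 0.95])
print(f"signal window: {lo:.0f}-{hi:.0f} keV")
```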
Results and Discussion
======================
![Energy spectra of the observed events after each reduction step for the 165.9 days of data. From top to bottom, the observed energy spectrum after pre-selection and radius cut (black solid), timing balance cut (red dashed), and band-like pattern cut (blue filled) are shown. The vertical dashed lines indicate the 56-72 keV signal window. The expected $^{214}$Pb background (green hatched) together with the signal expectation for the 90% confidence level upper limit (magenta hatched) are also shown.[]{data-label="fig:data-reduction"}](Fig2.eps){height="60mm"}
Fig. \[fig:data-reduction\] shows energy distributions of the data events remaining after each reduction step. After all cuts, 5 events are left in the signal region, but no significant peak is seen. The main contribution to the remaining background in this energy regime is the $^{222}$Rn daughter $^{214}$Pb in the detector. The amount of $^{222}$Rn was estimated to be $8.2\pm 0.5$ mBq from the observed rate of $^{214}$Bi-$^{214}$Po consecutive decays. Given the measured decay rate, the expected number of background events in the signal region from this decay alone is estimated to be $5.3\pm 0.5$ events. The concentration of krypton in the xenon was measured to be $<$2.7 ppt [@Abe:2013tc], and thus background from $^{85}$Kr is negligible in this analysis. The background from $2\nu \beta^{-} \beta^{-}$ of $^{136}$Xe ($T_{1/2}=2\times 10^{21}$ years [@Agashe:2014kda]) is smaller than the $^{214}$Pb background by a factor of 7 and is negligible for this analysis.
Fig. \[fig:data-bg-overlay\] shows the energy distribution of the observed events overlaid with the $^{214}$Pb background simulation after all cuts except for the energy window cut. The energy spectrum after cuts in data is consistent with the expected $^{214}$Pb background spectrum. Although the total number of observed events is 26% larger than that of the expected $^{214}$Pb background in the energy range between 24 keV and 136 keV but outside the signal window, this tension corresponds to only 1.4$\sigma$ given the limited statistics. The excess in the highest energy bin is due to a gamma ray from $^{\rm 131m}$Xe in the liquid xenon, and thus this energy bin is not included in the calculation. We derive a conservative limit under the assumption of the $^{214}$Pb background constrained by the $^{214}$Bi-$^{214}$Po measurement.
![Energy distribution of the observed events (black) overlaid with the $^{214}$Pb background simulation (green) after all cuts except for the energy window cut. The vertical dashed lines indicate the 56-72 keV signal window.[]{data-label="fig:data-bg-overlay"}](Fig3.eps){height="60mm"}
A lower limit on the 2$\nu$ECEC half-life is derived using the following Bayesian method that also accounts for systematic uncertainties to calculate the conditional probability distribution for the decay rate as follows: $$\begin{aligned}
P (\Gamma| n_{\rm obs}) &=& \iiiint \frac{{\rm e}^{-(\Gamma \lambda \epsilon +b) (1+\delta)}
((\Gamma \lambda \epsilon +b) (1+\delta))^{n_{\rm obs}}}{n_{\rm obs}!} \nonumber \\
& & \times P(\Gamma) P(\lambda) P(\epsilon) P(b) P(\delta)
{\rm d} \lambda {\rm d} \epsilon {\rm d} b {\rm d} \delta\end{aligned}$$ where $\Gamma$ is the decay rate, $n_{\rm obs}$ is the observed number of events, $\lambda$ is the detector exposure including the abundance of $^{124}$Xe, $\epsilon$ is the detection efficiency, $b$ is the expected number of background events, and $\delta$ is a parameter representing the systematic uncertainty in the event selection, which affects both signal and background. The prior probability for the decay rate, $P(\Gamma)$, is 1 for $\Gamma \ge 0$ and 0 otherwise. The prior probability distributions incorporating systematic uncertainties in the detector exposure $P(\lambda)$, detection efficiency $P(\epsilon)$, background $P(b)$, and event selection $P(\delta)$ are assumed to follow split normal distributions centered at the nominal values, with separate standard deviations on the positive and negative sides, since some error sources are found to have a different impact on the positive versus the negative side of the distribution center, as described below.
Table \[table:systematics\] summarizes the systematic uncertainties in exposure, detection efficiency, and event selection. The systematic uncertainty in the detector exposure is dominated by the uncertainty in the abundance of $^{124}$Xe in the xenon. A sample was taken from the detector and its isotope composition was measured at the Geochemical Research Center, the University of Tokyo, using a modified VG5400/MS-III mass spectrometer [@Bajo:2011]. The result is consistent with that of natural xenon in air, and we treat the uncertainty in that measurement as a systematic error. The systematic uncertainty in the detection efficiency is estimated from comparisons between data and MC simulation for $^{241}$Am (60 keV $\gamma$-ray) calibration data at various positions within the fiducial volume. The systematic uncertainty in the energy scale is evaluated to be $\pm 5\%$, summing up in quadrature the uncertainties from the nonlinearity of the scintillation yield ($\pm 3.5\%$), position dependence ($\pm 2\%$), and time variation ($\pm 3\%$). When the number of photons generated per unit energy deposited in the simulation is changed by this amount, the signal efficiency changes by $\pm ^{0}_{8.6}\%$. Since the energy cut is applied on both the lower and upper sides, either increasing or decreasing the number of photons in the MC reduces the signal efficiency. The energy resolution in the calibration data is found to be 12% worse than that in the simulation. The uncertainty due to this difference is evaluated by worsening the energy resolution in the simulation, which leads to a $5.3\%$ reduction in signal efficiency.
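The quoted $\pm 5\%$ energy-scale figure is the quadrature sum of the three contributions listed above; as a quick check:

```python
from math import sqrt

# nonlinearity (3.5%), position dependence (2%), time variation (3%)
components = [3.5, 2.0, 3.0]
energy_scale_unc = sqrt(sum(c ** 2 for c in components))
print(round(energy_scale_unc, 1))  # 5.0, i.e. the quoted +-5%
```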
The uncertainty in modeling the scintillation decay constant as a function of energy is evaluated to be $\pm ^{1.5}_{0}$ ns, resulting in an uncertainty in the signal efficiency of $\pm ^{0}_{7.1}\%$. The radial position of the reconstructed vertex for the calibration data differs from the true source position by 5 mm, which causes a $6.7\%$ reduction in efficiency. For the timing balance and band-like pattern cuts, we evaluate the impact on the signal efficiency by again taking the difference between their acceptances for calibration data and for the respective simulations. The resulting change in signal efficiency is $\pm ^{3.0}_{0}\%$ for the timing balance cut, and $\pm 5.0\%$ for the band-like pattern cut.
----------------- -------------------------- ------------------
Item              Error source               Fractional
                                             uncertainty (%)
----------------- -------------------------- ------------------
Exposure          Abundance of $^{124}$Xe    $\pm 8.5$
                  Liquid xenon density       $\pm 0.5$
Efficiency        Energy scale               $\pm ^{0}_{8.6}$
                  Energy resolution          $\pm ^{0}_{5.3}$
                  Scintillation decay time   $\pm ^{0}_{7.1}$
Event selection   Fiducial volume cut        $\pm ^{0}_{6.7}$
                  Timing balance cut         $\pm ^{3.0}_{0}$
                  Band-like pattern cut      $\pm 5.0$
----------------- -------------------------- ------------------
: Summary of systematic uncertainties in exposure, detection efficiency, and event selection.[]{data-label="table:systematics"}
Finally, we calculate the 90% confidence level (CL) limit using the relation $$\frac{\int_{0}^{\Gamma_{\rm limit}} P(\Gamma|n_{\rm obs}) {\rm d} \Gamma}
{\int_{0}^{\infty} P(\Gamma|n_{\rm obs}) {\rm d} \Gamma} = 0.9$$ to obtain $$T_{1/2}^{2\nu 2K} \left (^{124}{\rm Xe} \right ) = \frac{\ln 2}{\Gamma_{\rm limit}} > 4.7 \times 10^{21} \ {\rm years}.$$ Note that the total systematic uncertainty worsens the obtained limit by 20%.
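A simplified numerical version of this limit calculation is sketched below: flat prior on the decay rate, Poisson likelihood, and systematic uncertainties neglected (the text states they worsen the limit by 20%). The inputs (5 observed events, 5.3 expected background events, 39 g of $^{124}$Xe, 132.0 days effective live time, 59.7% efficiency) come from the text; the molar mass value and the integration grid are our choices.

```python
import numpy as np
from math import log

N_A = 6.022e23
n_atoms = 39.0 / 123.9 * N_A            # 124Xe atoms in the fiducial volume
exposure = n_atoms * 132.0 / 365.25     # atom-years of effective live time
eff = 0.597                             # signal detection efficiency
n_obs, b = 5, 5.3                       # observed events, expected background

gamma = np.linspace(0.0, 1.0e-21, 200_000)   # decay-rate grid [1/yr]
mu = gamma * exposure * eff + b              # expected event count
log_post = n_obs * np.log(mu) - mu           # flat-prior Poisson posterior
post = np.exp(log_post - log_post.max())
cdf = np.cumsum(post)
cdf /= cdf[-1]
gamma_limit = gamma[np.searchsorted(cdf, 0.9)]   # 90% CL upper limit on rate
t_half_limit = log(2) / gamma_limit              # lower limit on half-life
print(f"T_1/2 > {t_half_limit:.1e} yr (systematics neglected)")
```

This simplified calculation gives roughly $5.7 \times 10^{21}$ years; degrading it by the quoted 20% systematic effect brings it close to the published $4.7 \times 10^{21}$ years.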
In addition, the fact that we do not observe a significant excess above background allows us to constrain 2$\nu$ECEC on $^{126}$Xe in the same manner. The fiducial volume contains 36 g of $^{126}$Xe, and the uncertainty in the abundance of $^{126}$Xe is estimated to be 12.1%; we obtain $T_{1/2}^{2\nu 2K} \left (^{126}{\rm Xe} \right ) > 4.3 \times 10^{21} \ {\rm years}$ at 90% CL.
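A quick consistency check: under the assumption (ours, not the paper's) that the selection efficiency and the observed and background counts are essentially identical for both isotopes, the limit simply scales with the number of target atoms in the fiducial volume.

```python
# masses from the text; molar masses (g/mol) are our rounded inputs
m_124, A_124 = 39.0, 123.9
m_126, A_126 = 36.0, 125.9
t_half_124 = 4.7e21          # years, 124Xe limit from the text

t_half_126 = t_half_124 * (m_126 / A_126) / (m_124 / A_124)
print(f"{t_half_126:.1e}")   # ~4.3e21 years, matching the quoted 126Xe limit
```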
The XMASS project uses a single phase liquid xenon detector with a natural abundance target. This straightforward technology offers easy scalability to larger detectors. The future XMASS-II detector will contain 10 tons of liquid xenon in its fiducial volume, and its expected sensitivity after 5 years of operation improves on the current limit by more than two orders of magnitude, assuming a background level of $3\times 10^{-5}$ day$^{-1}$kg$^{-1}$keV$^{-1}$ dominated by 2$\nu \beta^{-} \beta^{-}$ of $^{136}$Xe and $pp$+$^{7}$Be solar neutrinos.
Conclusions
===========
In conclusion, we have searched for 2$\nu$ECEC on $^{124}$Xe using an effective live time of 132.0 days of XMASS-I data in a fiducial volume containing 39 g of $^{124}$Xe. No significant excess over the expected background is found in the signal region, and we set a lower limit on its half-life of $4.7 \times 10^{21}$ years at 90% CL. The obtained limit rules out part of the theoretically expected range of half-lives. In addition, we obtain a lower limit on the $^{126}$Xe 2$\nu$ECEC half-life of $4.3 \times 10^{21}$ years at 90% CL. A future detector with XMASS-II characteristics would establish a path toward covering the whole range of half-lives obtained in the model calculations cited in the introduction.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Prof. Nagao and his group for the measurement of the isotope composition of the xenon used in the XMASS-I detector. We gratefully acknowledge the cooperation of the Kamioka Mining and Smelting Company. This work was supported by the Japanese Ministry of Education, Culture, Sports, Science and Technology, Grant-in-Aid for Scientific Research, JSPS KAKENHI Grant No. 19GS0204 and 26104004, and partially by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2011-220-C00006).
References {#references .unnumbered}
==========
[00]{} W. Buchmuller, R. D. Peccei, T. Yanagida, Ann. Rev. Nucl. Part. Sci. [**55**]{} (2005) 311.
J. Bernabeu, A. De Rujula, C. Jarlskog, Nucl. Phys. B [**223**]{} (1983) 15.
Z. Sujkowski, S. Wycech, Phys. Rev. C [**70**]{} (2004) 052501.
D. Frekers, hep-ex/0506002.
M. I. Krivoruchenko [*et al.*]{}, Nucl. Phys. A [**859**]{} (2011) 140.
J. Kotila, J. Barea, F. Iachello, Phys. Rev. C [**89**]{} (2014) 064319.
A. S. Barabash [*et al.*]{}, Nucl. Phys. A [**785**]{} (2007) 371.
A. S. Barabash [*et al.*]{}, Phys. Rev. C [**80**]{} (2009) 035501.
P. Belli [*et al.*]{}, Phys. Rev. C [**87**]{} (2013) 034607.
P. Belli [*et al.*]{}, Nucl. Phys. A [**930**]{} (2014) 195.
Y. M. Gavrilyuk [*et al.*]{}, Phys. Rev. C [**87**]{} (2013) 035501.
W. C. Haxton, G. J. Stephenson, Prog. Part. Nucl. Phys. [**12**]{} (1984) 409.
M. Hirsch [*et al.*]{}, Z. Phys. A [**347**]{} (1994) 151.
A. P. Meshik [*et al.*]{}, Phys. Rev. C [**64**]{} (2001) 035205.
J. Suhonen, O. Civitarese, Phys. Rept. [**300**]{} (1998) 123.
S. M. Bilenky, C. Giunti, Int. J. Mod. Phys. A [**30**]{} (2015) 1530001.
D.-M. Mei [*et al.*]{}, Phys. Rev. C [**89**]{} (2014) 014608.
N. Barros, J. Thurn, K. Zuber, J. Phys. G [**41**]{} (2014) 115105.
J. Suhonen, J. Phys. G [**40**]{} (2013) 075102.
O. A. Rumyantsev, M. H. Urin, Phys. Lett. B [**443**]{} (1998) 51.
S. Singh [*et al.*]{}, Eur. Phys. J. A [**33**]{} (2007) 375.
A. Shukla, P. K. Raina, P. K. Rath, J. Phys. G [**33**]{} (2007) 549.
M. Aunola, J. Suhonen, Nucl. Phys. A [**602**]{} (1996) 133.
Y. M. Gavrilyuk [*et al.*]{}, Phys. Part. Nucl. [**46**]{} (2015) 147.
Y. M. Gavrilyuk [*et al.*]{}, arXiv:1507.04520 \[nucl-ex\].
K. Abe [*et al.*]{}, XMASS Collaboration, Nucl. Instrum. Meth. A [**716**]{} (2013) 78.
N. Y. Kim [*et al.*]{}, XMASS Collaboration, Nucl. Instrum. Meth. A [**784**]{} (2015) 499.
This photoelectron yield is smaller than the value reported in Ref. [@Abe:2013tc; @Abe:2012az; @Abe:2012ut] since we changed a correction on the charge observed in the electronics. This correction is within the uncertainty reported earlier, $\pm1.2$PE/keV.
K. Abe [*et al.*]{}, XMASS Collaboration, Phys. Lett. B [**719**]{} (2013) 78.
K. Abe [*et al.*]{}, XMASS Collaboration, Phys. Lett. B [**724**]{} (2013) 46.
Y. Fukuda [*et al.*]{}, Super-Kamiokande Collaboration, Nucl. Instrum. Meth. A [**501**]{} (2003) 418.
M. Wang [*et al.*]{}, Chin. Phys. C [**36**]{} (2012) 1603.
S. Guatelli [*et al.*]{}, IEEE Trans. Nucl. Sci. [**54**]{} (2007) 585;\
S. Guatelli [*et al.*]{}, IEEE Trans. Nucl. Sci. [**54**]{} (2007) 594.
D. A. Nesterenko [*et al.*]{}, Phys. Rev. C [**86**]{} (2012) 044313.
H. Uchida [*et al.*]{}, XMASS Collaboration, Prog. Theor. Exp. Phys. (2014) 063C01.
S. Nakamura [*et al.*]{}, in Proceedings of the Workshop on Ionization and Scintillation Counters and Their Uses, vol. [**27,**]{} 2007.
K. Abe [*et al.*]{}, XMASS Collaboration, Phys. Rev. Lett. [**113**]{} (2014) 121301.
K. A. Olive [*et al.*]{}, Particle Data Group, Chin. Phys. C [**38**]{} (2014) 090001.
K.-I. Bajo [*et al.*]{}, Earth Planets Space [**63**]{} (2011) 1097.
---
abstract: |
A primary interest in dynamic inverse problems is to identify the underlying temporal behaviour of the system from outside measurements. In this work we consider the case where the target can be written as a decomposition of spatial and temporal basis functions and hence can be efficiently represented by a low-rank decomposition. We then propose a joint reconstruction and low-rank decomposition method based on the Nonnegative Matrix Factorisation to obtain the unknown from highly undersampled dynamic measurement data. The proposed framework allows for flexible incorporation of separate regularisers for spatial and temporal features. For the special case of a stationary operator, we can effectively use the decomposition to reduce the computational complexity and obtain a substantial speed-up. The proposed methods are evaluated for two simulated phantoms and we compare the obtained results to a separate low-rank reconstruction and subsequent decomposition approach based on the widely used principal component analysis.
**Keywords:** Nonnegative matrix factorisation, dynamic inverse problems, low-rank decomposition, variational methods\
**AMS Subject Classification:** 15A69, 15A23, 65K10
author:
- 'Simon Arridge[^1]'
- 'Pascal Fernsel[^2]'
- 'Andreas Hauptmann[^3]'
bibliography:
- 'references/references.bib'
- 'references/Inverse\_problems\_references\_2018.bib'
- 'references/dyntomo\_refs.bib'
- 'references/lowrank\_refs.bib'
title: 'Joint Reconstruction and Low-Rank Decomposition for Dynamic Inverse Problems'
---
Introduction {#sec:Introduction}
============
Several inverse problems are concerned with reconstruction of solutions in multiple physical dimensions such as space, time and frequency. Generally, such problems require very large datasets in order to satisfy conditions for accurate reconstruction, whereas in practice only subsets of such complete data can be measured. Furthermore, the information content of the solutions from such reduced data may be much less than suggested by the complete set. In these cases, regularisation in the reconstruction process is required to compensate for the reduced information content, for instance by correlating features between auxiliary physical dimensions.
In particular, dynamic inverse problems have gained considerable interest in recent years. This development is partly driven by the increase in computational resources and the possibility to handle large data sizes more efficiently, but also by novel and more efficient imaging devices enabling wide areas of application in medicine and industrial imaging. For instance in medical imaging, dynamic information is essential for accurate diagnosis of heart diseases or for applications in angiography to determine blood flow by injecting a contrast agent into the patient's bloodstream. But also in nondestructive testing and chemical engineering, tomographic imaging has become increasingly popular to monitor dynamic processes. The underlying problem in these imaging scenarios is often that a fine temporal sampling, i.e. in the discrete setting a large number of channels, is only possible under considerable restrictions to the sampling density at each time instance. This limitation often renders time-discrete (static) reconstructions insufficient. Additionally, an underlying problem in many dynamic applications is given by the specific temporal dynamics of the process, which are often non-periodic and hence prevent temporal binning approaches. Thus, it is essential to include the dynamic nature of the imaging task in the reconstruction process.
With increasing computational resources, it has become more feasible to address the reconstruction task as a fully dynamic problem in a spatio-temporal setting. In these approaches it is essential to include the dynamic information in some form into the reconstruction task. This can be done for instance by including a regularisation of the temporal behaviour as a penalty in a variational setting [@Schmitt2002; @Schmitt2002a]. Such approaches have been used in a wide variety of applications, such as magnetic resonance imaging [@feng2014golden; @lustig2006kt; @steeden2018real], X-ray tomography [@bubba2017shearlet; @niemi2015dynamic] and applications to process monitoring with electrical resistance tomography [@chen2018extended]. More advanced approaches aim to include a physical motion model and estimate the motion of the target from the measurements themselves. This can be done for instance by incorporating an image registration step into the reconstruction algorithm and reformulating the reconstruction problem as a joint motion-estimation and reconstruction task [@burger2017variational; @burger2018variational; @djurabekova2019application; @lucka2018enhancing]. Another possibility is the incorporation of an explicit motion model by metamorphosis as considered in [@Chen:2018aa; @Gris:2019aa].
In this work we consider another possibility to incorporate regularisation, and in particular temporal regularity, into the reconstruction task by assuming a low-dimensional representation of the unknown. This leads naturally to a low-rank description of the underlying inverse problem and is especially suitable to reduce the data size in cases where we have much fewer basis functions to represent the unknown than temporal samples. In a continuous setting, this yields the analysis of low-rank approximations in tensor products of Hilbert spaces, for which we refer the reader to [@KressnerUschmajew16; @Uschmajew13DoctoralThesis]. We rather focus on low-rank approximation methods in a discretised framework, which leads to the use of specific matrix factorisation approaches and their optimisation techniques. In particular, in this work we propose a joint reconstruction and decomposition in a variational framework using non-negative matrix factorisation, which naturally represents the physical assumption of nonnegativity of the dynamic target and allows for a variety of regularising terms on the spatial and temporal basis functions. Following this framework, we propose two algorithms that either jointly recover the reconstruction and the low-rank decomposition, or alternatively recover only the low-rank representation of the unknown without the need to construct the full spatio-temporal target in the reconstruction process. Here, the second approach effectively incorporates the dimension reduction and can, under certain assumptions on the forward operator, lead to a significant reduction in computational complexity. This can be particularly useful if one is only interested in the dynamics of the system and not the full reconstruction.
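As a concrete illustration of the decomposition idea (not the paper's actual algorithm, which additionally involves the forward operator and regularisers), the following sketch runs plain Lee-Seung multiplicative updates for the Frobenius-norm NMF on a toy spatio-temporal matrix; the function name and toy data are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(X, rank, n_iter=2000, eps=1e-12):
    """Factorise a nonnegative matrix X (pixels x time frames) as X ~ B C,
    with spatial basis B and temporal coefficients C, via Lee-Seung
    multiplicative updates for the Frobenius cost."""
    m, n = X.shape
    B = rng.random((m, rank)) + eps
    C = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        C *= (B.T @ X) / (B.T @ B @ C + eps)   # update temporal factors
        B *= (X @ C.T) / (B @ C @ C.T + eps)   # update spatial factors
    return B, C

# rank-2 toy problem: two spatial patterns with distinct time courses
X = np.outer([1, 0, 2.0], [1, 1, 0, 0.0]) + np.outer([0, 3, 1.0], [0, 0, 1, 1.0])
B, C = nmf(X, rank=2)
print(np.linalg.norm(X - B @ C))  # residual; X has an exact rank-2 factorisation
```

The multiplicative form of the updates preserves nonnegativity of both factors, which is the property the joint framework exploits.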
This paper is organised as follows. In Section \[sec:Framework\] we describe our setting for dynamic inverse problems and then discuss low-rank decomposition approaches, specifically principal component analysis (PCA) and non-negative matrix factorisation (NMF), the latter being the focus of this study. As a baseline, we first present a low-rank reconstruction method followed by either of the decomposition methods. We then present the proposed framework of joint reconstruction and decomposition with the NMF. In particular, we prove that the proposed framework leads to a monotonic decrease of the cost functions. In Section \[sec:Application to Dynamic CT\] we evaluate the algorithms under consideration in the use case of dynamic X-ray tomography with two simulated phantoms of different characteristics. We conclude the study in Section \[sec:Conclusion and Outlook\] with some thoughts on extensions of the proposed framework.
Reconstruction and Low-rank Decomposition Methods {#sec:Framework}
=================================================
A Setting for Dynamic Inverse Problems {#subsec:Multi--Channel Inverse Problem}
--------------------------------------
In this work, we consider a general multi-dimensional inverse problem, where the unknown $x(s,t)$ is defined on a spatial domain $\Omega_1 \subset {\mathbb{R}}^{d_1}$ with dependence on a secondary variable $t\in {\mathbb{R}}_{\geq 0}$ defined in a bounded interval $\mathcal{T}:=[0,T]$. This setting covers quite general applications in which the secondary variable can have other physical interpretations, notably wavelength for hyper-spectral problems; to fix ideas, however, we henceforth take $t$ to represent time and our application to be that of *dynamic inverse problems*. Consequently, the underlying equation of the resulting inverse problem can be written in the form $$\label{eq:dynamic IP General Continuous}
\mathcal{A}(x(s,t);t) = y(\sigma,t) \quad \text{for } t \in \mathcal{T},$$ where $\mathcal{A}:\Omega_1\times \mathcal{T}\to \Omega_2\times \mathcal{T}$ is a time-dependent bounded linear operator between suitable Hilbert spaces and $\Omega_2\subset{\mathbb{R}}^{d_2}$ is the domain of the measurement data. We primarily consider the non-stationary case here, where the forward operator $\mathcal{A}$ depends on $t$. In the special case of a stationary operator, $\mathcal{A}(\cdot;t) \equiv \mathcal{A}$ for all $t\in \mathcal{T}$, where the operator follows the same sampling process at each $t$, computational savings become possible. The resulting implications will be discussed in Section \[subsec:Complexity Reduction for Stationary Operator\].
Furthermore, the underlying assumption in this work is that the unknown $x:\Omega_1\times\mathcal{T}\to{\mathbb{R}}_{\geq 0 }$ can be decomposed into a set of spatial basis functions $b^k:\Omega_1\to {\mathbb{R}}_{\geq 0}$ and channel (temporal) basis functions $c^k:\mathcal{T}\to{\mathbb{R}}_{\geq 0}$ for $1 \leq k\leq K$. Then the unknown can be represented by the decomposition $$\label{eqn:continuousDeompostion}
x(s,t) = \sum_{k=1}^K b^k(s) c^k(t).$$ This formulation naturally gives rise to the reconstruction and low-rank decomposition framework used to extract the relevant features given by $b^k$ and $c^k$. An illustration of a possible phantom represented by is shown in Figure \[fig:dynSheppIllustration\].
We intentionally keep the formulation general here to allow for applications other than dynamic inverse problems, such as multi-spectral imaging. Nevertheless, the reconstruction and feature extraction framework derived in this paper will be used in Section \[sec:Application to Dynamic CT\] for the specific application of dynamic computed tomography.
![Illustration of a phantom that can be represented by the decomposition in . The phantom consists of $K=3$ components: the background and two dynamic components with periodically changing intensity (left and right plot). As such, this phantom can be efficiently represented by a low-rank decomposition considered in this study.[]{data-label="fig:dynSheppIllustration"}](figures/dynSheppFinalImage_blue_Bigger.png){width="\textwidth"}
Furthermore, a suitable discretisation of the continuous formulation is needed to introduce the feature extraction methods in the forthcoming sections. Let us first discretise the secondary variable, such that $t\in{\mathbb{N}}$ with $1\leq t \leq T$. For the spatial domain, we assume a vectorised representation such that the resulting unknown can be represented as a matrix $X\in{\mathbb{R}}^{N\times T}$, which leads to the matrix equation $$\label{eq:dynamic IP General Discrete}
A_t X_{\bullet, t} = Y_{\bullet, t} \quad \text{for} \quad 1\leq t\leq T,$$ where $A_t\in {\mathbb{R}}^{M\times N} $ is the discretised forward operator, $X_{\bullet, t}$ the $t$-th column of $X$ and $Y_{\bullet, t}$ the $t$-th column of the data matrix $Y\in {\mathbb{R}}^{M\times T}.$ Analogously, we will write $M_{n, \bullet}$ for the $n$-th row of an arbitrary matrix $M.$
Suitable restrictions to the matrices in Equation will be made in the following sections to ensure the applicability of the considered frameworks and, if possible, to properly represent the decomposition .
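As a minimal numerical sketch of the discretised setting, the decomposition into $K$ spatial and temporal basis vectors can be checked with a few lines of NumPy; all dimensions and values here are illustrative, not those of any experiment in the paper.

```python
import numpy as np

# Toy sketch of the discretised decomposition X = B C (illustrative sizes):
# N pixels, T time steps, K components; all matrices are nonnegative.
N, T, K = 100, 50, 3
rng = np.random.default_rng(0)

B = rng.random((N, K))   # columns: spatial basis functions b^k
C = rng.random((K, T))   # rows: temporal (channel) basis functions c^k

X = B @ C                # rank-K spatio-temporal unknown, size N x T

assert X.shape == (N, T)
assert np.linalg.matrix_rank(X) == K   # generically exactly rank K
```

Each column $X_{\bullet, t}$ then plays the role of the (vectorised) image at time step $t$ in the matrix equation above.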
Feature Extraction Methods {#subsec:Feature Extraction Methods}
--------------------------
In this section, we introduce two feature extraction methods, namely **principal component analysis** (PCA) and **non-negative matrix factorisation** (NMF). These approaches are used to compute the latent components of the reconstruction $X.$ The NMF will be used in Section \[subsec:Joint Reconstruction and low–rank Decomposition\] to introduce a joint reconstruction and low-rank decomposition framework to tackle the problem stated in .
### Principal Component Analysis {#subsubsec: Principal Component Analysis}
Large and high-dimensional datasets demand modern data analysis approaches that reduce the dimensionality and increase the interpretability of the data while keeping the loss of information as low as possible. Many different techniques have been developed for this purpose, but PCA is one of the most widely used and goes back to [@pearson1901-PCA].
For a given matrix $X\in {\mathbb{R}}^{N\times T}$ with $N$ different observations of an experiment and $T$ features, the PCA is a linear orthogonal transformation given by the weights $C_{\tilde{k}, \bullet} = (C_{\tilde{k}1}, \dots, C_{\tilde{k}T})$ with $C \in {\mathbb{R}}^{\tilde{K}\times T},$ which transforms each observation $X_{n, \bullet}$ to *principal component scores* given by $B_{n\tilde{k}} \coloneqq \sum_t X_{nt} C_{\tilde{k}t} $ with $B=[B_{\bullet, 1}, \dots, B_{\bullet, \tilde{K}}]\in {\mathbb{R}}^{N\times \tilde{K}}$ and $\tilde{K}=\min(N-1, T),$ such that
- the sample variance $\operatorname{Var}(B_{\bullet, \tilde{k}})$ is maximised for all $\tilde{k},$
- each row $C_{\tilde{k}, \bullet}$ is constrained to be a unit vector
- and the sample covariance $\operatorname{cov}(B_{\bullet, k}, B_{\bullet, \tilde{k}}) = 0 $ for $k\neq \tilde{k}.$
Together with the usual assumption that the number of observations is higher than the underlying dimension, this leads to $\tilde{K}=T$ and the full transformation $B = XC^\intercal,$ where $C$ is an orthogonal matrix. The $t$-th column vector $(C_{t, \bullet})^\intercal$ defines the $t$-th *principal direction* and is an eigenvector of the covariance matrix $S=X^\intercal X/(N-1).$ The corresponding $t$-th largest eigenvalue of $S$ denotes the variance of the $t$-th principal component.
The above transformation is equivalent to the factorisation of the matrix $X$ given by $$\label{eq:PCA:Factorisation}
X = BC,$$ which allows us to decompose each observation into the principal components, such that $X_{n, \bullet} = \sum_{t=1}^T B_{nt} C_{t, \bullet}.$ Therefore, we also have $X = \sum_{t=1}^{T} B_{\bullet, t} C_{t, \bullet},$ similarly to the continuous case in .
Furthermore, it is possible to obtain an approximation of the matrix $X$ by truncating the sum at the first $K<T$ principal components for all $n,$ which yields a rank $K$ matrix $X^{(K)}$ given by $$X^{(K)} = \sum_{k=1}^K B_{\bullet, k} C_{k, \bullet}.$$ By the Eckart–Young–Mirsky theorem [@GolubVanLoan13-EckardYoung], $X^{(K)}$ is the best rank $K$ approximation of $X$ in the sense that it minimises the discrepancy $\Vert X - X^{(K)}\Vert$ in both the Frobenius and the spectral norm.
One typical approach to compute the PCA, which will be used in this work, is based on the singular value decomposition (SVD) of the data matrix $X=U\Sigma V^\intercal$. Setting $B\coloneqq U\Sigma$ and $C\coloneqq V^\intercal$ already gives the desired factorisation in .
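The SVD-based factorisation and the Eckart–Young–Mirsky property can be verified numerically on synthetic data; the following NumPy sketch uses made-up dimensions.

```python
import numpy as np

# PCA via the SVD of a (centred) data matrix X = U S V^T, with scores
# B = U S and weights C = V^T, so that X = B C exactly. Data are synthetic.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
X -= X.mean(axis=0)                       # centre the observations

U, s, Vt = np.linalg.svd(X, full_matrices=False)
B, C = U * s, Vt
assert np.allclose(X, B @ C)              # exact factorisation

# Truncation to K components gives the best rank-K approximation; its
# Frobenius error equals the norm of the discarded singular values.
K = 3
X_K = B[:, :K] @ C[:K, :]
assert np.isclose(np.linalg.norm(X - X_K), np.sqrt(np.sum(s[K:] ** 2)))
```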
### Non-negative Matrix Factorisation {#subsubsec:Nonnegative Matrix Factorisation}
NMF, originally introduced as positive matrix factorisation by Paatero and Tapper in 1994 [@paatero94NMFIntro], is an established tool for obtaining low-rank approximations of nonnegative data matrices. It has been widely used in the machine learning and data mining communities for compression, basis learning, clustering and feature extraction in high-dimensional classification problems, with applications in music analysis [@fevotte09NMFMusicAnalysis], document clustering [@Ding05K-Means-Equivalence] and medical imaging problems such as tumour typing in matrix-assisted laser desorption/ionisation (MALDI) imaging in the field of bioinformatics [@leuschner18].
Different from the approach above, the NMF enforces nonnegativity constraints on the factor matrices without any orthogonality restrictions. This makes the NMF the method of choice for application fields where the underlying physical model requires the solution to be nonnegative, assuming that each datapoint can be described as a superposition of some unknown characteristic features of the dataset. The NMF makes it possible to extract these features while constraining the matrix factors to have nonnegative entries, which simplifies their interpretation. These assumptions hold for many application fields, including the ones mentioned above and, in particular, our problem of dynamic computed tomography, where the measurements naturally consist of the nonnegative absorption of photons. Mathematically, the basic NMF problem can be formulated as follows: for a given nonnegative matrix $X\in {\mathbb{R}}_{\geq 0}^{N\times T},$ find nonnegative matrices $B\in {\mathbb{R}}_{\geq 0}^{N\times K}$ and $C\in {\mathbb{R}}_{\geq 0}^{K\times T}$ with $K\ll \min\{N, T\},$ such that $$X\approx BC.$$ The factorisation allows us to approximate the rows $X_{n, \bullet}$ and columns $X_{\bullet, t}$ of the data matrix as superpositions of the $K$ columns $B_{\bullet, k}$ of $B$ and rows $C_{k, \bullet}$ of $C$ respectively, such that $X_{n,\bullet} \approx \sum_{k=1}^{K} B_{nk}C_{k,\bullet} $ and $ X_{\bullet, t} \approx \sum_{k=1}^{K} C_{kt} B_{\bullet, k}.$ Similarly, it holds that $$X \approx BC = \sum_{k=1}^{K} B_{\bullet, k} C_{k, \bullet},$$ where the $K$ terms of the sum are rank-one matrices. Hence, the sets $\{ B_{\bullet, k} \}_k$ and $ \{ C_{k, \bullet} \}_k $ can be interpreted as a low-dimensional basis to approximate the data matrix, i.e. the NMF performs the task of basis learning with additional nonnegativity constraints.
The usual approach to compute the factorisation is to define a suitable discrepancy term $\mathcal{D}_{\text{NMF}},$ chosen according to the noise assumption of the underlying problem, and to reformulate the NMF as a minimisation problem. Typical discrepancies include the default case of the Frobenius norm, on which we will focus, the Kullback-Leibler divergence, the Itakura-Saito distance and other generalised divergences [@cichocki09bookNMF].
Furthermore, NMF problems are usually ill-posed due to the non-uniqueness of the solution [@Klingenberg09-NMFIllposedness] and require the application of suitable regularisation techniques. One common method is to include penalty terms in the minimisation problem, both to tackle the ill-posedness and to enforce desirable properties of the factorisation matrices. Typical examples range from $\ell_1, \ell_2$ and total variation regularisation terms [@lecharlierDeMol13NMFTV] to more problem-specific terms, which enforce additional orthogonality of the matrices or even allow supervised classification workflows if the NMF is used as a prior feature extraction method [@FM18; @leuschner18].
Hence, the general regularised NMF problem can be written as $$\label{eq:NMF Minimisation Problem}
\min_{B, C\geq 0} \mathcal{D}_{\text{NMF}}(X, BC) + \sum_{\ell = 1}^{L} \gamma_\ell \mathcal{P}_\ell(B, C) \eqqcolon \min_{B, C\geq 0} \mathcal{F}(B, C),$$ where $\mathcal{P}_\ell$ denote the penalty terms, $\gamma_\ell\geq 0$ the corresponding regularisation parameters and $\mathcal{F}$ the cost function of the NMF.
The optimisation approach considered in this work is based on the so-called Majorise-Minimisation principle and gives rise to multiplicative update rules for the matrices in , which automatically preserve the nonnegativity of the iterates provided that they are initialised nonnegative. For more details on this optimisation technique, we refer the reader to Appendix \[app:sec:Optimisation Techniques for NMF Problems\]. The idea of the feature extraction procedure based on the NMF can be well illustrated with the example from Figure \[fig:dynSheppIllustration\], which satisfies the decomposition assumption from . Here, the highlighted spatial regions change their intensities according to the given dynamics. The NMF allows a natural interpretation of the factorisation matrices $B$ and $C$ as the spatial and temporal basis functions in this case, as illustrated in Figure \[fig:NMFandCT\]. The column $X_{\bullet, t}$ of $X$ denotes the reconstruction of the $t$-th time step of the inverse problem in . The NMF thus separates the spatial and temporal features of the data: the matrix $B$ contains the spatial features in its columns, with the corresponding temporal features in the rows of $C.$
![Structure of the NMF in the context of the dynamic Shepp-Logan phantom as shown in Figure \[fig:dynSheppIllustration\]. Here, the nonnegative spatial and temporal basis functions can be naturally represented by the matrices $B$ and $C$. []{data-label="fig:NMFandCT"}](figures/NMFStructureNew.png){width="\textwidth"}
Separated Reconstruction and Low-rank Decomposition {#subsec:Separated Reconstruction and Low--rank Decomposition}
---------------------------------------------------
Let us first discuss a separated reconstruction and feature extraction approach for solving the inverse problem in ; that is, we first compute a reconstruction and subsequently perform the feature extraction with one of the previously discussed methods. We consider this method as the baseline for our comparison.
The reconstruction method considered for this separated framework involves a basic gradient descent approach together with a regularisation step and a subsequent total variation denoising, which will henceforth be referred to as `gradTV`. The details of the algorithm are provided in Algorithm \[alg:gradTV\]. In particular, we aim to compute solutions to the least squares problem and incorporate the low-rank assumption as an additional nuclear norm penalty on $X$, that is $$\min_{X\geq 0} \sum_{t=1}^{T} \| Y_{\bullet, t} - A_t X_{\bullet, t} \|^2_2 + \alpha \|X\|_*;$$ see e.g. [@Lingala2011; @Tremoulheac2014]. This can then be efficiently solved by a proximal gradient descent scheme with a soft thresholding of the singular values, hence enforcing the low-rank structure. Ideally, one would like to include the total variation regularisation as a penalty term, but as this tends to be computationally expensive for fine temporal sampling, we include it as a subsequent denoiser.
In practice, after a suitable initialisation of the reconstruction matrix, the gradient descent step is computed with a fixed, *a priori* defined stepsize $\rho_{\text{grad}}.$ For the proximal step, the truncated SVD of $X$ is computed and a soft thresholding of the singular values is performed with a fixed threshold $\rho_{\text{thr}}.$ Afterwards, we enforce the nonnegativity with a projection step on the reconstruction $X.$ When the stopping criterion is satisfied, a TV denoising algorithm[^4] based on [@GoldsteinOsher-TVDenoiser; @Tremoulheac14PhDThesis] with the corresponding parameter $\rho_{\text{TV}}$ is applied.
**Input:** $\rho_{\text{grad}}, \rho_{\text{thr}}, \rho_{\text{TV}} >0$; **Initialise:** $X$

1. $X_{\bullet, t} \gets X_{\bullet, t} - \rho_{\text{grad}} (A_t^\intercal A_tX_{\bullet, t} - A_t^\intercal Y_{\bullet, t}) \ \ \ \text{for all}\ t$
2. $(U,\Sigma, V) \gets \textsc{SVD}(X)$
3. $\Sigma \gets \textsc{SoftThresh}_{\rho_{\text{thr}}}(\Sigma)$
4. $X \gets U\Sigma V^\intercal$
5. $X\gets \max(X, 0) $
6. Repeat steps 1–5 until the stopping criterion is met, then $X \gets \textsc{TVDenoiser}_{\rho_{\text{TV}}}(X)$

**Output:** $X$
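A minimal NumPy sketch of this iteration (gradient step, singular value soft thresholding, nonnegativity projection, omitting the final TV denoising stage) may clarify the structure; all operators, sizes and step sizes below are illustrative, not the values used in the paper.

```python
import numpy as np

# Sketch of the gradTV-style loop without the final TV denoiser:
# gradient step on the least-squares term, soft thresholding of the
# singular values (proximal step for the nuclear norm), then projection
# onto the nonnegative orthant. Sizes and parameters are made up.
rng = np.random.default_rng(2)
M, N, T = 30, 40, 25
A = [rng.random((M, N)) for _ in range(T)]      # one operator A_t per step
X_true = np.maximum(rng.standard_normal((N, 3)) @ rng.standard_normal((3, T)), 0)
Y = np.column_stack([A[t] @ X_true[:, t] for t in range(T)])

rho_grad, rho_thr = 1e-4, 0.1
X = np.zeros((N, T))
for _ in range(50):                             # fixed iteration budget
    for t in range(T):                          # column-wise gradient step
        X[:, t] -= rho_grad * (A[t].T @ (A[t] @ X[:, t]) - A[t].T @ Y[:, t])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U * np.maximum(s - rho_thr, 0)) @ Vt   # soft-threshold singular values
    X = np.maximum(X, 0)                        # nonnegativity projection

assert X.shape == (N, T) and (X >= 0).all()
```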
After the reconstruction procedure given by Algorithm \[alg:gradTV\], we perform the feature extraction on the reconstruction $X$ via both the PCA and the NMF, and call the approaches `gradTV_PCA` and `gradTV_NMF`, respectively.
For `gradTV_PCA`, we simply compute the PCA of $X$ based on its SVD. Concerning the method `gradTV_NMF`, we consider the standard model $$\label{eq:NMF model gradTV_NMF}
\min_{B, C\geq 0} \Vert X - BC\Vert_F^2 + \dfrac{\tilde{\mu}_C}{2} \Vert C \Vert_F^2$$ with the parameter $\tilde{\mu}_C.$ The $\ell_2$ regularisation penalty term on $C$ is motivated by our application in Section \[sec:Application to Dynamic CT\]. The corresponding multiplicative algorithms to solve are well known [@demol12; @FM18] and are a special case of the update rules derived in the next section.
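For illustration, the multiplicative updates for this model are a small variant of the classical Lee–Seung rules; the following NumPy sketch absorbs constant factors of the penalty into the parameter `mu_C` and uses made-up data and sizes.

```python
import numpy as np

# Multiplicative updates for min_{B,C>=0} ||X - BC||_F^2 + (mu_C/2)||C||_F^2:
# Lee-Seung-style rules with an extra mu_C * C term in the denominator of
# the C update (constant factors absorbed into mu_C). Data are synthetic.
rng = np.random.default_rng(3)
N, T, K, mu_C = 60, 40, 4, 0.1
X = rng.random((N, K)) @ rng.random((K, T))     # nonnegative rank-K data

B = rng.random((N, K)) + 0.1                    # strictly positive init
C = rng.random((K, T)) + 0.1
eps = 1e-12                                     # guards against division by zero
for _ in range(200):
    B *= (X @ C.T) / (B @ C @ C.T + eps)
    C *= (B.T @ X) / (B.T @ B @ C + mu_C * C + eps)

assert (B > 0).all() and (C > 0).all()          # nonnegativity preserved
assert np.linalg.norm(X - B @ C) < np.linalg.norm(X)
```

Note how the multiplicative form keeps all entries positive without any explicit projection, which is the property exploited throughout this section.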
Joint Reconstruction and Low-rank Decomposition {#subsec:Joint Reconstruction and low--rank Decomposition}
-----------------------------------------------
Instead of the previously discussed separated reconstruction, we now aim to include the feature extraction in the reconstruction procedure. This gives rise to a joint reconstruction and low-rank decomposition approach based on the NMF, rather than one based on a low-rank plus sparsity decomposition [@Candes2011; @Tao2013; @Yuan2013]. The basic idea of the method is to incorporate the reconstruction procedure for the inverse problem in into the NMF workflow. To do this, we additionally assume that $A_t\in {\mathbb{R}}_{\geq 0}^{M\times N}, Y\in {\mathbb{R}}_{\geq 0}^{M\times T}$ and $X\in {\mathbb{R}}_{\geq 0}^{N\times T}$ to ensure the desired nonnegativity of the factorisation matrices $B$ and $C,$ which corresponds to the assumptions of the decomposition in . The main motivation is that this joint approach allows the reconstruction process to exploit the underlying latent features of the dataset, which can enhance the quality of the reconstructions by enabling separate regularisation of temporal and spatial features.
This can be achieved by including a discrepancy term $\mathcal{D}_{\text{IP}}(Y_{\bullet, t}, A_t X_{\bullet, t})$ for the inverse problem in the NMF cost function in . Together with some possible penalty terms for the reconstruction $X$, this leads to the model $$\label{eq:NMF model BC-X General}
\min_{B, C, X\geq 0} \mathcal{D}_{\text{IP}}(Y_{\bullet, t}, A_t X_{\bullet, t}) + \alpha \mathcal{D}_{\text{NMF}}(X, BC) + \sum_{\ell = 1}^{L} \gamma_\ell \mathcal{P}_\ell(B, C, X),$$ with $\alpha\geq 0$ for the joint reconstruction and low-rank decomposition problem, which we will call `BC-X`. Furthermore, we can enforce $X\coloneqq BC$ as a hard constraint, such that the reconstruction matrix will have at most rank $K.$ In this case, the discrepancy $\mathcal{D}_{\text{NMF}}$ vanishes and we end up with the model `BC`: $$\label{eq:NMF model BC General}
\min_{B, C\geq 0} \mathcal{D}_{\text{IP}}(Y_{\bullet, t}, A_t (BC)_{\bullet, t}) + \sum_{\ell = 1}^{L} \gamma_\ell \mathcal{P}_\ell(B, C).$$
### Considered Models {#subsubsec:Considered NMF Models}
For both models and , we use the standard Frobenius norm for the discrepancy terms $\mathcal{D}_{\text{NMF}}$ and $\mathcal{D}_{\text{IP}}.$ Furthermore, the optimisation method discussed in Section \[subsubsec:Algorithms\] allows us to include a variety of penalty terms in the cost function. This makes it possible to construct suitably regularised models and to enforce additional properties on the matrices depending on the specific application. In this work, we consider standard $\ell_1$ and $\ell_2$ regularisation terms on each matrix and an isotropic total variation penalty on the matrix $B.$ The latter is motivated by our considered application, as it denoises the spatial features and thus also the reconstruction matrix. Hence, we will focus on the following NMF models in the remainder of this work: $$\tag*{\texttt{BC-X}} \label{eq:NMF model BC-X}
\begin{aligned}
&\scalebox{0.9}{\text{$\displaystyle\min_{B, C, X\geq 0} \Bigg\{ \sum_{t=1}^{T} \frac{1}{2} \Vert A_t X_{\bullet, t} - Y_{\bullet, t} \Vert_2^2 + \frac{\alpha}{2} \Vert BC - X \Vert_F^2 + \lambda_{B} \Vert B \Vert_1 + \frac{\mu_B}{2} \Vert B\Vert_F^2 $ }} \\
&\hspace{10ex}\scalebox{0.9}{\text{$\displaystyle + \lambda_{C} \Vert C \Vert_1 + \frac{\mu_C}{2} \Vert C\Vert_F^2 + \lambda_X \Vert X \Vert_1 + \frac{\mu_X}{2} \Vert X\Vert_F^2 + \dfrac{\tau}{2} \operatorname{TV}(B)\Bigg\}, $ }}
\end{aligned}$$ $$\tag*{\texttt{BC}} \label{eq:NMF model BC}
\begin{aligned}
&\min_{B, C \geq 0} \Bigg\{ \sum_{t=1}^{T} \frac{1}{2} \Vert A_t (BC)_{\bullet, t} - Y_{\bullet, t} \Vert_2^2 \!+\! \lambda_{C} \Vert C \Vert_1 + \frac{\mu_{C}}{2} \Vert C\Vert_F^2 + \lambda_{B} \Vert B \Vert_1\\
&\hspace*{9ex} + \frac{\mu_{B}}{2} \Vert B\Vert_F^2 + \dfrac{\tau}{2} \operatorname{TV}(B) \Bigg\}.
\end{aligned}$$ The regularisation parameters $ \alpha, \lambda_{C}, \mu_{C}, \lambda_{B}, \mu_{B}, \lambda_{X}, \mu_{X}, \tau \geq 0, $ are chosen *a priori*. Furthermore, $ \Vert \cdot \Vert_F $ denotes the Frobenius norm, $ \Vert M \Vert_1\coloneqq \sum_{ij} \vert M_{ij} \vert $ the 1-norm for matrices $ M $ and $ \operatorname{TV}(\cdot) $ is the following smoothed isotropic total variation [@defrise11TV; @FM18; @lecharlierDeMol13NMFTV].
\[def:TV\] The total variation of a matrix $ B\in \mathbb{R}^{N\times K} $ is defined as $$\operatorname{TV}(B) \coloneqq \sum_{k=1}^K \sum_{n=1}^N \vert \nabla_{nk}B\vert \coloneqq \sum_{k=1}^K \sum_{n=1}^N \sqrt{\varepsilon_{\operatorname{TV}}^2 + \sum_{\ell\in \mathcal{N}_n} (B_{nk}-B_{\ell k})^2},$$ where $ \varepsilon_{\operatorname{TV}}>0 $ is a small positive constant and $ \mathcal{N}_n $ are index sets referring to spatially neighboring pixels.
A typical example for the neighbourhood of the pixel $ (0,0) $ in two dimensions is $ \mathcal{N}_{(0,0)} = \{ (1,0), (0,1) \} $ to get an estimate of the gradient components in both directions of the axes. The parameter $ \varepsilon_{\operatorname{TV}} $ ensures the differentiability of the TV penalty term.
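For intuition, the smoothed TV of Definition \[def:TV\] can be computed in a few lines; the sketch below assumes the forward-difference neighbourhood described above with replicated boundary values, applied to a column of $B$ reshaped to an image.

```python
import numpy as np

# Smoothed isotropic TV: |grad_n B| = sqrt(eps^2 + sum of squared forward
# differences), with neighbourhood N_n = {right, down} and replicated
# boundary values (an illustrative boundary choice).
def smoothed_tv(img, eps_tv=1e-3):
    dx = np.diff(img, axis=1, append=img[:, -1:])   # horizontal differences
    dy = np.diff(img, axis=0, append=img[-1:, :])   # vertical differences
    return np.sum(np.sqrt(eps_tv**2 + dx**2 + dy**2))

flat = np.ones((8, 8))                # constant image
edge = np.zeros((8, 8)); edge[:, 4:] = 1.0   # image with a jump

# A constant image has TV equal to (number of pixels) * eps_tv; an image
# with a jump has a strictly larger TV.
assert np.isclose(smoothed_tv(flat), 64 * 1e-3)
assert smoothed_tv(flat) < smoothed_tv(edge)
```

The $\varepsilon_{\operatorname{TV}}$ offset is visible here: even the constant image has a small positive TV, which is the price paid for differentiability.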
In the following section, we will present the multiplicative update rules for the NMF models \[eq:NMF model BC-X\] and \[eq:NMF model BC\] and derive the algorithms in Appendix \[app:sec:Derivation of the Algorithms\] based on the Majorise-Minimisation principle.
### Algorithms {#subsubsec:Algorithms}
In this section, we present in Theorems \[theorem:Algorithm BC-X\] and \[theorem:Algorithm BC\] the multiplicative algorithms for the NMF problems \[eq:NMF model BC-X\] and \[eq:NMF model BC\]. As mentioned in Section \[subsubsec:Nonnegative Matrix Factorisation\], the multiplicative structure of the iteration scheme automatically ensures the nonnegativity of the matrices $B$ and $C$ as long as they are initialised nonnegative. The derivation of such algorithms in this work is based on the so-called Majorise-Minimisation principle. The main idea of this approach is to replace the considered NMF cost function $\mathcal{F}$ with a suitable auxiliary function $\mathcal{Q}_{\mathcal{F}}$, whose minimisation is much easier to handle and leads to a monotone decrease of $\mathcal{F}.$ Furthermore, specific construction techniques for these *surrogate functions* lead to the desired multiplicative update rules, which fulfil the nonnegativity constraint. We provide a short description of the main principles in Appendix \[app:sec:Optimisation Techniques for NMF Problems\]. A more detailed discussion of different construction methods for various kinds of discrepancy and penalty terms of $\mathcal{F}$ can be found in the survey paper [@FM18].
For better readability, we present only the main results here; a detailed construction of the surrogate functions as well as the derivation of the algorithms for both cost functions \[eq:NMF model BC-X\] and \[eq:NMF model BC\] can be found in Appendix \[app:sec:Derivation of the Algorithms\]. Owing to the construction of a suitable surrogate function for the TV penalty term (see Appendix \[app:sec:Derivation of the Algorithms\] and [@FM18] for more details), we first introduce the matrices $ P(B), Z(B)\in \mathbb{R}_{\geq 0}^{N\times K} $ as $$\begin{aligned}
P(B)_{n k} &\coloneqq \dfrac{1}{\vert \nabla_{nk} B \vert} \sum_{\ell \in \mathcal{N}_n} 1 + \sum_{\ell \in \bar{\mathcal{N}}_n} \dfrac{1}{\vert \nabla_{\ell k} B \vert}, \label{eq:TV:MatrixP}\\
Z(B)_{n k} &\coloneqq \dfrac{1}{P(B)_{n k}} \left ( \dfrac{1}{\vert \nabla_{nk} B \vert} \sum_{\ell \in {\mathcal{N}}_n} \dfrac{B_{n k} + B_{\ell k}}{2} + \sum_{\ell \in \bar{\mathcal{N}}_n} \dfrac{B_{n k} + B_{\ell k}}{2 \vert \nabla_{\ell k} B \vert} \right ), \label{eq:TV:MatrixZ}
\end{aligned}$$ where $ \bar{\mathcal{N}}_n $ is the set of the so-called *adjoint* neighbourhood pixels, which is given by the relation $$\ell\in \bar{\mathcal{N}}_n \Leftrightarrow n\in \mathcal{N}_\ell.$$ Furthermore, we write ${\bm{\mathit{1}}}_{M\times N}$ for an $M\times N$ matrix with ones in every entry.
We then obtain the two algorithms for both models under consideration. First for the \[eq:NMF model BC-X\] model that jointly obtains the reconstruction $X$ and the decomposition:
\[theorem:Algorithm BC-X\] For $A_t\in {\mathbb{R}}_{\geq 0}^{M\times N}, Y\in {\mathbb{R}}_{\geq 0}^{M\times T}$ and initialisations $ X^{[0]}\in \mathbb{R}_{> 0}^{N\times T}, B^{[0]}\in \mathbb{R}_{> 0}^{N\times K}, C^{[0]}\in \mathbb{R}_{> 0}^{K\times T}, $ the alternating update rules $$\begin{aligned}
X_{\bullet, t}^{[d+1]} &= X_{\bullet, t}^{[d]} \circ \dfrac{A_t^\intercal Y_{\bullet, t} + \alpha B^{[d]}C_{\bullet, t}^{[d]}}{A_t^\intercal A_tX_{\bullet, t}^{[d]} + (\mu_{X} + \alpha)X_{\bullet, t}^{[d]} + \lambda_{X} {\bm{\mathit{1}}}_{N\times 1}} \\
B^{[d+1]} &= B^{[d]} \circ \dfrac{\alpha X^{[d+1]}{C^{[d]}}^\intercal + \tau P(B^{[d]}) \circ Z(B^{[d]})}{\alpha B^{[d]}C^{[d]}{C^{[d]}}^\intercal + \mu_{B} B^{[d]} + \lambda_{B} {\bm{\mathit{1}}}_{N\times K} + \tau B^{[d]} \circ P(B^{[d]}) } \\
C^{[d+1]} &= C^{[d]} \circ \dfrac{\alpha {B^{[d+1]}}^\intercal X^{[d+1]}}{\alpha {B^{[d+1]}}^\intercal B^{[d+1]} C^{[d]} + \mu_{C} C^{[d]} + \lambda_{C} {\bm{\mathit{1}}}_{K\times T}}
\end{aligned}$$ lead to a monotonic decrease of the cost function in \[eq:NMF model BC-X\].
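A compact NumPy sketch of these updates may help to see how they are applied in practice. The sketch below sets $\tau = 0$, so the TV-related matrices $P$ and $Z$ drop out; all sizes and parameter values are illustrative, not those of the paper's experiments.

```python
import numpy as np

# Sketch of the BC-X updates with tau = 0 (no TV term). The update order
# follows the theorem: X first, then B (using the new X), then C.
rng = np.random.default_rng(4)
M, N, T, K = 30, 50, 40, 3
alpha = 1.0
lam_X = mu_X = lam_B = mu_B = lam_C = mu_C = 0.01

A = [rng.random((M, N)) for _ in range(T)]        # nonnegative A_t
X_true = rng.random((N, K)) @ rng.random((K, T))  # nonnegative rank-K target
Y = np.column_stack([A[t] @ X_true[:, t] for t in range(T)])

def cost(X, B, C):
    # BC-X cost; since all matrices are nonnegative, ||.||_1 = entry sum.
    fit = sum(0.5 * np.linalg.norm(A[t] @ X[:, t] - Y[:, t]) ** 2 for t in range(T))
    return (fit + 0.5 * alpha * np.linalg.norm(B @ C - X) ** 2
            + lam_B * B.sum() + 0.5 * mu_B * np.linalg.norm(B) ** 2
            + lam_C * C.sum() + 0.5 * mu_C * np.linalg.norm(C) ** 2
            + lam_X * X.sum() + 0.5 * mu_X * np.linalg.norm(X) ** 2)

X = rng.random((N, T)) + 0.1                      # strictly positive inits
B = rng.random((N, K)) + 0.1
C = rng.random((K, T)) + 0.1
f0 = cost(X, B, C)
for _ in range(100):
    for t in range(T):
        num = A[t].T @ Y[:, t] + alpha * (B @ C[:, t])
        den = A[t].T @ (A[t] @ X[:, t]) + (mu_X + alpha) * X[:, t] + lam_X
        X[:, t] *= num / den
    B *= (alpha * X @ C.T) / (alpha * B @ C @ C.T + mu_B * B + lam_B)
    C *= (alpha * B.T @ X) / (alpha * B.T @ B @ C + mu_C * C + lam_C)

assert (X > 0).all() and (B > 0).all() and (C > 0).all()
assert cost(X, B, C) < f0                         # cost decreased overall
```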
Similarly, for the \[eq:NMF model BC\] model we obtain the update rules without constructing the matrix $X$ during the reconstruction process:
\[theorem:Algorithm BC\] For $A_t\in {\mathbb{R}}_{\geq 0}^{M\times N}, Y\in {\mathbb{R}}_{\geq 0}^{M\times T}$ and initialisations $ B^{[0]}\in \mathbb{R}_{> 0}^{N\times K}, C^{[0]}\in \mathbb{R}_{> 0}^{K\times T}, $ the alternating update rules $$\begin{aligned}
\scalebox{0.82}{\text{$B^{[d+1]}$ }} &=\scalebox{0.82}{\text{$ B^{[d]} \circ \dfrac{\sum_{t=1}^T A_t^\intercal Y_{\bullet, t} \cdot ({C^{[d]}}^\intercal)_{t,\bullet} + \tau P(B^{[d]}) \circ Z(B^{[d]})}{\sum_{t=1}^T A_t^\intercal A_t (B^{[d]} C^{[d]})_{\bullet, t} \cdot ({C^{[d]}}^\intercal)_{t,\bullet} + \mu_{B} B^{[d]} + \lambda_{B}{\bm{\mathit{1}}}_{N\times K} + \tau B^{[d]} \circ P(B^{[d]})}$ }} \\
C^{[d+1]}_{\bullet,t} &= C^{[d]}_{\bullet,t} \circ \dfrac{{B^{[d+1]}}^\intercal A_t^\intercal Y_{\bullet,t}}{{B^{[d+1]}}^\intercal A_t^\intercal A_t (B^{[d+1]}C^{[d]})_{\bullet,t} + \mu_{C} C^{[d]}_{\bullet,t} + \lambda_{C} {\bm{\mathit{1}}}_{K\times 1} }
\end{aligned}$$ lead to a monotonic decrease of the cost function in \[eq:NMF model BC\].
We recall that the derivation leading to the update rules in the theorems above is described in Appendix \[app:sec:Derivation of the Algorithms\]. Due to the multiplicative structure of the algorithms, zero entries in the matrices stay zero during the iteration scheme and can cause divisions by zero. This issue is handled via the strictly positive initialisation in both theorems. Furthermore, very small or very large values can cause numerical instabilities and lead to undesirable results. As a standard procedure, this problem is handled by suitable projection steps after every iteration [@cichocki09bookNMF].
Complexity Reduction for Stationary Operator {#subsec:Complexity Reduction for Stationary Operator}
--------------------------------------------
Let us now consider the case of a stationary operator, i.e. $\mathcal{A}(\cdot;t)$ in equation does not change with $t$. We then simply write $\mathcal{A}$, or $A$ for the matrix representation in . If, in addition, the number of channels $T$ is large, the application of the forward operator represents a major computational burden per channel. In particular, we make use of the assumption $T\gg K$, i.e. the number of channels is much larger than the number of basis functions in the decomposition. In this case, we can effectively reduce the computational cost by shifting the application of the forward operator to the spatial basis functions contained in $B$. That means we make essential use of the decomposition $X\approx BC$ in the reconstruction task and thus avoid constructing the approximation to $X$. Consequently, we will only consider the case of \[eq:NMF model BC\] here. Since $A$ is independent of $t,$ the NMF model \[eq:NMF model BC\] becomes
$$\tag*{\texttt{sBC}} \label{eq:NMF model sBC}
\begin{aligned}
&\min_{B, C \geq 0} \Big\{ \frac{1}{2} \Vert ABC - Y \Vert_F^2 + \lambda_{C} \Vert C \Vert_1 + \frac{\mu_{C}}{2} \Vert C\Vert_F^2 + \lambda_{B} \Vert B \Vert_1\\
&\hspace*{9ex} + \frac{\mu_{B}}{2} \Vert B\Vert_F^2 + \dfrac{\tau}{2} \operatorname{TV}(B) \Big\}.
\end{aligned}$$
To illustrate this, let us consider the update equation in Theorem \[theorem:Algorithm BC\] for $B,$ where we can simplify the first term in the denominator as follows: $$\scalebox{0.86}{\text{$\sum_{t=1}^T A^\intercal A (B^{[d]} C^{[d]})_{\bullet, t} \cdot ({C^{[d]}}^\intercal)_{t,\bullet} = A^\intercal A \sum_{t=1}^T (B^{[d]} C^{[d]})_{\bullet, t} \cdot ({C^{[d]}}^\intercal)_{t,\bullet} = A^\intercal A B^{[d]} C^{[d]} {C^{[d]}}^\intercal.$ }}$$ The other terms in the update rules can be simplified similarly, such that we obtain the following reduced update equations:
\[cor:Algorithm sBC\] For $A\in {\mathbb{R}}_{\geq 0}^{M\times N}, Y\in {\mathbb{R}}_{\geq 0}^{M\times T}$ and initialisations $ B^{[0]}\in \mathbb{R}_{> 0}^{N\times K},$ $C^{[0]}\in \mathbb{R}_{> 0}^{K\times T}, $ the alternating update rules $$\begin{aligned}
B^{[d+1]} &= B^{[d]} \circ \dfrac{A^\intercal Y {C^{[d]}}^\intercal + \tau P(B^{[d]}) \circ Z(B^{[d]})}{A^\intercal A B^{[d]} C^{[d]} {C^{[d]}}^\intercal + \mu_{B} B^{[d]} + \lambda_{B}{\bm{\mathit{1}}}_{N\times K} + \tau B^{[d]} \circ P(B^{[d]})} \\
C^{[d+1]} &= C^{[d]} \circ \dfrac{{B^{[d+1]}}^\intercal A^\intercal Y}{{B^{[d+1]}}^\intercal A^\intercal A B^{[d+1]}C^{[d]} + \mu_{C} C^{[d]} + \lambda_{C} {\bm{\mathit{1}}}_{K\times T} }.
\end{aligned}$$ lead to a monotonic decrease of the cost function in \[eq:NMF model sBC\].
Finally, the order of application is essential here to obtain the complexity reduction. In particular, we implemented the algorithm such that $A$ acts on the basis functions in $B$; that is, we first compute the product $A^\intercal A B$ and then multiply by $C$. Consequently, we can expect a reduction in computational complexity by a factor of $T/K$ with the \[eq:NMF model sBC\] model, which is especially useful for dimension reduction under fine temporal sampling.
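The reordering rests on nothing more than the associativity of matrix multiplication, as the following small NumPy check with illustrative sizes shows: both evaluation orders give the same result, but the second applies $A$ to only $K$ columns instead of $T$.

```python
import numpy as np

# Stationary-operator complexity reduction: with X ~ BC and T >> K,
# applying A to the K columns of B first is much cheaper than applying
# A to all T columns of BC, while the result is identical.
rng = np.random.default_rng(5)
M, N, T, K = 200, 300, 1000, 5
A = rng.random((M, N))
B = rng.random((N, K))
C = rng.random((K, T))

full = A @ (B @ C)        # applies A to all T columns: O(M N T) flops
reduced = (A @ B) @ C     # applies A to K columns: O(M N K + M K T) flops

assert np.allclose(full, reduced)
```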
Application to Dynamic CT {#sec:Application to Dynamic CT}
=========================
In the following we will apply the presented methods to the use case of dynamic computerised tomography (CT). Here, the quantity of interest is given as the attenuation coefficient $x(s,t)$ at time $t\in [0,T]$ on a bounded domain in two dimensions $s\in\Omega_1\subset{\mathbb{R}}^2$. Following the formulation in , the time-dependent forward operator is given by the Radon transform $$\label{eq:DCT Continuous}
y(\theta, \sigma, t) \coloneqq (\mathcal{R}_{\mathcal{I}(t)} x(s, t))(\theta, \sigma) = \int_{s\cdot \theta = \sigma} x(s, t) {\mathop{} \! \mathrm{d}}s. $$ Here, the measurement $y(\theta,\sigma,t)$ consists of line integrals over the domain $\Omega_1$ for each time point $t\in\mathcal{T}$ and is referred to as the sinogram. It depends on two parameters: the angle $\theta \in S^1$ on the unit circle and a signed distance to the origin $\sigma\in{\mathbb{R}}$. Consequently, the measurements depend on a set of angles $\mathcal{I}(t)$ at each time step, such that $(\theta,\sigma)\in \mathcal{I}(t)$ at time $t$; we will refer to these sets as the sampling patterns. In a slight abuse of notation, we will use $| \mathcal{I}(t) |$ for the number of angles, i.e. directions of the line integrals, at each time point.
In the following we consider two scenarios for the choice of angles in $\mathcal{I}(t)$, which define the nature of the forward operator as discussed in Section \[subsec:Multi–Channel Inverse Problem\]. In the general case of a nonstationary forward operator, i.e. time-dependent sampling patterns, we assume that the angles change over time but that their number remains constant, $| \mathcal{I}(t) | \equiv c$. Additionally, we consider the case of a stationary operator, which in our setting means that the set of angles does not change over time, i.e. $\mathcal{I}(t) \equiv \mathcal{I}(t=0)$; this leads to a stationary measurement operator of the dynamic process in . We note that even though the measurement process is stationary, the obtained measurement $y(\theta,\sigma,t)$ itself is still time dependent. For the computations, we discretise to obtain a matrix vector representation as in . In the following we will write $R_t$ for the discrete Radon transform with respect to the sampling pattern $\mathcal{I}(t)$ at time point $t$, which gives rise to the discrete reconstruction problem for dynamic CT $$\label{eq:DCT Discrete}
R_t X_{\bullet, t} = Y_{\bullet, t} \quad \text{for} \quad 1\leq t\leq T.$$ We note that, due to the definition of the Radon transform by line integrals, the matrix $R_t\in {\mathbb{R}}_{\geq 0}^{M\times N}$ has only nonnegative entries and hence satisfies the assumptions of Theorems \[theorem:Algorithm BC-X\] and \[theorem:Algorithm BC\]. Here, $N$ denotes the number of pixels in the original image and $M$ is given by the product $M\coloneqq \vert\mathcal{I}(t)\vert\, n_S,$ where $n_S$ is the number of detection points.
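To make the discrete setting concrete, the following toy sketch assembles such a nonnegative matrix $R_t$ for an $n\times n$ image, restricted to the two axis-aligned directions only (a real experiment would use a properly discretised Radon transform); the helper name and setup are ours, purely for illustration.

```python
import numpy as np

def toy_projection_matrix(n, angles):
    """Assemble a nonnegative M x N 'Radon-like' matrix for an n x n image.

    Only the axis-aligned angles 0 (column sums) and 90 (row sums) are
    supported in this toy model; each angle contributes n_S = n detector bins,
    so M = |I(t)| * n_S as in the text.
    """
    rows = []
    for theta in angles:
        for k in range(n):
            mask = np.zeros((n, n))
            if theta == 0:          # integrate along vertical lines
                mask[:, k] = 1.0
            elif theta == 90:       # integrate along horizontal lines
                mask[k, :] = 1.0
            else:
                raise ValueError("toy model: only 0 or 90 degrees")
            rows.append(mask.ravel())
    return np.array(rows)           # shape (len(angles) * n, n * n), entries >= 0

# stationary sampling pattern I(t) = {0, 90} for all t
n, T = 4, 3
R = toy_projection_matrix(n, [0, 90])
X = np.arange(n * n * T, dtype=float).reshape(n * n, T)  # columns X_{., t}
Y = R @ X                                                # sinogram columns Y_{., t}
```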
Results and Discussion {#subsec:Results and Discussion}
----------------------
For a quantitative evaluation of the proposed approaches, we consider in the following sections two simulated datasets. Since the ground truth is known in both cases, we are able to measure the performance of each method by computing the mean of the PSNR and the mean of the SSIM index [@BVW12-SSIM] over all time steps for every experiment.
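For reference, the per-frame PSNR and its mean over all time steps can be computed as follows. This is a generic sketch, not the evaluation code used in the paper: the SSIM computation (available e.g. as `structural_similarity` in scikit-image) is omitted for brevity, and the default `data_range` of 1 is our assumption.

```python
import numpy as np

def psnr(x, x_true, data_range=1.0):
    """Peak signal-to-noise ratio in dB for a single frame."""
    mse = np.mean((x - x_true) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def mean_psnr(X, X_true, data_range=1.0):
    """Average PSNR over all T time steps (columns of X)."""
    T = X.shape[1]
    return np.mean([psnr(X[:, t], X_true[:, t], data_range) for t in range(T)])
```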
For each dataset, the parameters of all methods are chosen empirically to provide good reconstructions. For the models of the joint reconstruction and low-rank decomposition approach, we restrict ourselves to the total variation penalty term on $B$ to provide some denoising effect on the spatial features, and to the $\ell^2$ penalty on $C$ for the time features, since we expect and enforce smooth changes in time. We consider the standard case for the TV term with the default pixel neighbourhood and choose the relatively small smoothing parameter $\varepsilon_{\text{TV}} = 10^{-5}$.
Furthermore, for both datasets we measure different angles at each time step based on a tiny golden angle sampling [@wundrak2014small], using consecutive projections with an angular increment of $\varphi = 32.039\dots$ degrees, such that projection angles are not repeated. Nevertheless, we emphasise that the total number of observed angles is kept constant for each time step.
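A sketch of how such sampling patterns can be generated: we assume the tiny golden angle equals $180°/(\tau + 4) \approx 32.04°$ with $\tau$ the golden ratio, which is consistent with the quoted value of $\varphi$ but is our reading of [@wundrak2014small]; the function name is ours.

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2            # golden ratio
TINY_GOLDEN = 180.0 / (PHI + 4)       # approx. 32.039... degrees (our assumption)

def sampling_patterns(T, c, increment=TINY_GOLDEN):
    """Angle sets I_t for T time steps with c consecutive projections each.

    Projection i receives angle i * increment (mod 180 degrees), so the
    angles keep changing over time while |I_t| = c stays constant.
    """
    idx = np.arange(T * c).reshape(T, c)
    return (idx * increment) % 180.0

angles = sampling_patterns(T=100, c=6)
```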
For all considered approaches we use the unfiltered backprojection, given by the adjoint of the Radon transform, applied to the noisy data matrix $Y$ as the initialisation for the reconstruction matrix $X.$ In the case of the NMF approaches, the matrices $B$ and $C$ are initialised via an SVD-based decomposition of $X$ following [@BG08-NMFInit]. After the initialisation and at every iteration of the NMF algorithm, a suitable projection step for small values is performed to prevent numerical instabilities and zero entries during the multiplicative algorithm [@cichocki09bookNMF].
The algorithms were implemented in MATLAB R2019b and run on an Intel Core i7-7700K quad-core CPU at 4.20 GHz with 32 GB of RAM.
A summary and short explanation of all algorithms considered in this experimental section is given in Table \[tab:listOfAlgorithms\].
### Shepp-Logan Phantom {#subsubsec:Shepp--Logan Phantom}
This synthetic dataset consists of a dynamic two-dimensional Shepp-Logan phantom with $T=100$ and spatial size $ 128 \times 128$; see Figure \[fig:dynSheppIllustration\] for the ground truth. Over the whole time span, two of the inner ellipses change their intensities sinusoidally with different frequencies while the rest of the phantom remains constant.
In the following, we perform a variety of experiments for $\vert \mathcal{I}_t\vert \in \{ 2,\dots, 12\}$ with 1% and 3% Gaussian noise. For all cases, we choose $K=5$ for the number of NMF features. In this way, the NMF is also capable of approximating minor characteristics, such as noise or other artefacts of the reconstruction matrix, besides the three main features.
The parameters of all methods were determined empirically and are displayed in Table \[tab:dynShepp:parameterChoice\] in Appendix \[app:sec:parameter Choice\] for both noise levels. The stopping criterion for all methods is met if 1200 iteration steps are reached or if the relative change of all matrices $B, C$ and $X$ falls below $5\cdot 10^{-5}.$
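The stopping rule above is simple to state in code. A minimal sketch, in which the function names and the clamping threshold of the small-value projection mentioned earlier are our own choices:

```python
import numpy as np

MAX_ITER, TOL, EPS = 1200, 5e-5, 1e-12

def rel_change(new, old):
    """Relative change ||new - old||_F / ||old||_F of one iterate."""
    return np.linalg.norm(new - old) / max(np.linalg.norm(old), EPS)

def stop(iterates_new, iterates_old, iteration):
    """True if the iteration budget is spent or all of B, C and X stalled."""
    if iteration >= MAX_ITER:
        return True
    return all(rel_change(n, o) < TOL for n, o in zip(iterates_new, iterates_old))

def project_positive(M, floor=1e-8):
    """Clamp small values to keep the multiplicative updates well defined."""
    return np.maximum(M, floor)
```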
We first show some results for the case with $\vert \mathcal{I}_t \vert =6$ and 1% Gaussian noise, in Figure \[fig:dynShepp:joint:noise1:nTheta6\] for the joint NMF methods and in Figure \[fig:dynShepp:sep:noise1:nTheta6\] for the separate reconstruction and extraction. The order of the shown features is based on the singular values of $B$ for `gradTV_PCA` and on the $\ell_2$-norm of the spatial features for the NMF approaches.
In this case, all considered approaches successfully identify the constant and dynamic parts of the dataset and extract meaningful spatial and temporal features. The extracted spatial features of \[eq:NMF model BC\], \[eq:NMF model BC-X\] and `gradTV_NMF` show very clearly the dynamic and non-dynamic parts of the Shepp-Logan phantom. However, the spatial features of `gradTV_NMF` are slightly more blurred and affected by minor artefacts, especially in both dynamic features. This underlines the positive effect of the separate TV regularisation on the spatial feature matrix $B$ in the joint methods. In contrast, `gradTV_PCA` is able to identify the main components of the dataset correctly, but there is a clear corruption of the dynamic features by other parts of the phantom. Furthermore, all spatial features contain negative parts due to the missing nonnegativity constraint in the `gradTV_PCA` approach, which makes their interpretation more challenging. Hence, the additional nonnegativity constraint of the NMF methods significantly improves the quality and interpretability of the extracted components in comparison with the PCA-based extraction method.
The temporal features of all methods are clearly extracted and are consistent with the underlying ground truth of the dataset. However, we note that \[eq:NMF model BC\] and \[eq:NMF model BC-X\] have slight difficulties in resolving the lower intensity part close to 0, which is probably caused by the multiplicative structure of the algorithms.
Similar observations can be made for the case $\vert \mathcal{I}_t \vert =6$ and 3% Gaussian noise. We present the reconstructed features in Figure \[fig:dynShepp:noise3:nTheta6\] for \[eq:NMF model BC\] and `gradTV_PCA` only. The higher amount of noise can be observed especially in the spatial features of `gradTV_PCA`, whereas it only has a slight effect in the \[eq:NMF model BC\] model.
Finally, we present the reconstructed features with \[eq:NMF model BC\] and \[eq:NMF model BC-X\] in Figure \[fig:dynShepp:joint:noise1:nTheta3\] for $\vert \mathcal{I}_t \vert =3$, i.e. only three angles per time step, with a noise level of 1%. The major difference from the previous cases can be seen in the results of the \[eq:NMF model BC\] model. Here, the method splits up the dynamics of the right ellipse into two different temporal features, such that the true dynamics are not retained. However, the \[eq:NMF model BC-X\] approach performs remarkably well with respect to the feature extraction despite the rather low number of projection angles. This might indicate that enforcing a small data error for the reconstruction $X$ helps the \[eq:NMF model BC-X\] model to stabilise the reconstruction in highly sparse data settings.
Let us briefly discuss the other considered values of $\vert \mathcal{I}_t \vert$ that are not shown here. First of all, the performance of `gradTV_PCA` and `gradTV_NMF` with respect to the feature extraction behaves very similarly in both noise cases. Besides the drawbacks mentioned above, both approaches give remarkably consistent results, especially for low numbers of angles, and do not tend as much to split up features as \[eq:NMF model BC\] and \[eq:NMF model BC-X\] do. The latter effect occurs to different degrees for several numbers of angles: for 1% noise, it occurs for $\vert \mathcal{I}_t \vert \in \{ 3, 7, 8, 10\} $ in \[eq:NMF model BC\] and for $\vert \mathcal{I}_t \vert = 10 $ in \[eq:NMF model BC-X\]; for a noise level of 3%, the split-up effect only occurs for $\vert \mathcal{I}_t \vert = 10 $ in \[eq:NMF model BC\]. However, for $\vert \mathcal{I}_t \vert = 10, $ it is possible to partially recover the correct temporal feature by simply adding up both features. Nevertheless, both approaches provide better reconstruction quality of $X$ than `gradTV`, as we will discuss in the following.
#### Quantitative Evaluation
Let us now discuss the quantitative reconstruction quality of all methods. In Figures \[fig:dynShepp:noise1:qualityMeasures\] and \[fig:dynShepp:noise3:qualityMeasures\], we show the mean PSNR and SSIM of the reconstructions for 1% and 3% noise over all time steps for all considered numbers of projection angles. Note that for the NMF model \[eq:NMF model BC-X\], we compute the quality measures for $X.$ The same holds for `gradTV`, where we only compute the quality measures of $X$ after the reconstruction procedure, independently of the subsequent feature extraction method. In the case of \[eq:NMF model BC\], the reconstruction is computed as $X=BC$.
As expected, the reconstruction quality tends to get better if more angles per time step are considered. More importantly, we see that it is possible to obtain reasonable reconstructions with just a few projections per time step especially in the case of the joint reconstruction and feature extraction method via the NMF approach. In particular, we reach a stable reconstruction quality already with 5 or more angles for both joint methods and 1% noise.
![Computation time in seconds for the reconstruction and feature extraction of the dynamic Shepp-Logan phantom with 1% Gaussian noise.[]{data-label="fig:dynShepp:computationTime"}](figures/dynShepp1compTimePlot.pdf){width="50.00000%"}
The \[eq:NMF model BC\] model clearly performs best with respect to the reconstruction quality: for almost every number of angles, its mean PSNR and SSIM values outperform those of the \[eq:NMF model BC-X\] and `gradTV` methods for both noise levels. In the case of 3% noise (see Figure \[fig:dynShepp:noise3:qualityMeasures\]), we can see that `gradTV` performs slightly better than \[eq:NMF model BC-X\] in most of the cases in terms of the SSIM values. Still, the mean PSNR values of `gradTV` are significantly lower than those of \[eq:NMF model BC-X\] for all numbers of angles. A selection of reconstructions for the experiments in Figures \[fig:dynShepp:noise1:qualityMeasures\] and \[fig:dynShepp:noise3:qualityMeasures\] is provided as videos in the Supplementary files. Note that for \[eq:NMF model BC-X\], it is also possible to compute the reconstruction from the decomposition $B\cdot C$ instead of the jointly reconstructed $X$. Interestingly, our experiments showed that the reconstruction quality of $B\cdot C$ is in almost all cases better than that of the matrix $X$ itself and also mostly outperforms the `gradTV` approach. We believe that this is due to the stronger regularising effect on the components $B$ and $C$, which especially influences the SSIM.
The computation times for the reconstruction and feature extraction with 1% noise, for all algorithms until the stopping criterion is fulfilled, are shown in Figure \[fig:dynShepp:computationTime\]. As expected, the computation time tends to increase with the number of projection angles and, over all methods, ranges approximately from 1 to 5 minutes. For $ \vert \mathcal{I}_t \vert\leq 8, $ the \[eq:NMF model BC-X\] method is the fastest, while it is outperformed by `gradTV_PCA` for $ \vert \mathcal{I}_t \vert \geq 9.$ `gradTV_NMF` and \[eq:NMF model BC\] need more time in all experiments compared to `gradTV_PCA`. The significant difference in computation time between \[eq:NMF model BC-X\] and \[eq:NMF model BC\] is due to the higher computational complexity of the latter: owing to the model formulation of \[eq:NMF model BC\] with the discrepancy term $\Vert R_t (BC)_{\bullet, t} - Y_{\bullet, t} \Vert_2^2,$ the update rules in Theorem \[theorem:Algorithm BC\] for both matrices $B$ and $C$ contain the discretised Radon transform $R_t.$ This is in contrast to the \[eq:NMF model BC-X\] algorithm, where $R_t$ only appears in the update rule of $X.$
Based on the presented results for the dynamic Shepp-Logan phantom, we can conclude that the joint approaches \[eq:NMF model BC\] and \[eq:NMF model BC-X\] outperform both other methods with respect to the reconstruction quality and, in most cases, the quality of the extracted features. Nevertheless, the models `gradTV_PCA` and `gradTV_NMF` give remarkably consistent and stable results for the extracted components throughout all numbers of angles. Furthermore, the nonnegativity constraint of the NMF significantly improves the interpretability and quality of the extracted spatial features.
#### Stationary Operator
As we have seen, the computational complexity of the \[eq:NMF model BC\] model with the non-stationary operator is clearly higher than in all other cases. Thus, let us now consider the possibility of speeding up the reconstructions with a stationary operator, which leads to the complexity-reduced formulation presented in Corollary \[cor:Algorithm sBC\] as the \[eq:NMF model sBC\] model. Analogously to the case above, we present experiments with the dynamic Shepp-Logan phantom for $\vert \mathcal{I}_t \vert \in \{ 2,\dots,30 \} $ and 1% Gaussian noise, as we primarily aim to illustrate the reduction of the computational cost. Furthermore, the same hyperparameters and stopping criteria are used as before.
The reconstructed features for the cases $\vert \mathcal{I}_t \vert = 6 $ and $\vert \mathcal{I}_t \vert = 30$ are shown in Figure \[fig:dynShepp:sBC:noise1\]. In particular, comparing the results in Figure \[fig:dynShepp\_sBC\_0.01\_nTheta6\] to the corresponding results of \[eq:NMF model BC\] in Figure \[fig:dynShepp\_BC\_0.01\_nTheta6\], one can immediately see a significant difference between the extracted spatial features. This is clearly due to the fact that the same projection angles are used at every time step and the individual projection directions are clearly visible for the stationary model \[eq:NMF model sBC\]. Consequently, the details in the Shepp-Logan phantom are not well recovered, such that the extracted constant feature is significantly inferior to the one of \[eq:NMF model BC\]. As one would expect, more projection angles per time step are needed to reconstruct finer details. This effect can be clearly seen for 30 angles in Figure \[fig:dynShepp\_sBC\_0.01\_nTheta30\].
However, all temporal basis functions obtained with \[eq:NMF model sBC\] for $\vert \mathcal{I}_t \vert = 6 $ are remarkably well reconstructed despite the low number of projection angles. This is also true for the other considered values of $\vert \mathcal{I}_t \vert.$ Moreover, we observe that \[eq:NMF model sBC\] is able to extract the correct three main features for every $\vert \mathcal{I}_t \vert \in \{ 2,\dots,30 \}.$ Even for $\vert \mathcal{I}_t \vert = 2,$ the quality of the dynamic temporal features is similar to the ones in Figure \[fig:dynShepp:sBC:noise1\].
This behaviour differs from the dynamic case discussed above. The reason probably lies in the different projection directions at every time step in the dynamic case, which result in directional dependencies of the occurring reconstruction artefacts, in contrast to the stationary case. This can make it difficult for the NMF to distinguish the main features in the non-stationary case, and thus explains the more stable feature extraction in the stationary case presented here.
The quantitative measures are shown in Figure \[fig:dynShepp:noise1:Stationary\] for all experiments. Comparing the computation time of \[eq:NMF model BC\] with that of \[eq:NMF model sBC\], we obtain a clear speed-up by a factor of 10–20 with the stationary model. However, as expected, comparing Figure \[fig:dynShepp1StationaryPSNRPlot\] and \[fig:dynShepp1StationarySSIMPlot\] with the quality measures of \[eq:NMF model BC\] in Figure \[fig:dynShepp:noise1:qualityMeasures\], one can observe that significantly more projection angles per time step are needed in the stationary case to provide a sufficient reconstruction quality. In conclusion, the \[eq:NMF model sBC\] model is especially recommended if one is primarily interested in the dynamics of the system under consideration, as we could extract the temporal basis functions stably for all considered numbers of angles with $\vert \mathcal{I}_t \vert \geq 2$.
### Vessel Phantom {#subsubsec:Vessel Phantom}
The second test case is based on a CT scan of a human lung[^5], see Figure \[fig:phantomVesselIllustration\]. Here, the decomposition is given by the constant background and a segmented vessel that exhibits a sudden increase in attenuation followed by an exponential decay. This could, for instance, represent the injection of a tracer into the bloodstream.
![Illustration of the vessel phantom dataset consisting of $ T=100 $ phantoms of dimension $264\times264,$ where the intensity of the blue highlighted area changes according to the blue curve on the left.[]{data-label="fig:phantomVesselIllustration"}](figures/phantomVesselFinal.png){width="100.00000%"}
In contrast to the previous dataset, we perform only selected experiments for specific choices of noise levels and numbers of projection angles. More precisely, we present results for 1% Gaussian noise together with $\vert \mathcal{I}_t \vert \in \{7, 12\}$ and for 3% Gaussian noise with $\vert \mathcal{I}_t \vert =12.$ In all cases, we choose $K=4$ NMF features. Furthermore, the stopping criterion from the experiments with the dynamic Shepp-Logan phantom is changed for this dataset such that the maximum number of iterations is raised to 1400 to ensure sufficient convergence. The regularisation parameters of all methods are chosen empirically and are displayed in Table \[tab:phantomVessel:parameterChoice\] in Appendix \[app:sec:parameter Choice\].
Figure \[fig:phantomVessel:joint:noise1:nTheta12\] and \[fig:phantomVessel:sep:noise1:nTheta12\] show the feature extraction results for the noise level of 1% and $\vert \mathcal{I}_t \vert =12,$ where all approaches are able to extract both the main constant and dynamic component of the underlying ground truth. The order of the features here is based on a manual sorting.
Similar to the results for the Shepp-Logan phantom in Section \[subsubsec:Shepp–Logan Phantom\], the joint methods \[eq:NMF model BC\] and \[eq:NMF model BC-X\] have difficulties in recovering the lower intensities in the temporal features, whereas `gradTV_PCA` produces slight artefacts in the dynamic spatial feature due to the missing nonnegativity constraint. In addition, `gradTV_NMF` is able to recover more details in the vessel compared to the joint approaches. This is due to the relatively large total variation regularisation parameter $\tau$ in \[eq:NMF model BC\] and \[eq:NMF model BC-X\], chosen to ensure a sufficient denoising effect on the matrix $B.$ The low peak in the second temporal feature of `gradTV_NMF` is likely caused by the choice of the $\ell_2$ regularisation parameter $\tilde{\mu}_C.$
Further experiments show that the quality of the extracted components of \[eq:NMF model BC-X\] decreases steadily for fewer angles, until the main features cannot be identified anymore for $\vert \mathcal{I}_t \vert \leq 8.$ \[eq:NMF model BC\] produces inferior results and cannot extract reasonable components anymore for $\vert \mathcal{I}_t \vert \leq 10.$
In comparison, both separated approaches `gradTV_PCA` and `gradTV_NMF` are still able to extract decent features for $\vert \mathcal{I}_t \vert = 7.$ For $\vert \mathcal{I}_t \vert \leq 6,$ the performance of both methods decreases significantly.
Similar results for `gradTV_PCA` could be obtained for 3% noise and $\vert \mathcal{I}_t \vert = 12,$ which are shown in Figure \[fig:phantomVessel\_gradTV\_PCA\_0.03\_nTheta12\]. Its constant feature is inferior to that of \[eq:NMF model BC\] in Figure \[fig:phantomVessel\_BC\_0.03\_nTheta12\] due to the additional nonnegativity constraint of the NMF model. However, the details of the vessel in the dynamic spatial feature of \[eq:NMF model BC\] are lost due to the choice of a large regularisation parameter $\tau,$ and the temporal features are affected by several disturbances. Further tests with the noise level of 3% showed that both joint methods are not able to recover the underlying features for $\vert \mathcal{I}_t \vert \leq 10,$ while the separated approaches still give acceptable results for $\vert \mathcal{I}_t \vert = 6.$
The reconstruction quality of the experiments is shown in Table \[tab:phantomVessel:recQual\]. Similar to the Shepp-Logan phantom, the joint approach \[eq:NMF model BC\] produces the best results of all methods in terms of the mean PSNR and SSIM values. Further experiments confirm this observation for $4 \leq \vert \mathcal{I}_t \vert \leq 11.$
However, these observations have to be treated with caution. \[eq:NMF model BC\] is not able to recover the dynamics for $\vert \mathcal{I}_t \vert \leq 10$ and 1% noise. In the case of \[eq:NMF model BC-X\], the dynamics can be reconstructed to some degree within the angle range $9 \leq \vert \mathcal{I}_t \vert \leq 11,$ but are not recognisable anymore for $\vert \mathcal{I}_t \vert \leq 8.$ In the case of 3% Gaussian noise, `gradTV` is still able to give acceptable reconstruction results for $\vert \mathcal{I}_t \vert = 10.$ For fewer angles, the reconstructed dynamics of `gradTV` become steadily worse until they are not apparent anymore for $\vert \mathcal{I}_t \vert \leq 6 .$
The computation times of the experiments in Table \[tab:phantomVessel:recQual\] range approximately from 7 to 15 minutes. The corresponding reconstructions can be found as video files in the Supplementary information.
Conclusion {#sec:Conclusion and Outlook}
==========
In this work we consider dynamic inverse problems with the assumption that the target of interest has a low-rank structure and can be efficiently represented by spatial and temporal basis functions. This assumption leads naturally to a joint reconstruction and low-rank decomposition framework. In particular, we concentrate here on the NMF as the decomposition because it exhibits three main advantages:
1. It naturally incorporates the physical assumption of nonnegativity
2. Basis functions are not restricted to being strictly orthogonal and therefore correspond more naturally to actual components
3. It allows the flexibility to incorporate separate regularisation on each of the factorisation matrices
In particular, the last point is of importance, as it allows to consider different regularisers for spatial and temporal basis functions, and as such can be tailored to different applications.
We then proposed two approaches to obtain a joint reconstruction and low-rank decomposition based on the NMF, termed \[eq:NMF model BC-X\] and \[eq:NMF model BC\]. Both methods performed better than a baseline method that computes a reconstruction with a low-rank constraint followed by a subsequent decomposition. In particular, the second model, \[eq:NMF model BC\], has been shown to have a stronger regularising effect on the extracted features as well as on the reconstruction, which can simply be obtained as $X=BC.$ We believe this is due to the fact that only the decomposition is recovered during the reconstruction, without the need to build the reconstruction $X$ explicitly, and hence the resulting features exhibit a higher regularity. More importantly, if one considers a stationary operator in the complexity-reduced \[eq:NMF model sBC\] model, a considerable computational speed-up can be obtained. Due to the constant projection angles, the spatial basis functions are not recovered as well as in the non-stationary case, but the temporal features can be extracted nicely even for as few as 2 angles per time step. This might be of particular interest in applications where one is primarily interested in the underlying dynamics of the imaged target. The primary limitation of the presented approach is the assumed decomposition of the target into spatial and temporal basis functions, as this does not allow for spatial movements in the target. However, it opens up the possibility of a combination with other methods that do allow for movements but assume a brightness consistency in the target, such as the optical flow constraint in CT [@burger2017variational]. Furthermore, the presented low-rank decomposition may be combined with a morphological motion model [@Gris:2019aa] to allow for a flexible and general model for dynamic inverse problems.
Acknowledgments {#sec:Acknowledgements .unnumbered}
===============
This project was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the framework of the RTG “$\pi^3$: Parameter Identification – Analysis, Algorithms, Applications” – Projektnummer 281474342/GRK2224/1. This work was partially supported by the Academy of Finland Project 312123 (Finnish Centre of Excellence in Inverse Modelling and Imaging, 2018–2025), the EPSRC grant EDCLIRS (EP/N022750/1) as well as the CMIC-EPSRC platform grant (EP/M020533/1).
Optimisation Techniques for NMF Problems {#app:sec:Optimisation Techniques for NMF Problems}
========================================
The majority of optimisation techniques for NMF problems are based on alternating minimisation schemes. This is due to the fact that the corresponding cost function in is usually convex in $B$ for fixed $C$ and in $C$ for fixed $B$, but non-convex in $(B, C)$ jointly, which yields algorithms of the form $$\begin{aligned}
B^{[d+1]} &\coloneqq \arg\min_{B\geq 0} \mathcal{F}(B, C^{[d]}), \\
C^{[d+1]} &\coloneqq \arg\min_{C\geq 0} \mathcal{F}(B^{[d+1]}, C).
\end{aligned}$$ Typical minimisation approaches are based on alternating least squares methods, multiplicative algorithms, as well as projected gradient descent and quasi-Newton methods [@cichocki09bookNMF]. In this work, we focus on the derivation of multiplicative update rules based on the so-called *Majorise-Minimisation* (MM) principle [@Lange13]. This approach allows the derivation of multiplicative update rules for non-standard NMF cost functions and therefore gives the flexibility to adjust the discrepancy and penalty terms according to the NMF model motivated by the corresponding application [@FM18]. What is more, the update rules consist only of multiplications and summations of matrices, which allows very simple implementations of the algorithms and automatically ensures the nonnegativity of the iterates $ B $ and $ C $ without the need for any inversion, provided they are initialised nonnegatively.
Multiplicative Algorithms {#app:subsec:Multiplicative Algorithms}
-------------------------
The works of Lee and Seung [@LS99; @LS00] brought much attention to NMF methods in general and, in particular, the multiplicative algorithms, which they derived based on the MM principle for the standard case with the Frobenius norm and the Kullback-Leibler divergence as discrepancy terms.
The main idea of the MM approach is to replace the original cost function $ \mathcal{F} $ by a majorising, so-called *surrogate function* $ \mathcal{Q}_\mathcal{F}, $ which is easier to minimise and leads to the desired multiplicative algorithms due to its tailored construction.
\[def:surrogate function\] Let $ \Omega\subset \mathbb{R}^n $ be an open subset and $ \mathcal{F}:\Omega \rightarrow \mathbb{R} $ a function. Then $ \mathcal{Q}_\mathcal{F}:\Omega \times \Omega \rightarrow \mathbb{R} $ is called a **surrogate function** or **surrogate** of $ \mathcal{F}, $ if it fulfills the following properties:
i) $ \mathcal{Q}_\mathcal{F}(x, \tilde{x}) \geq \mathcal{F}(x)$ for all $ x, \tilde{x} \in \Omega $ \[itm:def:surrogate property 1\]
ii) $ \mathcal{Q}_\mathcal{F}(x, x) = \mathcal{F}(x) $ for all $ x\in \Omega $ \[itm:def:surrogate property 2\]
The minimisation step of the MM approach is then defined by the update rule $$\label{eq:Surrogate Update Rule}
x^{[d+1]} \coloneqq \arg \min_{x\in \Omega} \mathcal{Q}_\mathcal{F}(x, x^{[d]}),$$ assuming that the $\arg \min_{x\in \Omega} \mathcal{Q}_\mathcal{F}(x, \tilde{x})$ exists for all $\tilde{x} \in \Omega. $ Due to the defining properties of a surrogate function in Definition \[def:surrogate function\], the monotonic decrease of the cost function $ \mathcal{F} $ is easily shown: $$\label{eq:LQBP Monotone Decrease}
\mathcal{F}(x^{[d+1]}) \leq \mathcal{Q}_\mathcal{F}(x^{[d+1]}, x^{[d]}) \leq \mathcal{Q}_\mathcal{F}(x^{[d]}, x^{[d]}) = \mathcal{F}(x^{[d]}).$$ This principle is also illustrated in Figure \[fig:MMPrinciple\].
![Illustration of two iteration steps of the MM principle for a cost function $\mathcal{F}$ with bounded curvature and a surrogate function $\mathcal{Q}_\mathcal{F},$ which is strictly convex in the first argument.[]{data-label="fig:MMPrinciple"}](figures/mm-principle_new.pdf){width="100.00000%"}
Typical construction techniques lead to surrogate functions which are strictly convex in the first component to ensure the unique existence of the corresponding minimiser. Furthermore, the surrogates must be constructed in such a way that the minimisation in Equation yields multiplicative updates to ensure the nonnegativity of the matrix iterates. Finally, another useful property is the separability of $\mathcal{Q}_\mathcal{F}$ with respect to the first variable. This ensures that $\mathcal{Q}_\mathcal{F}(x,\tilde{x})$ can be written as a sum in which each component depends on just one entry of $ x $, and allows the derivation of the multiplicative algorithm via the zero gradient condition $\nabla_x \mathcal{Q}_\mathcal{F} = 0.$
One typical construction method is the so-called *Quadratic Upper Bound Principle* (QUBP) [@BL88; @Lange13], which forms one of the main approaches to constructing suitable surrogate functions for NMF problems. Overviews of other construction principles, which are not used in this work, can be found in [@Lange13; @LHY00]. The QUBP is described in the following lemma.
\[lem:Quadratic Upper Bound Principle\] Let $ \Omega\subset \mathbb{R}^n $ be an open and convex subset, $ \mathcal{F}:\Omega \rightarrow \mathbb{R} $ twice continuously differentiable with bounded curvature, i.e. there exists a matrix $ \Lambda\in \mathbb{R}^{n\times n}, $ such that $ \Lambda - \nabla^2\mathcal{F}(x) $ is positive semi-definite for all $ x\in \Omega. $ We then have $$\begin{aligned}
\mathcal{F}(x) &\leq \mathcal{F}(\tilde{x}) + \nabla \mathcal{F}(\tilde{x})^\intercal (x-\tilde{x}) + \dfrac{1}{2} (x-\tilde{x})^\intercal \Lambda (x-\tilde{x}) \quad \forall x, \tilde{x} \in \Omega \\
&\eqqcolon \mathcal{Q}_\mathcal{F}(x, \tilde{x}), \nonumber
\end{aligned}$$ where $ \mathcal{Q}_\mathcal{F} $ is a surrogate function of $ \mathcal{F}. $
This is a classical result based on the second-order Taylor polynomial and will not be proven here.
If the matrix $ \Lambda $ is additionally symmetric and positive definite, it can be shown [@FM18] that the update rule for $ x $ according to via the zero gradient condition $ \nabla_{x}\mathcal{Q}_\mathcal{F}(x, \tilde{x})=0 $ gives the unique minimiser $$\label{eq:QUBP Update Rule}
x_{\tilde{x}}^* = \tilde{x} - \Lambda^{-1} \nabla \mathcal{F}(\tilde{x}).$$ In this work, we will only apply the QUBP to quadratic cost functions $\mathcal{F},$ whose Hessian is automatically a constant matrix. For these functions, typical choices of $ \Lambda $ are diagonal matrices of the form $$\label{eq:Lambda Matrix}
\Lambda (\tilde{x})_{i i} \coloneqq \dfrac{(\nabla^2 f (\tilde{x})\ \tilde{x})_i + \kappa_i}{\tilde{x}_i},$$ which depend on the second argument of the corresponding surrogate $\mathcal{Q}_\mathcal{F}(x, \tilde{x}).$ The parameters $ \kappa_i\geq 0 $ are constants that have to be chosen depending on the considered penalty terms of the NMF cost function.
The diagonal structure of $ \Lambda(\tilde{x}) $ ensures its simple invertibility, the separability of the corresponding surrogate and the desired multiplicative algorithms based on . Hence, the update rule in can be viewed as a gradient descent approach with a suitable stepsize defined by the diagonal matrix $ \Lambda. $
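To make the mechanics concrete, here is a small self-contained sketch (our own toy example with made-up data, not taken from this work) of the QUBP update for a nonnegative least-squares cost with an $\ell^1$ penalty and $\kappa_i = \lambda$. With the diagonal $\Lambda$ above, the additive update $x \leftarrow \tilde{x} - \Lambda^{-1}\nabla\mathcal{F}(\tilde{x})$ becomes the multiplicative rule $x \leftarrow x \circ (A^\intercal y)/(A^\intercal A x + \lambda)$, so positivity of the iterates and the monotone decrease of the cost can be checked numerically.

```python
import numpy as np

# Toy QUBP example (our own, hypothetical data): minimise
#   F(x) = 1/2 ||A x - y||_2^2 + lam * ||x||_1   subject to x >= 0.
# The diagonal Lambda from the text yields the multiplicative update
#   x <- x * (A^T y) / (A^T A x + lam).
rng = np.random.default_rng(0)
A = rng.random((20, 5))
y = rng.random(20)
lam = 0.1

def cost(x):
    return 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(x)

x = np.full(5, 0.5)              # strictly positive initialisation
costs = [cost(x)]
for _ in range(200):
    x = x * (A.T @ y) / (A.T @ (A @ x) + lam)
    costs.append(cost(x))
```

Because the numerator and denominator are entrywise positive, the iterates never leave the positive orthant, and the surrogate construction guarantees that `costs` is non-increasing.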
Derivation of the Algorithms {#app:sec:Derivation of the Algorithms}
============================
In this section, we derive the multiplicative update rules for the NMF minimisation problems in \[eq:NMF model BC-X\] and \[eq:NMF model BC\].
Model \[eq:NMF model BC-X\] {#app:subsec:Model BC-X}
---------------------------
### Algorithm for X {#app:subsubsec:Algorithm for X (BC-X)}
We start with the NMF model \[eq:NMF model BC-X\] and the minimisation with respect to $X.$ The cost function of the NMF problem in \[eq:NMF model BC-X\] for the minimisation with respect to $ X $ reduces to $$\label{eq:Cost Function F wrt X}
\mathcal{F}(X) \coloneqq \underbrace{\sum_{t=1}^{T} \frac{1}{2} \Vert A_t X_{\bullet, t} - Y_{\bullet, t} \Vert_2^2 + \frac{\mu_{X}}{2} \Vert X\Vert_F^2 + \lambda_{X} \Vert X \Vert_1}_{\eqqcolon \mathcal{F}_1(X)} + \underbrace{\frac{\alpha}{2} \Vert X - BC \Vert_F^2}_{\eqqcolon \mathcal{F}_2(X)}$$ by neglecting the constant terms. To apply the QUBP and to avoid fourth-order tensors during the computation of the Hessians, we use the separability of $ \mathcal{F}_1 $ with respect to the columns of $ X, $ i.e., it can be written as a sum in which each term depends only on the respective column $X_{\bullet, t}.$ Hence, we write $$\mathcal{F}_1(X) = \sum_{t=1}^{T} \left [ \frac{1}{2} \Vert A_t X_{\bullet, t} - Y_{\bullet, t} \Vert_2^2 + \frac{\mu_{X}}{2} \Vert X_{\bullet, t}\Vert_2^2 + \lambda_{X} \Vert X_{\bullet, t} \Vert_1 \right ] \eqqcolon \sum_{t=1}^{T} f_t(X_{\bullet, t}).$$ We can assume that $ X $ contains only strictly positive entries due to the strictly positive initialisations of the multiplicative algorithms. Hence, the functions $ f_t $ are twice continuously differentiable despite the occurring $ \ell^1 $ regularisation term. The computations of the gradient and the Hessian of $ f_t $ are straightforward and we obtain $$\begin{aligned}
\nabla f_t(X_{\bullet, t}) &= A_t^\intercal A_t X_{\bullet, t} - A_t^\intercal Y_{\bullet, t} + \mu_{X}X_{\bullet, t} + \lambda_{X} {\bm{\mathit{1}}}_{N\times 1},\\
\nabla^2 f_t(X_{\bullet, t})&= A_t^\intercal A_t + \mu_{X} I_{N\times N},
\end{aligned}$$ where $ I_{N\times N} $ is the $ N\times N $ identity matrix. Choosing $ \kappa_n=\lambda_{X} $ for all $ n $ in , we define the surrogate $ \mathcal{Q}_{f_t} $ according to Lemma \[lem:Quadratic Upper Bound Principle\]. It is then easy to see that $$\mathcal{Q}_{\mathcal{F}_1}(X, \tilde{X}) \coloneqq \sum_{t=1}^T \mathcal{Q}_{f_t} (X_{\bullet, t}, \tilde{X}_{\bullet, t})$$ defines a separable and convex surrogate function for $ \mathcal{F}_1. $ For $ \mathcal{F}_2, $ we simply set $ \mathcal{Q}_{\mathcal{F}_2}(X,\tilde{X}) \coloneqq \nicefrac{\alpha}{2} \Vert X-BC\Vert_F^2, $ such that we end up with $$\mathcal{Q}_\mathcal{F}(X, \tilde{X}) \coloneqq \mathcal{Q}_{\mathcal{F}_1}(X,\tilde{X}) + \mathcal{Q}_{\mathcal{F}_2}(X,\tilde{X})$$ as a suitable surrogate for $ \mathcal{F}. $ Based on the update rule in , we consider the zero gradient condition $ \nabla_{X}\mathcal{Q}_\mathcal{F}(X, \tilde{X})=0 $ and compute $$\begin{aligned}
\dfrac{\partial \mathcal{Q}_F}{\partial X_{nt}}(X,\tilde{X}) &=\dfrac{\partial f_t}{\partial X_{nt}} (\tilde{X}_{\bullet,t}) + \left ( \Lambda(\tilde{X}_{\bullet,t}) (X_{\bullet, t} - \tilde{X}_{\bullet, t}) \right )_n + \dfrac{\alpha}{2} \dfrac{\partial}{\partial X_{nt}} \Vert X-BC \Vert_F^2 \\
&=\left ( A_t^\intercal A_t \tilde{X}_{\bullet, t} \right)_n - \left( A_t^\intercal Y_{\bullet, t} \right)_n + \mu_{X} \tilde{X}_{nt} + \lambda_{X} \\
&+{\scalebox{0.93}{\text{$\displaystyle \dfrac{\left ( (A_t^\intercal A_t + \mu_{X} I_{N\times N})\tilde{X}_{\bullet,t} \right )_n + \lambda_{X}}{\tilde{X}_{nt}}(X_{nt} - \tilde{X}_{nt} ) + \alpha (X_{nt} - (BC)_{nt})$}}} \\
&={\scalebox{0.93}{\text{$\displaystyle - \left( A_t^\intercal Y_{\bullet, t} \right)_n + X_{nt} \dfrac{\left ( A_t^\intercal A_t \tilde{X}_{\bullet, t} \right)_n + \mu_{X} \tilde{X}_{nt} + \lambda_{X}}{\tilde{X}_{nt}} + \alpha (X_{nt} - (BC)_{nt})$}}} \\
&=0.
\end{aligned}$$ Rearranging the equation leads to $$X_{nt} = \dfrac{\left( A_t^\intercal Y_{\bullet, t} \right)_n + \alpha (BC)_{nt}}{ \dfrac{\left ( A_t^\intercal A_t \tilde{X}_{\bullet, t} \right)_n + \mu_{X}\tilde{X}_{nt} + \lambda_{X}}{\tilde{X}_{nt}} + \alpha}.$$ We therefore have $$X_{\bullet, t} = \tilde{X}_{\bullet, t} \circ \dfrac{A_t^\intercal Y_{\bullet, t} + \alpha BC_{\bullet, t}}{A_t^\intercal A_t \tilde{X}_{\bullet, t} + (\mu_{X} + \alpha)\tilde{X}_{\bullet, t} + \lambda_{X} {\bm{\mathit{1}}}_{N\times 1}},$$ which yields the multiplicative update rule $$X_{\bullet, t} \leftarrow X_{\bullet, t} \circ \dfrac{A_t^\intercal Y_{\bullet, t} + \alpha BC_{\bullet, t}}{A_t^\intercal A_t X_{\bullet, t} + (\mu_{X} + \alpha)X_{\bullet, t} + \lambda_{X} {\bm{\mathit{1}}}_{N\times 1}}$$ based on . Note that the correct choice of the matrix $ \Lambda $ together with the $ \kappa_i $ is crucial to ensure the multiplicative structure of the algorithm.
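Assuming NumPy and randomly generated stand-in data (all names, dimensions and parameter values below are ours), the derived column-wise multiplicative $X$-update can be sketched as follows; by the MM construction, the cost should decrease monotonically while the iterates stay positive.

```python
import numpy as np

# Sketch (our toy data) of the multiplicative X-update:
#   X_{:,t} <- X_{:,t} * (A_t^T Y_{:,t} + alpha*(BC)_{:,t})
#              / (A_t^T A_t X_{:,t} + (mu_X + alpha)*X_{:,t} + lam_X)
rng = np.random.default_rng(1)
M, N, K, T = 15, 8, 3, 6
A = [rng.random((M, N)) for _ in range(T)]        # per-time operators A_t
Y = rng.random((M, T))
B, C = rng.random((N, K)), rng.random((K, T))
X = rng.random((N, T)) + 0.1                      # strictly positive init
mu_X, lam_X, alpha = 0.05, 0.01, 0.5
BC = B @ C

def cost(X):
    fit = sum(0.5 * np.sum((A[t] @ X[:, t] - Y[:, t]) ** 2) for t in range(T))
    return (fit + 0.5 * mu_X * np.sum(X ** 2) + lam_X * np.sum(np.abs(X))
            + 0.5 * alpha * np.sum((X - BC) ** 2))

c0 = cost(X)
for _ in range(50):
    for t in range(T):
        num = A[t].T @ Y[:, t] + alpha * BC[:, t]
        den = A[t].T @ (A[t] @ X[:, t]) + (mu_X + alpha) * X[:, t] + lam_X
        X[:, t] *= num / den
c1 = cost(X)
```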
### Algorithm for B {#app:subsubsec:Algorithm for B (BC-X)}
The minimisation with respect to $ B $ reduces the cost function in \[eq:NMF model BC-X\] to $$\label{eq:Cost Function F wrt B}
\mathcal{F}(B) \coloneqq \underbrace{\frac{\alpha}{2} \Vert BC - X \Vert_F^2 + \frac{\mu_{B}}{2} \Vert B\Vert_F^2 + \lambda_{B} \Vert B \Vert_1}_{\eqqcolon \mathcal{F}_1(B)} + \underbrace{\dfrac{\tau}{2} \operatorname{TV}(B)}_{\eqqcolon \mathcal{F}_2(B)}$$ and involves the TV regularisation on $ B $ of the NMF model. Analogously to the previous section, we use the separability of $ \mathcal{F}_1 $ and write $$\begin{aligned}
{\scalebox{0.95}{\text{$\displaystyle \mathcal{F}_1(B) = \sum_{n=1}^{N} \Big[ \dfrac{\alpha}{2} \Vert X_{n,\bullet} - B_{n,\bullet} C \Vert_F^2 + \dfrac{\mu_{B}}{2} \Vert B_{n,\bullet} \Vert_2^2 + \lambda_{B} \Vert B_{n,\bullet}\Vert_1 \Big] \eqqcolon \sum_{n=1}^{N} f_n(B_{n,\bullet}).$}}}
\end{aligned}$$ By computing the gradients $$\begin{aligned}
\nabla f_n(B_{n,\bullet}) &= \alpha (B_{n,\bullet} C - X_{n,\bullet}) C^\intercal + \mu_{B} B_{n,\bullet} + \lambda_{B} {\bm{\mathit{1}}}_{1\times K} \\
\nabla^2 f_n(B_{n,\bullet}) &= \alpha CC^\intercal + \mu_{B} I_{K\times K}
\end{aligned}$$ and choosing $ \kappa_k = \lambda_{B} $ in , we analogously define the surrogates $ \mathcal{Q}_{f_n}, $ which leads to the convex surrogate $$\mathcal{Q}_{\mathcal{F}_1} (B, \tilde{B}) \coloneqq \sum_{n=1}^{N} \mathcal{Q}_{f_n}(B_{n,\bullet}, \tilde{B}_{n,\bullet})$$ for $ \mathcal{F}_1. $ The derivation of a suitable surrogate for the TV regularisation term $\mathcal{F}_2$ is based on an approach different from the QUBP and will not be discussed in detail here. We just state the result and refer the reader to [@OBF09; @defrise11TV; @FM18] for details. A convex and separable surrogate function for $ \mathcal{F}_2 $ is given by $$\label{eq:surrogate:TV}
\mathcal{Q}_{\mathcal{F}_2}(B, \tilde{B}) = \dfrac{\tau}{2} \sum_{k=1}^K \sum_{n=1}^N \left [ P(\tilde{B})_{nk} (B_{nk} - Z(\tilde{B})_{nk})^2 \right ] + \mathcal{G}(\tilde{B}),$$ with the matrices $ P(\tilde{B}), Z(\tilde{B})\in \mathbb{R}_{\geq 0}^{N\times K} $ defined in and and a function $ \mathcal{G} $ depending only on the matrix $ \tilde{B}.$ Hence, we finally end up with $ \mathcal{Q}_\mathcal{F}(B, \tilde{B}) \coloneqq \mathcal{Q}_{\mathcal{F}_1}(B, \tilde{B}) + \mathcal{Q}_{\mathcal{F}_2}(B, \tilde{B}) $ as a suitable surrogate for $\mathcal{F}$.
Similar to the computations in the previous paragraph, the zero gradient condition yields then $${\scalebox{0.83}{\text{$\displaystyle \dfrac{\partial \mathcal{Q}_\mathcal{F}}{\partial B_{nk}}(B, \tilde{B}) = - \alpha (XC^\intercal)_{nk} + B_{nk} \dfrac{\alpha (\tilde{B}CC^\intercal)_{nk} \!+\! \mu_{{\bm{\mathit{B}}}}\tilde{B}_{nk} \!+\! \lambda_{{\bm{\mathit{B}}}}}{\tilde{B}_{nk}} + \tau P(\tilde{B})_{nk} (B_{nk} \!-\! Z(\tilde{B})_{nk}) = 0$}}}
$$ and therefore $$B_{nk} = \tilde{B}_{nk} \cdot \dfrac{\alpha (XC^\intercal)_{nk} + \tau P(\tilde{B})_{nk}Z(\tilde{B})_{nk} }{\alpha (\tilde{B}CC^\intercal)_{nk} + \mu_{B} \tilde{B}_{nk} + \lambda_{B} + \tau P(\tilde{B})_{nk}\tilde{B}_{nk} }.$$ Hence, we have the update rule $$B \leftarrow B \circ \dfrac{\alpha XC^\intercal + \tau P(B) \circ Z(B) }{\alpha BCC^\intercal + \mu_{B} B + \lambda_{B} {\bm{\mathit{1}}}_{N\times K} + \tau P(B) \circ B }.$$
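The $B$-update can be sketched similarly. Since the matrices $P(B)$ and $Z(B)$ of the TV surrogate are defined outside this excerpt, the sketch below switches the TV term off ($\tau = 0$), in which case the rule reduces to a standard regularised multiplicative update (data and names are ours).

```python
import numpy as np

# Sketch of the B-update with tau = 0 (TV term off, since P(B), Z(B) are
# defined elsewhere):  B <- B * (alpha X C^T) / (alpha B C C^T + mu_B B + lam_B)
rng = np.random.default_rng(2)
N, K, T = 8, 3, 6
X = rng.random((N, T))
B = rng.random((N, K)) + 0.1                     # strictly positive init
C = rng.random((K, T))
alpha, mu_B, lam_B = 0.5, 0.05, 0.01

def cost(B):
    return (0.5 * alpha * np.sum((X - B @ C) ** 2)
            + 0.5 * mu_B * np.sum(B ** 2) + lam_B * np.sum(np.abs(B)))

c0 = cost(B)
for _ in range(100):
    B *= (alpha * X @ C.T) / (alpha * B @ C @ C.T + mu_B * B + lam_B)
c1 = cost(B)
```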
### Algorithm for C {#app:subsubsec:Algorithm for C (BC-X)}
The optimisation with respect to the matrix $ C $ can be tackled analogously with the QUBP and will not be described in detail. In this case, the cost function can be reduced to well-known regularised NMF problems [@demol12], which leads to the update rule $$\begin{aligned}
C \leftarrow C \circ \dfrac{\alpha B^\intercal X}{\alpha B^\intercal BC + \mu_{C} C + \lambda_{C} {\bm{\mathit{1}}}_{K\times T}}.
\end{aligned}$$
Model \[eq:NMF model BC\] {#app:subsec:Model BC}
-------------------------
In this section, we discuss the computation of the optimisation algorithms for the NMF model \[eq:NMF model BC\].
### Algorithm for B {#app:subsubsec:Algorithm for B (BC)}
In this case, the cost function reduces to $$\begin{aligned}
\mathcal{F}(B) \coloneqq \underbrace{\sum_{t=1}^{T} \dfrac{1}{2} \Vert A_t (BC)_{\bullet,t} - Y_{\bullet, t} \Vert_2^2 + \dfrac{\mu_{B}}{2} \Vert B \Vert_F^2 + \lambda_{B} \Vert B\Vert_1}_{\eqqcolon \mathcal{F}_1(B)} + \underbrace{\dfrac{\tau}{2} \operatorname{TV}(B)}_{\eqqcolon \mathcal{F}_2(B)}.
\end{aligned}$$ Analogously to the previous cases, we analyse the functions $ \mathcal{F}_1 $ and $ \mathcal{F}_2 $ separately. The difference here is that $ \mathcal{F}_1 $ is not separable with respect to the rows of $ B $ due to the discrepancy term, and it is therefore necessary to compute the gradient and the Hessian of the whole function $ \mathcal{F}_1. $ Hence, the gradient $ \nabla \mathcal{F}_1(B)$ is an $ N\times K $ matrix and the Hessian $ \nabla^2 \mathcal{F}_1(B) $ a fourth-order tensor, which are given entrywise by $$\begin{aligned}
\nabla \mathcal{F}_1(B)_{n k} &= {\scalebox{0.94}{\text{$\displaystyle \sum_{t=1}^T C_{k t} \left( A_t^\intercal A_t (B C)_{\bullet, t} \right)_{n} - \sum_{t=1}^T C_{k t} \left(A_t^\intercal Y_{\bullet, t}\right)_{n} + \mu_{B}B_{n k} + \lambda_{B}$}}},\\
\nabla^2 \mathcal{F}_1(B)_{(n, k), (\tilde{n}, \tilde{k})} &= \sum_{t=1}^T C_{\tilde{k} t} C_{k t} (A_t^\intercal A_t)_{n \tilde{n}} + \mu_{B} \delta_{(n, k), (\tilde{n}, \tilde{k})},
\end{aligned}$$ where $ \delta_{(n, k), (\tilde{n}, \tilde{k})}=1 $ if and only if $ (n, k) = (\tilde{n}, \tilde{k}). $ The natural expansion of the quadratic upper bound principle given in Lemma \[lem:Quadratic Upper Bound Principle\] is the ansatz function $$\begin{aligned}
\mathcal{Q}_{\mathcal{F}_1}(B, \tilde{B}) &\coloneqq \mathcal{F}_1(\tilde{B}) + \langle B-\tilde{B}, \nabla \mathcal{F}_1(\tilde{B})\rangle_F \\
&+ \dfrac{1}{2} \sum_{(n, k)} \sum_{(\tilde{n}, \tilde{k})} (B-\tilde{B})_{n k} \Lambda(\tilde{B})_{(n, k), (\tilde{n}, \tilde{k})} (B-\tilde{B})_{\tilde{n} \tilde{k}}
\end{aligned}$$ with the fourth-order tensor $$\Lambda(\tilde{B})_{(n, k), (\tilde{n}, \tilde{k})} \coloneqq
\begin{cases}
\dfrac{\sum_{(i, j)} \nabla^2 \mathcal{F}_1(\tilde{B})_{(n, k), (i, j)} \tilde{B}_{ij} + \lambda_{B}}{\tilde{B}_{n k}} &\text{for} \quad (n, k) = (\tilde{n}, \tilde{k}), \\
0 &\text{for} \quad (n, k) \neq (\tilde{n}, \tilde{k}),
\end{cases}$$ where $ \langle \cdot, \cdot \rangle_F $ denotes the Frobenius inner product.
Taking the same surrogate $ \mathcal{Q}_{\mathcal{F}_2} $ for the TV penalty term as in , we end up with the surrogate function $$\mathcal{Q}_\mathcal{F}(B, \tilde{B}) \coloneqq \mathcal{Q}_{\mathcal{F}_1}(B, \tilde{B}) + \mathcal{Q}_{\mathcal{F}_2}(B, \tilde{B})$$ for $ \mathcal{F}. $ Its partial derivative with respect to $ B_{nk} $ is given by $$\begin{aligned}
\dfrac{\partial \mathcal{Q}_\mathcal{F}}{\partial B_{nk}}(B, \tilde{B}) &= {\scalebox{0.94}{\text{$\displaystyle - \sum_{t=1}^T C_{k t} \left(A_t^\intercal Y_{\bullet, t}\right)_{n} + B_{nk} \frac{\sum_{t=1}^T C_{k t} \left( A_t^\intercal A_t (\tilde{B} C)_{\bullet, t} \right)_{n} + \mu_B \tilde{B}_{nk} + \lambda_B}{\tilde{B}_{nk}}$}}} \\
&+ \tau P(\tilde{B})_{nk} (B_{nk} - Z(\tilde{B})_{nk}).
\end{aligned}$$ The zero-gradient condition then gives the equation $$\begin{aligned}
B_{nk} = \tilde{B}_{nk} \Bigg( \dfrac{\sum_{t=1}^T C_{k t} \left(A_t^\intercal Y_{\bullet, t}\right)_{n} + \tau P(\tilde{B})_{nk} Z(\tilde{B})_{nk}}{\sum_{t=1}^T C_{k t} \left( A_t^\intercal A_t (\tilde{B} C)_{\bullet, t} \right)_{n} + \mu_{B} \tilde{B}_{nk} + \lambda_{B} + \tilde{B}_{nk} \tau P(\tilde{B})_{nk}} \Bigg),
\end{aligned}$$ which can be extended to the whole matrix $ B. $ Therefore, based on , we have the update rule $$B \leftarrow B \circ \Bigg( \dfrac{\sum_{t=1}^T A_t^\intercal Y_{\bullet, t} (C^\intercal)_{t,\bullet} + \tau P(B) \circ Z(B)}{\sum_{t=1}^T A_t^\intercal A_t (B C)_{\bullet, t} \cdot (C^\intercal)_{t,\bullet} + \mu_{B} B + \lambda_{B}{\bm{\mathit{1}}}_{N\times K} + \tau B \circ P(B)} \Bigg).$$
### Algorithm for C {#app:subsubsec:Algorithm for C (BC)}
In this case, the cost function is separable with respect to the columns of $C,$ such that $$\begin{aligned}
\mathcal{F}(C) &\coloneqq \sum_{t=1}^{T} \dfrac{1}{2} \Vert A_t BC_{\bullet,t} - Y_{\bullet, t} \Vert_2^2 + \dfrac{\mu_{C}}{2} \Vert C_{\bullet,t} \Vert_2^2 + \lambda_{C} \Vert C_{\bullet,t} \Vert_1 \eqqcolon \sum_{t=1}^T f_t(C_{\bullet,t}).
\end{aligned}$$ Hence, we can split the minimisation across the columns of $ C $ and use the standard QUBP without considering higher-order tensors. We compute $$\begin{aligned}
\nabla f_t (C_{\bullet,t}) &= B^\intercal A_t^\intercal A_t (B C)_{\bullet,t} - B^\intercal A_t^\intercal Y_{\bullet,t} + \mu_{C} C_{\bullet,t} + \lambda_{C} {\bm{\mathit{1}}}_{K\times 1},\\
\nabla^2 f_t (C_{\bullet,t}) &= B^\intercal A_t^\intercal A_t B + \mu_{C} I_{K\times K}.
\end{aligned}$$ By choosing $ \kappa_k=\lambda_{C} $ for all $ k $ in , we define $\mathcal{Q}_{f_t} (C_{\bullet,t}, \tilde{C}_{\bullet,t})$ as a surrogate function for $f_t$ according to Lemma \[lem:Quadratic Upper Bound Principle\]. The update rule in then gives $$C_{\bullet,t} = \tilde{C}_{\bullet,t} - \Lambda^{-1}(\tilde{C}_{\bullet,t}) \nabla f_t (\tilde{C}_{\bullet,t}),$$ which leads to $$C_{\bullet,t} \leftarrow C_{\bullet,t} \circ \dfrac{B^\intercal A_t^\intercal Y_{\bullet,t}}{B^\intercal A_t^\intercal A_t (BC)_{\bullet,t} + \mu_{C} C_{\bullet,t} + \lambda_{C} {\bm{\mathit{1}}}_{K\times 1} }.$$
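A sketch of the column-wise $C$-update for model \[eq:NMF model BC\], again on made-up data with our own variable names and parameter values:

```python
import numpy as np

# Sketch of the C-update for model (BC):
#   C_{:,t} <- C_{:,t} * (B^T A_t^T Y_{:,t})
#              / (B^T A_t^T A_t B C_{:,t} + mu_C C_{:,t} + lam_C)
rng = np.random.default_rng(3)
M, N, K, T = 15, 8, 3, 6
A = [rng.random((M, N)) for _ in range(T)]
Y = rng.random((M, T))
B = rng.random((N, K))
C = rng.random((K, T)) + 0.1                     # strictly positive init
mu_C, lam_C = 0.05, 0.01

def cost(C):
    fit = sum(0.5 * np.sum((A[t] @ B @ C[:, t] - Y[:, t]) ** 2) for t in range(T))
    return fit + 0.5 * mu_C * np.sum(C ** 2) + lam_C * np.sum(np.abs(C))

c0 = cost(C)
for _ in range(100):
    for t in range(T):
        AtB = A[t] @ B
        num = AtB.T @ Y[:, t]
        den = AtB.T @ (AtB @ C[:, t]) + mu_C * C[:, t] + lam_C
        C[:, t] *= num / den
c1 = cost(C)
```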
Parameter Choice {#app:sec:parameter Choice}
================
[^1]: Department of Computer Science, University College London, London, United Kingdom (<S.Arridge@cs.ucl.ac.uk>).
[^2]: Center for Industrial Mathematics, University of Bremen, Bremen, Germany (<pfernsel@math.uni-bremen.de>). To whom correspondence should be addressed.
[^3]: Department of Mathematical Sciences, University of Oulu, Oulu, Finland; Department of Computer Science, University College London, London, United Kingdom (<andreas.hauptmann@oulu.fi>).
[^4]: <https://www.mathworks.com/matlabcentral/fileexchange/36278-split-bregman-method-for-total-variation-denoising>
[^5]: The phantom is based on the CT scans in the *ELCAP Public Lung Image database*: <http://www.via.cornell.edu/lungdb.html>
|
---
abstract: 'One of the recent no-go theorems on [$\Psi$-epistemic ]{}interpretations of quantum theory proves that there are no ‘maximally epistemic’ interpretations of quantum theory. The proof utilises arrangements similar to Clifton’s quantum contextuality proof and has parallels to Harrigan and Rudolph’s quantum deficiency no-go theorem, itself based on the Kochen-Specker quantum contextuality proof. This paper shows how the Kochen-Specker theorem can also be turned into a no ‘maximally epistemic’ theorem, but of a more limited kind.'
author:
- 'O. J. E. Maroney'
title: 'A brief note on epistemic interpretations and the Kochen-Specker theorem.'
---
In [@Maroney2012b], a no-go theorem is proved regarding [$\Psi$-epistemic ]{}interpretations of quantum theory[@Spekkens2007; @HS2007; @LJBR2012]. The theorem states that there are no ‘maximally epistemic’ interpretations of quantum theory, and that in the limit of large Hilbert spaces no more than half of the overlap between quantum states can be accounted for by epistemic uncertainty. The proof is shown to be robust against experimental noise. The simplest form of the proof makes use of the same experimental arrangement as Clifton’s proof[@Clifton1993] of quantum contextuality.
A very simple proof that there are no ‘maximally epistemic’ interpretations of quantum theory can also be obtained from the Kochen-Specker theorem[@KS1967], in a manner that parallels Harrigan and Rudolph’s ‘quantum deficiency’ theorem[@HR2007]. This proof is presented here. However, unlike the theorem of [@Maroney2012b], this proof does not set any bound on how epistemic a theory could become, and would not appear to be robust against finite precision loopholes.
The proof uses the ontological models framework[@HS2007]:
1. After a quantum state ${\ensuremath{\left| \psi \right\rangle}}$ is prepared the system is actually in a physical state $\lambda$, called the *ontic* state. $\lambda$ occurs with probability $\mu_{\psi}(\lambda)$. $$\begin{array}{c}
\mu_{\psi}(\lambda) \ge 0 \\
\int \mu_{\psi}(\lambda) d\lambda=1
\end{array}$$
2. A measurement procedure, $M$, has a number of possible outcomes $\{Q\}$, and has a probability, $\xi_M(Q|\lambda)$, of obtaining a particular outcome $Q$, given the ontic state $\lambda$. The preparation procedure only influences the measurement outcomes indirectly, through the possible physical states prepared:
$$\begin{array}{c}
\xi_M(Q|\lambda) \ge 0 \\
\sum_Q \xi_M(Q| \lambda) =1
\end{array}$$
3. An ontological model will reproduce the results of quantum theory if, and only if: $$\int \mu_\psi(\lambda)\xi_M(Q|\lambda)d\lambda
=
{\ensuremath{{\ensuremath{\left| {{{\ensuremath{\left\langle {Q} \right.}}{\ensuremath{\left| {\psi} \right\rangle}}}} \right|}}^2}}$$
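As a sanity check of the framework (our own toy illustration, not part of the original argument), the trivially $\Psi$-ontic model in which the ontic state is the quantum state itself, $\mu_\psi = \delta_{\lambda,\psi}$ and $\xi_M(\phi|\lambda) = |\langle\phi|\lambda\rangle|^2$, satisfies the reproduction condition of item 3 exactly:

```python
import numpy as np

# Toy check (our illustration): a psi-ontic model where lambda = psi
# (a point measure mu_psi) with response xi_M(phi|lambda) = |<phi|lambda>|^2
# reproduces the Born rule |<phi|psi>|^2.
def ket(theta):
    # real qubit state cos(theta)|0> + sin(theta)|1>
    return np.array([np.cos(theta), np.sin(theta)])

psi, phi = ket(0.3), ket(1.1)
ontic_states = [psi]                 # support of mu_psi (a single point)
mu = [1.0]                           # its weight
xi = lambda outcome, lam: abs(outcome @ lam) ** 2
model_prob = sum(m * xi(phi, lam) for m, lam in zip(mu, ontic_states))
born_prob = abs(phi @ psi) ** 2
```

The response functions are also properly normalised over a complete (two-outcome) measurement, as the framework requires.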
The set $\Lambda_{\phi}=\{\lambda : \mu_{\phi}(\lambda)>0\}$ represents the set of all possible ontic states which may occur when preparing the quantum state ${\ensuremath{\left| \phi \right\rangle}}$. By contrast, $\Lambda^{\phi}=\{\lambda : \xi_{M}(\phi|\lambda)>0\}$ represents the set of all possible ontic states which may reveal the measurement outcome ${{{\ensuremath{\left| {{\phi}} \right\rangle}}{\ensuremath{\left. {\ensuremath{\left\langle {{\phi}} \right.}} \right|}}}}$ when measuring $M$. While it is clearly the case that $\Lambda_{\phi}\subseteq \Lambda^{\phi}$, the essence of Harrigan and Rudolph’s theorem is that, for any ontological model for quantum theory, there must exist $\varphi$ such that $\Lambda_{\varphi}\subset \Lambda^{\varphi}$.
As $\int \mu_{\phi}(\lambda)\xi_M(\phi|\lambda)d\lambda=
{\ensuremath{{\ensuremath{\left| {{{\ensuremath{\left\langle {\phi} \right.}}{\ensuremath{\left| {\phi} \right\rangle}}}} \right|}}^2}}=1$ $$\forall \lambda \in \Lambda_{\phi} \;\;\; \xi_M(\phi|\lambda)=1$$
If ${{\ensuremath{\left\langle {\psi} \right.}}{\ensuremath{\left| {\phi} \right\rangle}}}=0$ then $
\int \mu_{\psi}(\lambda)\xi_M(\phi|\lambda)d\lambda
=
{\ensuremath{{\ensuremath{\left| {{{\ensuremath{\left\langle {\phi} \right.}}{\ensuremath{\left| {\psi} \right\rangle}}}} \right|}}^2}}=0
$ and $$\forall \lambda \in \Lambda_{\psi} \;\;\; \xi_M(\phi|\lambda)=0$$ which implies[^1] $\Lambda_{\psi} \cap \Lambda_{\phi}=\emptyset$ .
If ${\ensuremath{\left| \psi \right\rangle}}$ and ${\ensuremath{\left| \phi \right\rangle}}$ are non-orthogonal, then: $$\int \mu_{\psi}(\lambda)\xi_M(\phi|\lambda)d\lambda
=
{\ensuremath{{\ensuremath{\left| {{{\ensuremath{\left\langle {\phi} \right.}}{\ensuremath{\left| {\psi} \right\rangle}}}} \right|}}^2}} \neq 0$$ This allows the possibility that $\Lambda_{\psi} \cap \Lambda_{\phi}$ is not empty.
A [$\Psi$-epistemic ]{}theory is one in which there exist at least two distinct non-orthogonal quantum states, ${\ensuremath{\left| \psi \right\rangle}}$ and ${\ensuremath{\left| \phi \right\rangle}}$, for which $\Lambda_{\psi} \cap \Lambda_{\phi} \neq \emptyset$.
There is a bound on the measure of the epistemic overlap. As $\forall \lambda \in \Lambda_{\phi} \;\;\; \xi_M(\phi|\lambda)=1$ $$\begin{array}{rl}
\int_{\Lambda_{\phi}} \mu_{\psi}(\lambda) d\lambda& =\int_{\Lambda_{\phi}} \mu_{\psi}(\lambda)\xi_M(\phi|\lambda)d\lambda \\
& \le \int \mu_{\psi}(\lambda)\xi_M(\phi|\lambda)d\lambda={\ensuremath{{\ensuremath{\left| {{{\ensuremath{\left\langle {\phi} \right.}}{\ensuremath{\left| {\psi} \right\rangle}}}} \right|}}^2}}
\end{array}$$
In a *maximally* [$\Psi$-epistemic ]{}theory, for any two quantum states: $$\int_{\Lambda_{\phi}} \mu_{\psi}(\lambda)d\lambda
=
{\ensuremath{{\ensuremath{\left| {{{\ensuremath{\left\langle {\phi} \right.}}{\ensuremath{\left| {\psi} \right\rangle}}}} \right|}}^2}}$$ It is not hard to see that this will require $$\int_{\Lambda_{\phi}} \mu_{\psi}(\lambda)\xi_M(\phi|\lambda)d\lambda = \int \mu_{\psi}(\lambda)\xi_M(\phi|\lambda)d\lambda$$ and hence that $\Lambda_{\phi}=\Lambda^{\phi}$, showing the connection to the quantum deficiency theorem.
Now take an arbitrary state ${\ensuremath{\left| \Psi \right\rangle}}$ and expand it in an arbitrary basis $\{{\ensuremath{\left| \alpha_i \right\rangle}}\}$ $${\ensuremath{\left| \Psi \right\rangle}}=\sum_i {{\ensuremath{\left\langle {\alpha_i} \right.}}{\ensuremath{\left| {\Psi} \right\rangle}}}{\ensuremath{\left| \alpha_i \right\rangle}}$$ For each overlap: $$\int_{\Lambda_{\alpha_i}} \mu_{\Psi}(\lambda)d\lambda={\ensuremath{{\ensuremath{\left| {{{\ensuremath{\left\langle {\Psi} \right.}}{\ensuremath{\left| {\alpha_i} \right\rangle}}}} \right|}}^2}}$$ so (as ${{\ensuremath{\left\langle {\alpha_i} \right.}}{\ensuremath{\left| {\alpha_j} \right\rangle}}}=0$ for $i \neq j$, then $\Lambda_{\alpha_i}\cap \Lambda_{\alpha_j}=\emptyset$) $$\int_{\cup_i\Lambda_{\alpha_i}} \mu_{\Psi}(\lambda)d\lambda=\sum_i {\ensuremath{{\ensuremath{\left| {{{\ensuremath{\left\langle {\Psi} \right.}}{\ensuremath{\left| {\alpha_i} \right\rangle}}}} \right|}}^2}}=1$$ which implies $\Lambda_{\Psi} \subseteq \cup_i\Lambda_{\alpha_i}$, up to a set of measure zero.
However, all ontic states in $\cup_i \Lambda_{\alpha_i}$ satisfy $\xi_M (\alpha_j| \lambda) \in \{0,1\}$ for any $M$. As this must hold for arbitrary ${\ensuremath{\left| \Psi \right\rangle}}$ and arbitrary bases, in a maximally [$\Psi$-epistemic ]{}theory all ontic states are non-contextually value definite for all projectors. This immediately contradicts the Kochen-Specker theorem, which proves that there is no model of quantum theory which is non-contextually value definite in a Hilbert space of dimension $d \ge 3$.
Although this provides a proof that maximally [$\Psi$-epistemic ]{}theories are not possible, unlike[@Maroney2012b] it does not establish an empirically testable bound on the question: how close to maximally [$\Psi$-epistemic ]{}can one get? Further, the Kochen-Specker theorem allows a finite precision loophole[@Meyer1999; @Kent1999; @CK2001; @BK2004; @Hermens2011], that can be exploited to allow non-contextual theories to get arbitrarily close to quantum statistics, so it seems unlikely that this proof could be made robust against experimental error.
**Acknowledgements** I would like to thank Chris Timpson for helpful discussions. This research is supported by the John Templeton Foundation.
[LJBR12]{}
J Barrett and A Kent. Non-contextuality, finite precision measurement and the [K]{}ochen–[S]{}pecker theorem. , 35:151–176, 2004.
R K Clifton and A Kent. Simulating quantum mechanics by non-contextual hidden variables. , 456:2101–2114, 2001.
R K Clifton. Getting contextual and nonlocal elements-of-reality the easy way. , 61:443–447, 1993.
R Hermens. The problem of contextuality and the impossibility of experimental metaphysics thereof. , 42:214–225, 2011.
N Harrigan and T Rudolph. Ontological models and the interpretation of contextuality. , 2007. arXiv:0709.4266.
N Harrigan and R W Spekkens. Einstein, incompleteness and the epistemic view of quantum states. , 40:125, 2010. arXiv:0706.266.
A Kent. Noncontextual hidden variables and physical measurements. , 83(19):3755–3757, 1999.
S Kochen and E Specker. The problem of hidden variables in quantum mechanics. , 17:59–87, 1967.
P G Lewis, D Jennings, J Barrett, and T Rudolph. The quantum state can be interpreted statistically. , 2012. arXiv:1201.6554v1.
O J E Maroney. How statistical is the quantum state? , 2012. arXiv:1207.6906v1.
D A Meyer. Finite precision measurement nullifies the [K]{}ocken-[S]{}pecker theorem. , 83(19):3751–3754, 1999.
R W Spekkens. Evidence for the epistemic view of quantum states: A toy theory. , 75:032110, 2007.
[^1]: Ignoring sets of measure zero.
|
---
abstract: 'Smart contracts are autonomous software executing predefined conditions. Two of the biggest advantages of smart contracts are secured protocols and reduced transaction costs. On Ethereum, an open-source blockchain-based platform, smart contracts implement a distributed virtual machine on the distributed ledger. To avoid denial of service attacks and to monetize the services, payment transactions are executed whenever code is executed between contracts. It is thus natural to investigate whether predictive analysis can forecast these interactions. We address this issue and propose an innovative application of the CANDECOMP/PARAFAC tensor decomposition to the temporal link prediction of smart contracts. We introduce a new approach leveraging stochastic processes for series prediction based on the tensor decomposition, which can be used for predictive analytics of smart contracts.'
author:
-
-
-
title: |
Modeling Smart Contracts Activities:\
A Tensor Based Approach
---
Tensors; CANDECOMP/PARAFAC Decomposition; Stochastic Processes Simulation
INTRODUCTION
============
TENSOR DECOMPOSITION
====================
In this section, we briefly describe mathematical operations involved in CP tensor decomposition before presenting the non-negative CP algorithm used for the analysis.
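For readers who want to experiment, a minimal non-negative CP decomposition of a 3-way tensor can be written with multiplicative ALS updates. This is only a sketch of the kind of factorisation discussed here, not the authors' implementation (which they state is written in Python); the dimensions, rank and data below are our own.

```python
import numpy as np

# Minimal non-negative CP (CANDECOMP/PARAFAC) via multiplicative ALS
# on a synthetic tensor with exact nonnegative rank-R structure.
def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(Afac, Bfac):
    # column-wise Kronecker product, rows ordered (i, j) -> i*J + j
    return np.einsum('ir,jr->ijr', Afac, Bfac).reshape(-1, Afac.shape[1])

rng = np.random.default_rng(0)
I, J, K, R = 6, 5, 4, 3
U = [rng.random((d, R)) for d in (I, J, K)]        # ground-truth factors
T = np.einsum('ir,jr,kr->ijk', *U)                 # exactly rank-R tensor
F = [rng.random((d, R)) + 0.1 for d in (I, J, K)]  # positive initialisation
err0 = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', *F)) / np.linalg.norm(T)
for _ in range(500):
    for m in range(3):
        KR = khatri_rao(*[F[i] for i in range(3) if i != m])
        F[m] *= (unfold(T, m) @ KR) / (F[m] @ (KR.T @ KR) + 1e-12)
rel_err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', *F)) / np.linalg.norm(T)
```

Libraries such as TensorLy (cited below) provide production-ready versions of this decomposition; the point of the sketch is only to show the factor-matrix updates.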
STOCHASTIC SERIES PREDICTION
============================
In this section, we first present the log-normal and mean-reverting stochastic models separately, and then propose our approach, a combined log-normal mean-reverting stochastic model used for series prediction of smart contract activities.
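One way to read a "log-normal mean-reverting" model (our interpretation, not necessarily the authors' exact specification) is as an exponential Ornstein-Uhlenbeck process: the logarithm of the series reverts to a long-run level $\theta$ at speed $\kappa$ with Gaussian shocks of size $\sigma$, so the series itself stays positive and is log-normally distributed in its stationary regime.

```python
import numpy as np

# Sketch (our illustration): Euler-Maruyama simulation of an exponential
# Ornstein-Uhlenbeck process,  dx = kappa*(theta - x) dt + sigma dW,
# with the observed series S = exp(x). Parameter values are made up.
rng = np.random.default_rng(42)
kappa, theta, sigma = 2.0, np.log(10.0), 0.2
dt, n = 1 / 250, 5000
x = np.empty(n)
x[0] = np.log(3.0)                         # start far below the long-run level
for t in range(1, n):
    x[t] = (x[t - 1] + kappa * (theta - x[t - 1]) * dt
            + sigma * np.sqrt(dt) * rng.standard_normal())
series = np.exp(x)
```

After a few multiples of the relaxation time $1/\kappa$, the simulated path fluctuates around $e^{\theta}$, which is the qualitative behaviour a mean-reverting activity series should exhibit.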
EXPERIMENTS
===========
In this section, we describe the data used for the tensor decomposition and the simulation of smart contract activities using our log-normal mean-reverting model, with the goal of speculative investment.\
All the experiments are performed on a PC with Intel Core i7 CPU and 8 GB of RAM. The algorithm for non-negative CP decomposition and stochastic processes has been implemented in Python language.
CONCLUSIONS
===========
We address in this paper the problem of time series prediction on smart contracts, applying a stochastic process to the CP tensor decomposition. We obtain accurate predictions of the probabilities of Ether exchange for sender and receiver accounts, which can be fitted to the risk profile of an investor or to an investment strategy. As a result, our approach can be used for the analysis of smart contract activities, but also by anyone willing to consider smart contracts as a financial investment.
However, some challenges remain for future work. One challenge is to use stochastic parameters for the volatility of the time series process or for the correlation involved in the system of stochastic equations. This would help increase the accuracy of the simulations, in particular for longer time horizons, and better reflect series variation over time. In addition, while the well-known CP decomposition has been used here, other decompositions, such as the DEDICOM decomposition, could be used to enrich the interaction analysis of smart contract activities.
---
author:
- 'Andrea Surroca Ortiz (ETH Zurich)[^1]'
bibliography:
- 'mabiblio-mw-ts.bib'
title: 'On some conjectures on the Mordell-Weil and the Tate-Shafarevich groups of an abelian variety'
---
> **Abstract.** [We consider an abelian variety defined over a number field. We give conditional bounds for the order of its Tate-Shafarevich group, as well as bounds for the Néron-Tate height of generators of its Mordell-Weil group. The bounds are implied by strong but nowadays classical conjectures, such as the Birch and Swinnerton-Dyer conjecture and the functional equation of the $L$-series. In particular, we generalise a result by D. Goldfeld and L. Szpiro on the order of the Tate-Shafarevich group. The method is an extension of the algorithm proposed by Yu. Manin for finding a basis for the non-torsion rational points of an elliptic curve defined over the rationals. ]{}
[2000 Mathematics Subject Classification: 11G10, 11G40, 14G05, 11G50.]{}
Introduction {#intro}
============
The Mordell-Weil theorem says that the group of rational points on an abelian variety $A/K$ defined over a number field is finitely generated: $A(K) \simeq A(K)_{tors} \times {\mathbb{Z}}^{{\mathrm{rk}}(A(K))}$. Nowadays, even in the case of an elliptic curve, there is no way, in general, to compute the torsion part, the rank or a set of generators of this group. For some applications, it would be sufficient to bound their sizes. There exist bounds for the cardinality of the torsion subgroup, e.g., in the case of elliptic curves, [@merel]. The proof of the Weak Mordell-Weil theorem gives an upper bound for the rank; such an explicit bound is obtained in [@ooe-top]. As for the generators, Yu. Manin [@manin] proposed an algorithm for finding a basis for the non-torsion rational points of an elliptic curve. From Manin’s algorithm one could deduce a bound for the Néron-Tate height of the generators. The approach relies on the conjecture of B. J. Birch and H. P. F. Swinnerton-Dyer [@birch-swinnerton-dyer] (BSD-conjecture for short) and on the hypothesis that the $L$-series of the elliptic curve satisfies a functional equation. S. Lang modified the heuristic approach of Yu. Manin and proposed the following conjecture [@lang.conj.dio Conjecture 3]: [*Let $E$ be an elliptic curve defined over ${\mathbb{Q}}$. Denote ${\mathcal{F}}_{E/{\mathbb{Q}}}$ the conductor, $H_{Falt}(E/{\mathbb{Q}})$ the exponential Faltings height and ${\mathrm{rk}}(E({\mathbb{Q}}))$ the Mordell-Weil rank of $E/{\mathbb{Q}}$. We can find a basis $\{P_{1}, \ldots, P_{r}\}$ of the free part of $E({\mathbb{Q}})$ satisfying*]{} $$\label{lang'sconj}
\max_{1\leq i \leq r}\hat{h}(P_i) \ll c^{{\mathrm{rk}}(E({\mathbb{Q}}))^{2}}\cdot {\mathcal{F}}_{E/{\mathbb{Q}}}^{\epsilon({\mathcal{F}}_{E/{\mathbb{Q}}})} \cdot (\log {\mathcal{F}}_{E/{\mathbb{Q}}})^{{\mathrm{rk}}(E({\mathbb{Q}}))} \cdot H_{Falt}(E/{\mathbb{Q}}),$$ [*where $c$ is an absolute constant and $\epsilon$ is a function which does not depend on the rank and $\epsilon({\mathcal{F}})$ tends to 0 as ${\mathcal{F}}$ tends to infinity.*]{}
On the other hand, from the proof of the Weak Mordell-Weil theorem we know that, for all $n \geq 1$, the $n$-torsion part of the Tate-Shafarevich group is finite. It is conjectured that the whole Tate-Shafarevich group is finite; the conjecture is known for certain elliptic curves with complex multiplication ([@rubin1987]) and certain modular elliptic curves ([@kolyvagin1989]). D. Goldfeld and L. Szpiro [@goldfeld-szpiro] suggested the following bound for the order of the Tate-Shafarevich group $\Sha(E/K)$ of an elliptic curve, in terms of the conductor. [*Let $E$ be an elliptic curve defined over a field $K$, which can be a number field or a function field. Then, for every $\epsilon >0$, $$\label{conj.g-sz.sha<cond}
|\Sha(E/K)| = O(N_{K/{\mathbb{Q}}}({\mathcal{F}}_{E/K})^{1/2 + \epsilon}),$$ where the implicit constant in the $O$ depends on $\epsilon$, $K$ and ${\mathrm{rk}}(E(K))$.*]{} In the same article they proved that this conjecture holds for elliptic curves defined over function fields provided the Tate-Shafarevich group of the function field is finite. D. Goldfeld and D. Lieman [@goldfeld-lieman] proved that, for a CM elliptic curve defined over ${\mathbb{Q}}$ with Mordell-Weil rank 0, we have $|\Sha(E/{\mathbb{Q}})| < k(\epsilon) {\mathcal{F}}_{E/{\mathbb{Q}}}^{\delta + \epsilon}$, with $\delta = \frac{59}{120}$ if $j \ne 0, 1728$, $\delta = \frac{37}{60}$ if $j = 0$ and $\delta = \frac{79}{120}$ if $j = 1728$, where $k(\epsilon)$ depends only on $\epsilon$ and is effectively computable. It is also proved in [@goldfeld-szpiro Theorem 1] that, if the curve $E$ is defined over ${\mathbb{Q}}$ and satisfies the BSD-conjecture and Szpiro’s conjecture (which bounds the discriminant in terms of the conductor), then $|\Sha(E/{\mathbb{Q}})| \ll {\mathcal{F}}_{E/{\mathbb{Q}}}^{7/4 + \epsilon({\mathcal{F}}_{E/{\mathbb{Q}}})}$, where $\epsilon({\mathcal{F}})$ tends to $0$ when ${\mathcal{F}}$ tends to infinity.
In this article, we consider an abelian variety $A$ defined over a number field $K$. We are interested in giving, under the assumption of the BSD-conjecture, bounds for its regulator and the order of its Tate-Shafarevich group, as well as bounds for the Néron-Tate height of the generators of its Mordell-Weil group. The bounds are given in terms of more tractable objects associated to the variety and the number field. Precisely, our bounds depend on the dimension $g$ of $A$, the absolute value ${\mathcal{F}}= N_{K/{\mathbb{Q}}}{\mathcal{F}}_{A/K}$ of the norm of the conductor, the Faltings’ height $h= h_{Falt}(A/K)$, the Mordell-Weil rank $r = {\mathrm{rk}}(A(K))$, the degree $[K:{\mathbb{Q}}]$ and the absolute value $D_K$ of the discriminant of $K$. Moreover the dependence is explicit in all the parameters, with the exception of $g$ and $[K:{\mathbb{Q}}]$. We denote by $H = \exp\{h \}$ the exponential height.
On the one hand, we give a conditional upper bound for the Néron-Tate height of the elements of a (particular) basis of the free part of the Mordell-Weil group $A(K)$.
\[langs-conj\] Let $A$ be an abelian variety defined over a number field $K$. Suppose that the $L$-series of $A/K$ satisfies a functional equation (Conjecture \[funct-eq\]) and the BSD-conjecture (Conjecture \[bsd\]). Then we can choose a system $\{P_{1}, \ldots, P_{r}\}$ of generators for the free part of the Mordell-Weil group $A(K)$ such that $\hat{h}(P_{1}) \leq \ldots \leq \hat{h}(P_{r})$ and $$\hat{h}(P_r) \leq
c_{[K:{\mathbb{Q}}],g}\cdot (r!)^4 \cdot 2^{r} \cdot (c_{[K:{\mathbb{Q}}]})^{1-r} \cdot D_{K}^{g}\cdot {\mathcal{F}}^{\frac{1}{4}}\cdot (\log {\mathcal{F}})^{4g[K:{\mathbb{Q}}]}\cdot (\log ({\mathcal{F}}\cdot D_{K}^{2g}))^{2g[K:{\mathbb{Q}}]} \cdot$$ $$\cdot H^{[K:{\mathbb{Q}}]}\cdot h^{(2g+1)(r-1) + g[K:{\mathbb{Q}}]},$$ where $c_{[K:{\mathbb{Q}}],g}$ depends at most on the dimension $g$ and the degree $[K:{\mathbb{Q}}]$, and $c_{[K:{\mathbb{Q}}]}$ depends at most on $[K:{\mathbb{Q}}]$.
Notice that Conjecture \[funct-eq\] is known for abelian varieties with complex multiplication ([@shimura-taniyama]) and some modular abelian varieties ([@shimura.automorphic.book]). As for Conjecture \[bsd\], the results of [@coates-wiles1977], [@gross-zagier1986], [@rubin1987] and [@kolyvagin1989] provide evidence for its truth. On the other hand, we extend Theorem 1 of [@goldfeld-szpiro] to arbitrary abelian varieties defined over number fields. This gives a conditional upper bound for $|\Sha(A/K)|$. When the dimension equals 1 and the number field is ${\mathbb{Q}}$, our bound improves Theorem 1 of [@goldfeld-szpiro].
\[BSD+Sz-Sha\] Let $A$ be an abelian variety defined over a number field $K$. Suppose that the $L$-series of $A/K$ satisfies a functional equation (Conjecture \[funct-eq\]) and the BSD-conjecture (Conjecture \[bsd\]). Suppose that $A/K$ satisfies the Szpiro’s Conjecture (Conjecture \[gral.szpiro\]). Then, for every $\epsilon >0$, $$|\Sha(A/K)| \leq \delta_{[K:{\mathbb{Q}}], g}(r) \cdot D_{K}^{g + (g^2 + \epsilon)[K:{\mathbb{Q}}]} \cdot
{\mathcal{F}}^{\frac{1}{4} + \left(\frac{g}{2} + \epsilon \right)[K:{\mathbb{Q}}]}\cdot (\log {\mathcal{F}})^{4g[K:{\mathbb{Q}}]}\cdot$$ $$\cdot (\log ({\mathcal{F}}\cdot D_{K}^{2g}))^{2g[K:{\mathbb{Q}}]} \cdot \left(\left(\frac{g}{2} + \epsilon \right)\log {\mathcal{F}}+ (g^{2} + \epsilon) \log D_{K} + c_{\epsilon, [K:{\mathbb{Q}}]}\right)^{r(2g+1) + g[K:{\mathbb{Q}}]},$$ where $\delta_{[K:{\mathbb{Q}}], g}(r) = c_{[K:{\mathbb{Q}}],g}\cdot (r!)^4 \cdot 2^{r} \cdot (c_{[K:{\mathbb{Q}}]})^r \cdot e^{[K:{\mathbb{Q}}]c_{\epsilon, [K:{\mathbb{Q}}]}}$, $c_{[K:{\mathbb{Q}}],g}$ depends only on $g$ and $[K:{\mathbb{Q}}]$, $c_{[K:{\mathbb{Q}}]}$ depends only on $[K:{\mathbb{Q}}]$ and $c_{\epsilon, [K:{\mathbb{Q}}]}$ depends at most on $\epsilon$ and $[K:{\mathbb{Q}}]$.
Recently, M. Hindry treated this topic for abelian varieties in [@hindry.mordell-weil]. He emphasizes the analogy with the classical Brauer-Siegel formula for number fields. He formulates a conjecture comparing the product of the Tate-Shafarevich group and the regulator with the height of the variety: [*For all $\epsilon >0,$ $H_{Falt}(A/K)^{(1-\epsilon)} \ll |\Sha(A/K)| \cdot {\mathrm{Reg}}(A/K) \ll H_{Falt}(A/K)^{(1+\epsilon)}$, where the implicit constants depend on $K, g, \epsilon$ and ${\mathrm{rk}}(A(K))$.* ]{} In [@manin], [@lang.conj.dio] and [@goldfeld-szpiro] the argument is developed when the dimension is 1 and the number field is ${\mathbb{Q}}$, while in [@hindry.mordell-weil] the dependence of the bounds on the number field is not always made explicit. As pointed out in our joint work with V. Bosser [@bosser-surroca], this dependence could play an important rôle. For example, the discriminant of the number field appears in bounds for the rank of the variety. In fact, the rank can be bounded in terms of the logarithm of the discriminant of $K$ ([@ooe-top]). Therefore, we consider here an arbitrary number field and make explicit the dependence on the number field. Furthermore, contrary to [@lang.conj.dio] and [@hindry.mordell-weil], the bounds given here are not conjectured, but implied by strong but nowadays classical conjectures.
The method is an extension of the one proposed by Yu. Manin, based on the BSD-conjecture. The BSD-conjecture predicts the behavior of the $L$-series of the abelian variety $A$ at the center of symmetry, that is, at $s=1$. In fact, it states that the order of vanishing of $L(A/K, s)$ at $s=1$ equals the Mordell-Weil rank of $A/K$. Furthermore, it gives a formula which relates the value of the leading coefficient of the Taylor expansion of $L(A/K, s)$ at $s=1$ to the product of the Tate-Shafarevich group, the canonical regulator and some other objects associated to the variety. The notations and the data concerning the abelian variety can be found in the next section. The core of our results is in section \[section-lemmes\], where we bound the product of the Tate-Shafarevich group and the canonical regulator (Proposition \[prop-sha.reg\]). To do so, we bound each of the other terms of the BSD-formula. To deal with the leading coefficient of the Taylor expansion of the $L$-series we use the functional equation (Lemma \[coeff-dominant\]). We then relate the local periods to the Faltings’ height of $A/K$ (Lemma \[local-falt-ineg\]). We also give a bound for the torsion part of the Mordell-Weil group (Lemma \[lemma-torsion\]). In section \[section-geom.nbers\] we recall some classical results on the geometry of numbers. We quote, in section \[section-lower-non-torsion\], some lower bounds for the Néron-Tate height of non-torsion points. In section \[section-bounds\] we deduce from the BSD-conjecture the bounds for the highest Néron-Tate height of a set of generators of $A(K)/A(K)_{tors}$ (Theorem \[langs-conj\]) and an upper bound for the order of $\Sha(A(K))$ (Theorem \[BSD+Sz-Sha\]).
In [@bosser-surroca], we apply these results, for an elliptic curve, to show that, using the elliptic analogue of Baker’s method, the BSD-conjecture for a single elliptic curve implies a result in the direction of the $abc$-conjecture over number fields.
Notations
=========
Throughout the text we will consider an abelian variety $A$ of dimension $g$ defined over a number field $K$. We denote $D_K$ the absolute value of the discriminant of the field $K$. To $A$ one can associate different objects, such as the conductor, the $L$-function, the Faltings height, the Tate-Shafarevich group and the regulator. For the notations we follow [@serre.zeta], [@lockhart-rosen-silverman], [@milne], [@manin], [@gross], [@tate.bourbaki.bsd] and [@cornell-silverman].
Let $v$ be a finite place of $K$ which corresponds to a prime ideal ${\mathfrak{p}}$. Denote $K_v$ or $K_{{\mathfrak{p}}}$ the completion of $K$ at $v$. For any prime ideal ${\mathfrak{p}}$ of $K$, fixing a prime in $K_{{\mathfrak{p}}}$ above ${\mathfrak{p}}$ gives us a decomposition group $G_{{\mathfrak{p}}} = {\mathrm{Gal}}(\overline{K_{{\mathfrak{p}}}}/K_{{\mathfrak{p}}})$ for ${\mathfrak{p}}$ in ${\mathrm{Gal}}({\overline{K}}/K)$. Let $I_{{\mathfrak{p}}}$ be the inertia subgroup of $G_{{\mathfrak{p}}}$, inducing the identity on the residue field $k({\mathfrak{p}})$. Let $\pi_{{\mathfrak{p}}}$ denote the Frobenius which generates the quotient $G_{{\mathfrak{p}}}/I_{{\mathfrak{p}}}$. (Up to conjugation, $G_{{\mathfrak{p}}}, I_{{\mathfrak{p}}}$ and $\pi_{{\mathfrak{p}}}$ depend only on ${\mathfrak{p}}$.) Let $l$ be any prime, $l \ne char ( k({\mathfrak{p}}) )$. Denote $A[N]$ the $N$-torsion of $A$ for an integer $N$, $T_l(A)= \varprojlim A[l^n]$ the $l$-adic Tate module, and $V_l(A/K)= T_l(A)\otimes_{{\mathbb{Z}}_l}{\mathbb{Q}}_l$ the associated ${\mathbb{Q}}_l$-vector space. Since ${\mathrm{Gal}}({\overline{K}}/K)$ acts on $V_l(A/K)$, we have an $l$-adic representation $\rho : {\mathrm{Gal}}({\overline{K}}/K) \rightarrow GL(V_{l}(A/K))$.
The [*conductor*]{} of the abelian variety $A/K$ is the integral ideal of $K$ defined by $${\mathcal{F}}_{A/K} = \prod {\mathfrak{p}}^{f_{{\mathfrak{p}}}},$$ where the product runs over the prime ideals ${\mathfrak{p}}$ of $K$, and $f_{{\mathfrak{p}}}$ is a non-negative integer, called the [*exponent of the conductor*]{}, which we will define below. The exponent $f_{{\mathfrak{p}}}$ is zero if and only if $A$ has good reduction at ${\mathfrak{p}}$, so this product is finite. Let $p$ be the prime number lying below ${\mathfrak{p}}$. It is known that if $p> 2g+1$, then $f_{{\mathfrak{p}}} \leq 2g$. Unconditionally, it is proven in [@lockhart-rosen-silverman] that $f_{{\mathfrak{p}}} \leq 12 g^2 v_{K_{{\mathfrak{p}}}} (p)$ (see [@brumer-kramer1994] for best possible upper bounds in all cases).
As in [@serre.zeta], we will attach to $\rho$ two positive integers $\varepsilon_{{\mathfrak{p}}}(l)$ and $\delta_{{\mathfrak{p}}}(l)$ which measure the ramification of $\rho$. For the notations we follow [@lockhart-rosen-silverman]. Denote $V_{l}(A/K)^{I_{{\mathfrak{p}}}}$ the submodule of elements fixed by $I_{{\mathfrak{p}}}$. Define $$\varepsilon_{{\mathfrak{p}}}(l)= {\mathrm{codim}}_{{\mathbb{Q}}_l}V_{l}(A/K)^{I_{{\mathfrak{p}}}}.$$ Let $L_{{\mathfrak{p}}}= K_{{\mathfrak{p}}}(A[l])$ be the field generated over $K_{{\mathfrak{p}}}$ by the $l$-torsion points of $A$. Denote $v_{L_{{\mathfrak{p}}}}$ the normalized valuation on $L_{{\mathfrak{p}}}$. Let $\pi_{L_{{\mathfrak{p}}}}$ be a uniformizer for $L_{{\mathfrak{p}}}$. Denote $G_i= \{\sigma \in {\mathrm{Gal}}(L_{{\mathfrak{p}}}/K_{{\mathfrak{p}}}); v_{L_{{\mathfrak{p}}}}(\sigma \pi_{L_{{\mathfrak{p}}}} - \pi_{L_{{\mathfrak{p}}}}) \geq i+1 \}$ the $i$-th inertia group associated to $L_{{\mathfrak{p}}}/K_{{\mathfrak{p}}}$ and $g_i = |G_i|$ its order. Write $g_0 = |{\mathrm{Gal}}(L_{{\mathfrak{p}}}/K_{{\mathfrak{p}}})|$. Define $$\delta_{{\mathfrak{p}}}(l)= \sum_{i\geq 1} \frac{g_i}{g_0} \dim_{\mathbf{F}_l}\left(\frac{A[l]}{A[l]^{G_i}}\right).$$ It has been proved (see the references in [@lockhart-rosen-silverman]) that $\varepsilon_{{\mathfrak{p}}}(l)$ and $\delta_{{\mathfrak{p}}}(l)$ are independent of $l$, so we will denote them by $\varepsilon_{{\mathfrak{p}}} $ and $\delta_{{\mathfrak{p}}}$. They are called the [*tame*]{} part and the [*wild*]{} part of the conductor, respectively. The exponent of the conductor is given by $$f_{{\mathfrak{p}}}= \varepsilon_{{\mathfrak{p}}} + \delta_{{\mathfrak{p}}}.$$
Let us define the $L$[*-series*]{}, also called the $\zeta$[*-function*]{}, of the variety $A$ (see [@serre.zeta Section 4]). Since the Frobenius is defined up to $I_{{\mathfrak{p}}}$, it makes sense to define a polynomial $P_{A, {\mathfrak{p}}} (T) = \det (1 - (\rho (\pi_{{\mathfrak{p}}}) | V_l(A/K)^{I_{{\mathfrak{p}}}})T)$, where $\pi_{{\mathfrak{p}}}$ is regarded as acting on the submodule $V_{l}(A/K)^{I_{{\mathfrak{p}}}}$ of elements fixed by $I_{{\mathfrak{p}}}$. The polynomial $P_{A, {\mathfrak{p}}} (T)$ has integral coefficients which are independent of $l$ ([@serre-tate Theorem 3]). Define $$L(A/K,s) = \prod _{v_{{\mathfrak{p}}}} P_{A,{\mathfrak{p}}}(N(v_{{\mathfrak{p}}})^{-s})^{-1}$$ where the product is taken over all non-archimedean places $v_{{\mathfrak{p}}}$ of $K$ and $N(v_{{\mathfrak{p}}})$ is the norm of the prime ideal ${\mathfrak{p}}$ associated to $v_{{\mathfrak{p}}}$. Define the [*normalized*]{} $L$[*-function*]{} by $$\Lambda(A/K,s) = (N_{K/{\mathbb{Q}}}( \mathcal{F}_{A/K}) \cdot D_{K}^{2g})^{s/2} \cdot ((2\pi)^{-s} \cdot \Gamma (s))^{g[K:{\mathbb{Q}}]} \cdot L(A/K, s).$$ (Observe that the product $\prod_{v \in M_K^{\infty}}\Gamma_v(s)$ of the $\Gamma$-factors equals $((2\pi)^{-s} \cdot \Gamma (s))^{g[K:{\mathbb{Q}}]}$. In fact, $\Gamma_v(s)$ equals $\Gamma_{{\mathbb{C}}}(s)^{2g}$ if $v$ is complex and, $\Gamma_{{\mathbb{C}}}(s)^{g}$ if $v$ is real, where $\Gamma_{{\mathbb{C}}}(s) = (2\pi)^{-s}\Gamma(s)$ (see [@serre.zeta section 3]).) The Euler product converges and gives an analytic function for all $s$ satisfying $\Re(s) > \frac{3}{2}$. We have a classical generalisation of a conjecture of Hasse-Weil.
\[Hasse-Weil\] \[funct-eq\] Let $A/K$ be an abelian variety defined over a number field. The $L$-series and the $\Lambda$-series of $A/K$ have an analytic continuation to the entire complex plane and the $\Lambda$-series satisfies the functional equation $$\Lambda(A/K, 2-s) = \varepsilon \Lambda (A/K, s), {\hspace{0,2cm}}\textrm{for some }{\hspace{0,2cm}}\varepsilon = \pm 1.$$
This conjecture is true for abelian varieties with complex multiplication ([@shimura-taniyama]); in some special cases it is also true for modular abelian varieties ([@shimura.automorphic.book]); and it is true for elliptic curves over ${\mathbb{Q}}$ ([@wiles1995] and [@breuil-conrad-diamond-taylor]).
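As a concrete illustration of the local factors defined above, in the one-dimensional case ($g = 1$, $K = {\mathbb{Q}}$) the polynomial $P_{E,p}$ at a prime $p$ of good reduction takes the standard form $1 - a_pT + pT^2$ with $a_p = p + 1 - \#E(\mathbb{F}_p)$, and can be computed by naive point counting. The sketch below uses the hypothetical curve $y^2 = x^3 + x + 1$, chosen only for illustration:

```python
import math

# Local Euler factors of L(E/Q, s) for the illustrative curve
# E: y^2 = x^3 + x + 1.  At a prime p of good reduction,
# P_{E,p}(T) = 1 - a_p*T + p*T^2 with a_p = p + 1 - #E(F_p).

A, B = 1, 1                              # curve coefficients (hypothetical choice)
disc = 16 * (4 * A ** 3 + 27 * B ** 2)   # p | disc  <=>  bad reduction

def count_points(p):
    """#E(F_p), counting the point at infinity (naive enumeration)."""
    squares = {(y * y) % p for y in range(p)}
    n = 1                          # the point at infinity
    for x in range(p):
        rhs = (x ** 3 + A * x + B) % p
        if rhs == 0:
            n += 1                 # single point with y = 0
        elif rhs in squares:
            n += 2                 # two points (x, y) and (x, -y)
    return n

def a_p(p):
    return p + 1 - count_points(p)

def euler_factor(p, s):
    """P_{E,p}(p^{-s})^{-1}, the local factor at a good prime."""
    t = p ** (-s)
    return 1.0 / (1 - a_p(p) * t + p * t * t)

# Truncated Euler product; it converges for Re(s) > 3/2.
primes = [p for p in range(2, 200)
          if all(p % q for q in range(2, int(math.isqrt(p)) + 1))]
L_at_2 = 1.0
for p in primes:
    if disc % p:                   # restrict to good primes
        L_at_2 *= euler_factor(p, 2.0)
```

The computed values also let one spot-check the Hasse-Weil bound $|a_p| \leq 2\sqrt{p}$, which is the input used later to prove convergence of the Euler product.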
Denote ${\Omega_{A/K}^{1}}$ the sheaf of differential 1-forms on $A/K$ and let $\{\omega_{1}, \ldots, \omega_{g}\}$ be a $K$-basis of $H^{0}(A, {\Omega_{A/K}^{1}})$. Let $\eta= \omega_{1}\wedge \ldots \wedge \omega_{g}$ be a non zero differential $g$-form on $A$. Let ${\mathcal{A}}$ denote the Néron model of $A$ over ${\mathcal{O}_{K}}$, let $e : {\mathrm{Spec}({\mathcal{O}_{K}})}\rightarrow {\mathcal{A}}$ be its neutral section and let ${\Omega_{\mathcal{A}/{\mathcal{O}_{K}}}^{g}}$ be the invertible sheaf of differential $g$-forms on ${\mathcal{A}}$. The module $H^{0}({\mathcal{A}}, {\Omega_{\mathcal{A}/{\mathcal{O}_{K}}}^{g}})$ of global invariant differentials on ${\mathcal{A}}$ is a projective ${\mathcal{O}_{K}}$-module of rank 1 and can be written as $$H^{0}({\mathcal{A}}, {\Omega_{\mathcal{A}/{\mathcal{O}_{K}}}^{g}}) = \eta {\mathfrak{a}},$$ where ${\mathfrak{a}}$ is a fractional ideal of $K$ (depending on $\eta$). To every place $v$ of $K$, we will associate a local number $c_v$. For a [*finite*]{} place $v$, let $A^0(K_v)$ be the subgroup of $K_v$-rational points which reduces to the identity component of the Néron model ${\mathcal{A}}$. Denote $$c_v= (A(K_v):A^0(K_v))$$ the index of the subgroup of $K_{v}$-rational points which extends to the connected component in ${\mathcal{A}}$. Let $\mu_v$ be an additive Haar measure on $K_v$ such that $\mu_v(O_{K_v}) = 1$ if $v$ is finite, $\mu_v$ is the Lebesgue measure if $v$ is a real archimedean place (i.e. $K_{v} = {\mathbb{R}}$) and twice the Lebesgue measure if $v$ is complex (i.e. $K_{v} = {\mathbb{C}}$). Define, for an [*archimedean*]{} place $v$, the [*local period*]{} $$c_v = \int_{A(K_{v})}|\eta| \mu_v^g.$$ Remark that the integral $c_{v}$ is non zero. For a non-archimedean place, Yu. Manin defines $m_v= P_v(N_{K/{\mathbb{Q}}}({\mathfrak{p}}_v)^{-1})^{-1} \int_{A(K_{v})} |\eta| \mu_v^g$, which is equivalent to our $c_v$ (see, e. g., [@manin lemma 8.9] when the dimension $g$ is 1).
For the complex places, Yu. Manin chooses $\mu_v$ to be the Lebesgue measure instead of, as we do, twice the Lebesgue measure. So with his definition [@manin formula (38)], $c_v = 2^g m_v$. B. Gross [@gross] gives another, equivalent, formulation for the archimedean places (see [@manin lemma 8.8]). Define also the [*archimedean local factor*]{} as $$c_{\infty}(A/K) = N_{K/{\mathbb{Q}}}({\mathfrak{a}}) \cdot \prod_{v \in M_K^{\infty}}c_v,$$ which is independent of the choice of the differential $\eta$.
The part concerning the local periods $c_v$ can be bounded in terms of the [*Faltings’ height*]{}. We recall here its definition (see [@cornell-silverman Chapter II]). We endow the line bundle ${\Omega_{\mathcal{A}/{\mathcal{O}_{K}}}^{g}}$ with an hermitian metric by defining, for a section $s$ and for every archimedean place $v$, $$|s|_{v} = \left( \left(\frac{i}{2}\right)^{g} \int _{A(\overline{K_{v}})} s \wedge \overline{s} \right)^{\frac{1}{2}}.$$ We define also $$|| s ||_{v} = |s|_{v}^{n_{v}},$$ where $n_{v} = 1$ if $v$ is real and $n_{v} = 2$ if $v$ is complex. This norm extends the norm on $K_{v}$ (i.e. $\forall k \in K_{v}, {\hspace{0,2cm}}\forall s \in {\Omega_{\mathcal{A}/{\mathcal{O}_{K}}}^{g}}\otimes_{{\mathcal{O}_{K}}} K_{v}, {\hspace{0,2cm}}||ks ||_{v} = ||k||_{v} \cdot ||s||_{v}$). Taking the pull-back of ${\Omega_{\mathcal{A}/{\mathcal{O}_{K}}}^{g}}$ and metrics via the neutral section $e : {\mathrm{Spec}({\mathcal{O}_{K}})}\rightarrow {\mathcal{A}}$, we obtain a metrised line bundle on ${\mathrm{Spec}({\mathcal{O}_{K}})}$ (i.e. a projective ${\mathcal{O}_{K}}$-module of rank 1): $${\omega_{\mathcal{A}/{\mathcal{O}_{K}}}}= e^{*}{\Omega_{\mathcal{A}/{\mathcal{O}_{K}}}^{g}}.$$ The line bundle ${\omega_{\mathcal{A}/{\mathcal{O}_{K}}}}$ can be identified with $H^{0}({\Omega_{\mathcal{A}/{\mathcal{O}_{K}}}^{g}}) = \eta {\mathfrak{a}}$. In fact, $e^{*}{\Omega_{\mathcal{A}/{\mathcal{O}_{K}}}^{g}}= \pi_{*} {\Omega_{\mathcal{A}/{\mathcal{O}_{K}}}^{g}}$, where $\pi : {\mathcal{A}}\rightarrow {\mathrm{Spec}({\mathcal{O}_{K}})}$ is the structural morphism, and, since ${\mathrm{Spec}({\mathcal{O}_{K}})}$ is affine, this sheaf can be identified with the module of its global sections $H^{0}({\Omega_{\mathcal{A}/{\mathcal{O}_{K}}}^{g}})$.
The Faltings’ height of $A$ is the [*Arakelov degree*]{} of $\omega_{{\mathcal{A}}/{\mathcal{O}_{K}}}$ : $$h_{\mathrm{Falt}}(A/K) = \frac{1}{[K:{\mathbb{Q}}]} \deg_{\mathrm{Ar}}({\omega_{\mathcal{A}/{\mathcal{O}_{K}}}}, ||.||) = - \frac{1}{[K:{\mathbb{Q}}]} \log \prod_{v \in M_{K}} ||s||_{v},$$ for any section $s$. We will also use the notation $H_{Falt} = \exp\{h_{Falt} \}$. It is well known that $$\deg_{\mathrm{Ar}}({\omega_{\mathcal{A}/{\mathcal{O}_{K}}}}, ||.||) = \log {\mathrm{card}}({\omega_{\mathcal{A}/{\mathcal{O}_{K}}}}/s{\mathcal{O}_{K}}) - \sum_{v | \infty} \log ||s||_{v}.$$
Denote $\Sha (A/K) = \ker (H^1({\mathrm{Gal}}({\overline{K}}/K), A_K) \rightarrow \prod_v H^1({\mathrm{Gal}}({\overline{K}}_v/K_v), A_{K_v}))$ the [*Tate-Shafarevich group*]{} of $A/K$. Recall that $\Sha (A/K)$ measures the obstruction to the Hasse principle. In fact, a non trivial element of $\Sha (A/K)$ corresponds to a homogeneous space which has $K_{v}$-rational points for every place $v$, but no $K$-rational points. Although it is not easy to construct such a variety, the finiteness of $\Sha (A/K)$ is only conjectural. K. Rubin [@rubin1987] gave the first examples of elliptic curves for which it can be proved that the Tate-Shafarevich group is finite (for example, for elliptic curves defined over ${\mathbb{Q}}$). See also the results of V. A. Kolyvagin [@kolyvagin1989]. We will suppose throughout the text that $\Sha (A/K)$ is finite.
Denote ${\check{A}}$ the dual abelian variety of $A$, that is ${\mathrm{Pic}}^{0}(A)$, which is also defined over $K$, and isogenous to $A$. Let $<,> : A(K) \times {\check{A}}(K) \rightarrow {\mathbb{R}}$ denote the Néron-Tate height pairing corresponding to the [*Poincaré*]{} divisor on $A\times {\check{A}}$. Denote $r = {\mathrm{rk}}(A(K))$ the rank of $A(K)$. Choose a basis $\{P_1, \ldots, P_r\}$ for the torsion free part of $A(K)$ and a basis $\{Q_1, \ldots, Q_r\}$ for the torsion free part of ${\check{A}}(K)$. The [*canonical regulator*]{} of $A$ is defined by $${\mathrm{Reg}}(A) = \det((<P_{i}, Q_{j}>)_{1\leq i\leq r;
1\leq j\leq r}).$$ It is a non zero real number and does not depend on the choice of the basis.
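When $A$ is an elliptic curve, ${\check{A}}\simeq A$, the diagonal entries of the pairing matrix are the Néron-Tate heights themselves ($<P,P> = \hat{h}(P)$), and the off-diagonal entries follow from the parallelogram law. A minimal numerical sketch of the rank-2 Gram determinant, with hypothetical height values that do not come from any actual curve:

```python
# Regulator as the Gram determinant of the Neron-Tate pairing,
# in the elliptic-curve case.  All height values are hypothetical.

def pairing(hP, hQ, hPQ):
    """<P, Q> = (h(P+Q) - h(P) - h(Q)) / 2, from the parallelogram law."""
    return (hPQ - hP - hQ) / 2.0

def regulator_rank2(gram):
    """Determinant of a 2x2 Gram matrix, written out by hand."""
    (a, b), (c, d) = gram
    return a * d - b * c

# hypothetical data: h(P1) = 0.7, h(P2) = 1.1, h(P1 + P2) = 2.4
h1, h2, h12 = 0.7, 1.1, 2.4
g12 = pairing(h1, h2, h12)                     # off-diagonal entry <P1, P2>
reg = regulator_rank2([[h1, g12], [g12, h2]])
```

By Cauchy-Schwarz, the Gram determinant of independent points is positive, consistent with the non-vanishing of the regulator recalled above.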
Denote $A(K)_{tors}$ and ${\check{A}}(K)_{tors}$ the torsion subgroups of the Mordell-Weil groups. Remark that $|A(K)_{tors}|$ and $|{\check{A}}(K)_{tors}|$ are always non zero.
On the Birch and Swinnerton-Dyer conjecture {#section-lemmes}
===========================================
We can now state the celebrated conjecture of B. J. Birch and H. P. F. Swinnerton-Dyer (see [@birch-swinnerton-dyer] for the case of elliptic curves and [@manin] and [@gross] for a general formulation).
\[bsd\] Let $A$ be an abelian variety defined over a number field $K$.
1. The $L$-series $L(A/K,s)$ has an analytic continuation to the entire complex plane.
2. ${\mathrm{ord}}_{s=1}L(A/K,s) = {\mathrm{rk}}(A(K))$.
3. The leading coefficient $L^{\star}(A/K, 1) = \lim_{s\to1}\frac{L(A/K,s)}{(s-1)^{{\mathrm{rk}}(A(K))}}$ in the Taylor expansion of $L(A/K,s)$ at $s=1$ satisfies $$\label{formula-bsd}
L^{\star}(A/K, 1) = | \Sha (A/K)| \cdot {\mathrm{Reg}}(A(K)) \cdot |A(K)_{tors}|^{-1} \cdot |{\check{A}}(K)_{tors}|^{-1} \cdot c_{\infty}(A/K) \cdot \prod _{v \in M_K^0}c_v \cdot D_K^{-g/2}.$$
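Formula (\[formula-bsd\]) can be solved for $|\Sha(A/K)|$ once the other invariants are known. A small sketch of this bookkeeping, with purely hypothetical numerical values (not taken from any actual curve):

```python
# Solving the BSD formula (formula-bsd) for |Sha(A/K)|.
# Every numerical value below is hypothetical, for illustration only.

def sha_from_bsd(Lstar, reg, tors, tors_dual, c_inf, c_fin, DK, g):
    """|Sha| = L* * |tors| * |tors_dual| * DK^{g/2} / (Reg * c_inf * prod c_v)."""
    return Lstar * tors * tors_dual * DK ** (g / 2) / (reg * c_inf * c_fin)

# hypothetical elliptic curve over Q: g = 1, DK = 1
sha = sha_from_bsd(Lstar=0.6, reg=0.3, tors=2, tors_dual=2,
                   c_inf=2.0, c_fin=4.0, DK=1, g=1)
```

For an elliptic curve with finite Tate-Shafarevich group, the order is known to be a perfect square, which provides a sanity check on such a computation.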
There is some evidence for the truth of this conjecture. In particular, for an elliptic curve defined over ${\mathbb{Q}}$ satisfying ${\mathrm{ord}}_{s=1}L(E/{\mathbb{Q}},s) = 0$, conditions 1. and 2. are proved, as is a relation, similar to condition 3., between the value of $L(E/{\mathbb{Q}},1)$ and the order of $\Sha(E/{\mathbb{Q}})$. See the results of [@coates-wiles1977], [@gross-zagier1986], [@rubin1987], [@kolyvagin1989].
In this section we bound the product $|\Sha(A/K)| \cdot {\mathrm{Reg}}(A/K)$. To do so, formula (\[formula-bsd\]) of the BSD-conjecture suggests bounding each of the remaining terms. This is done in the following lemmas.
The numbers $c_v$, for every non-archimedean place $v$, are positive integers, hence bounded from below by 1.
To bound the leading coefficient $L^{*}(A/K, 1)$, Yu. Manin suggested using the functional equation and an explicit equation for the derivative [@manin formula (48)]. Here we use the functional equation and the classical convexity argument of the Phragmén-Lindelöf principle, as in [@goldfeld-szpiro]. We conclude by applying the Cauchy inequality in two different ways, in order to obtain two different kinds of bounds.
\[coeff-dominant\]
Let $A/K$ be an abelian variety of dimension $g$ satisfying Conjecture \[funct-eq\]. Let $r= {\mathrm{ord}}_{s=1}L(A/K,s)$ and $\mathcal{F} = N_{K/{\mathbb{Q}}}(\mathcal{F}_{A/K})$. Then the leading coefficient of the $L$-series of $A/K$ at $s=1$ satisfies the following bounds: $$\label{bound-coeff-rank}
|L^{\star}(A/K, 1)| \leq \left(9/2\pi\right)^{g[K:{\mathbb{Q}}]}\sqrt{\mathcal{F}} \cdot D_{K}^{g}$$ $$\label{bound-coeff-cond}
|L^{\star}(A/K, 1)| \leq 2^{r} \cdot 4^{g[K:{\mathbb{Q}}]}\cdot {\mathcal{F}}^{\frac{1}{4}}\cdot D_{K}^{\frac{g}{2}}\cdot (\log ({\mathcal{F}}\cdot D_{K}^{2g}))^{2g[K:{\mathbb{Q}}]}.$$
\[remark-rank\] [*[Conjecturally, the order of the $L$-series at 1, here denoted by $r$, equals the rank of $A(K)$. The bound (\[bound-coeff-rank\]) is independent of $r$, while (\[bound-coeff-cond\]) has a term $2^r$. Concerning the rank, T. Ooe and J. Top [@ooe-top] proved the following bound: $$\label{bound.ooe-top}
{\mathrm{rk}}(A(K)) \leq \gamma_1 \log {\mathcal{F}}+ \gamma_2 \log D_K + \gamma_3,$$ where $\gamma_1, \gamma_2$ and $\gamma_3$ are positive real numbers depending only on $g$ and $[K:{\mathbb{Q}}]$ and are explicitly given. Using (\[bound.ooe-top\]), we deduce from (\[bound-coeff-cond\]) a bound independent of the rank, whose growth in ${\mathcal{F}}$ and $D_K$ is: $${\mathcal{F}}^{\frac{5}{4}}\cdot D_K^{\frac{g}{2} +1}\cdot (\log ({\mathcal{F}}\cdot D_K^{2g}))^{2g[K:{\mathbb{Q}}]}.$$ With respect to the conductor, the bound (\[bound-coeff-rank\]) is of better quality than this last one. As for the dependence on the discriminant $D_K$, the bound (\[bound-coeff-rank\]) has a better dependence than this last bound if the dimension $g$ is 1 or 2. In [@bosser-surroca] we are concerned with elliptic curves and we are interested in the dependence on $D_K$: the bound (\[bound-coeff-rank\]) is used therein. (However, it is expected that ${\mathrm{rk}}(A(K)) \ll \frac{\log {\mathcal{F}}}{\log \log {\mathcal{F}}}$. This would give, using (\[bound-coeff-cond\]), a bound for the leading coefficient of order $${\mathcal{F}}^{\frac{1}{4} + \epsilon({\mathcal{F}})}\cdot D_K^{\frac{g}{2} + 1 + \epsilon'(D_K)},$$ where $\epsilon$ and $\epsilon'$ depend on $g$ and $[K:{\mathbb{Q}}]$ and tend to $0$ when ${\mathcal{F}}$ tends to infinity and, respectively, when $D_K$ tends to infinity.) From now on, we will use the bound (\[bound-coeff-rank\]) in the one-dimensional case and the bound (\[bound-coeff-cond\]) otherwise. ]{}*]{}
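The comparison made in this remark can be checked numerically. The sketch below takes $g = 1$ and $K = {\mathbb{Q}}$ (so $D_K = 1$ and $[K:{\mathbb{Q}}] = 1$) and evaluates both bounds; for fixed small rank, (\[bound-coeff-cond\]) eventually beats (\[bound-coeff-rank\]) as the conductor grows:

```python
import math

def bound_rank(F, g=1, d=1, DK=1):
    """Right-hand side of (bound-coeff-rank): (9/(2*pi))^{g*d} * sqrt(F) * DK^g."""
    return (9 / (2 * math.pi)) ** (g * d) * math.sqrt(F) * DK ** g

def bound_cond(F, r, g=1, d=1, DK=1):
    """Right-hand side of (bound-coeff-cond):
    2^r * 4^{g*d} * F^{1/4} * DK^{g/2} * (log(F * DK^{2g}))^{2*g*d}."""
    return (2 ** r * 4 ** (g * d) * F ** 0.25 * DK ** (g / 2)
            * math.log(F * DK ** (2 * g)) ** (2 * g * d))

# small conductor: the sqrt(F) bound is smaller ...
small = bound_rank(10 ** 6) < bound_cond(10 ** 6, r=1)
# ... large conductor: the F^{1/4} bound wins
large = bound_cond(10 ** 20, r=1) < bound_rank(10 ** 20)
```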
[*Proof of Lemma \[coeff-dominant\].*]{} Let us consider the abelian variety $A' = {\mathrm{Res}}^{K}_{{\mathbb{Q}}} A$ over ${\mathbb{Q}}$ which is obtained from $A$ by restriction of scalars (see [@milne]). Over ${\mathbb{C}}$ we then have the decomposition $A' \simeq \prod_{\sigma} A_{\sigma}({\mathbb{C}})$, where the product runs over all the embeddings $\sigma : K \hookrightarrow {\mathbb{C}}$ and $A_{\sigma}$ is the abelian variety obtained by action of $\sigma$ on $A$. Then $A'$ is of dimension $g' = g [K:{\mathbb{Q}}]$ and $$\label{A'-A}
L(A'/{\mathbb{Q}}, s) = L(A/K,s) \textrm{ and } {\mathcal{F}}_{A'/{\mathbb{Q}}} = N_{K/{\mathbb{Q}}}({\mathcal{F}}_{A/K})\cdot D_{K}^{2g}.$$
We will use the fact that, since the abelian variety $A'$ is defined over ${\mathbb{Q}}$, we have [@fontaine.pas.var.ab.Z] $$\label{N>10^{g}}
{\mathcal{N}}= {\mathcal{F}}_{A'/{\mathbb{Q}}} > 10^{g'}.$$
The Hasse-Weil bound gives $P_{A', p}(T) = \prod_{i= 1}^{\rho}(1-\alpha_{i}T)$, where $\rho = \deg (P_{A', p}) \leq 2g'$ and $|\alpha_{i}| \leq \sqrt{p}$. Then, if we write $s = \sigma + i \tau$, with $\sigma = \Re(s) > \frac{3}{2}$, the local factor of the Euler product of the $L$-series satisfies $$|P_{A',p}(p^{-s})|^{-1} \leq (1-p^{\frac{1}{2} - \sigma})^{-2g'},$$ and hence $$|L(A'/{\mathbb{Q}}, s)| \leq \zeta(\sigma-\frac{1}{2})^{2g'}.$$ Let $\sigma = \frac{3}{2} + \epsilon$, with $\epsilon > 0$. Then $$|\Lambda(A'/{\mathbb{Q}}, s)| = |\Lambda(A'/{\mathbb{Q}}, \frac{3}{2} + \epsilon + i \tau)| \leq {\mathcal{N}}^{\frac{3}{4} + \frac{\epsilon}{2}}\cdot (2 \pi)^{-g'(\frac{3}{2} + \epsilon)} \cdot \Gamma(\frac{3}{2} + \epsilon)^{g'} \cdot |\zeta(1+\epsilon)|^{2g'}.$$ Since $|\zeta(1+\epsilon)| \leq (1+\frac{1}{\epsilon})$ for $\epsilon >0$, we obtain $$\label{borne-lambda}
|\Lambda(A'/{\mathbb{Q}}, \frac{3}{2} + \epsilon + i \tau)| \leq {\mathcal{N}}^{\frac{3}{4} + \frac{\epsilon}{2}}\cdot (2 \pi)^{-g'(\frac{3}{2} + \epsilon)} \cdot \Gamma(\frac{3}{2} + \epsilon)^{g'} \cdot (1+\frac{1}{\epsilon})^{2g'}.$$
Using the functional equation, that is, Conjecture \[funct-eq\], the same bound (\[borne-lambda\]) is valid for $|\Lambda(A'/{\mathbb{Q}}, \frac{1}{2} - \epsilon - i \tau)|$. The Phragmén-Lindelöf theorem implies that the bound (\[borne-lambda\]) is still valid for $s$ with real part $\sigma$ satisfying $\frac{1}{2}- \epsilon \leq \sigma \leq \frac{3}{2} + \epsilon$. Applying Cauchy's inequality in the disc $\mathcal{D}(1, \frac{1}{2} + \epsilon)$, we obtain $$L^{*}(A'/{\mathbb{Q}}, 1) = \frac{(2\pi)^{g'}}{\sqrt{{\mathcal{N}}}} \frac{\Lambda^{(r)}(A'/{\mathbb{Q}}, 1)}{r!}
\leq \frac{(2\pi)^{g'}}{\sqrt{{\mathcal{N}}}} \frac{1}{(\frac{1}{2} + \epsilon)^{r}} \max_{s \in \mathcal{D}(1, \frac{1}{2} + \epsilon)} \Lambda(A'/{\mathbb{Q}}, s).$$ The upper bound (\[borne-lambda\]) gives $$L^{*}(A'/{\mathbb{Q}}, 1) \leq \frac{1}{(\frac{1}{2} + \epsilon)^{r}} \cdot (2\pi)^{-g'\left(\frac{1}{2}+\epsilon \right)} \cdot {\mathcal{N}}^{\frac{1}{4}+ \frac{\epsilon}{2}} \cdot \Gamma\left(\frac{3}{2} +\epsilon \right)^{g'} \cdot \left(1+\frac{1}{\epsilon}\right)^{2g'}.$$ To prove (\[bound-coeff-rank\]), we choose $\epsilon = \frac{1}{2}$ and obtain $$L^{*}(A'/{\mathbb{Q}}, 1) \leq \left(\frac{9}{2\pi}\right)^{g'}\cdot \sqrt{{\mathcal{F}}_{A'/{\mathbb{Q}}}}.$$ To prove (\[bound-coeff-cond\]), we take $\epsilon = \frac{2}{\log {\mathcal{N}}}$. Thus $(\frac{1}{2} + \epsilon)^{-r} \leq 2^{r}$ and ${\mathcal{N}}^{\frac{1}{4}+ \frac{\epsilon}{2}} = e\cdot {\mathcal{N}}^{\frac{1}{4}}$. Using the lower bound (\[N>10\^[g]{}\]) for the conductor ${\mathcal{N}}$ of $A'/{\mathbb{Q}}$, we obtain $1+\frac{1}{\epsilon} \leq \log {\mathcal{N}}$ and $ \Gamma\left(\frac{3}{2} +\epsilon \right) \leq \left(\frac{3}{2}\sqrt{\pi}\right) < 3$. To prove the last inequality, note that $\Gamma\left(\frac{3}{2} + \epsilon\right) = \left(\frac{1}{2} + \epsilon\right) \Gamma\left(\frac{1}{2} + \epsilon \right) \leq \frac{3}{2} \sqrt{\pi}$, because $\frac{1}{2} + \epsilon \in [\frac{1}{2}, \frac{3}{2}]$ and then $\Gamma \left(\frac{1}{2} + \epsilon \right) \leq \Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}$. Moreover $e\cdot 3^{g'}\cdot (2\pi)^{-g'(1/2 +\epsilon)} = e \cdot \left(\frac{3}{(2\pi)^{1/2 + \epsilon}}\right)^{g'} \leq e \cdot (\frac{6}{5})^{g'} \leq 4^{g'}$, because $\frac{1}{2} < \frac{1}{2} + \epsilon < \frac{3}{2}$. This gives $$L^{*}(A'/{\mathbb{Q}}, 1) \leq 2^{r} \cdot 4^{g'}\cdot {\mathcal{F}}_{A'/{\mathbb{Q}}}^{\frac{1}{4}}\cdot (\log {\mathcal{F}}_{A'/{\mathbb{Q}}})^{2g'}.$$ We conclude in both cases by applying (\[A'-A\]). $\Box$
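The elementary estimates used in the two choices of $\epsilon$ above are easy to confirm numerically. The following script is an illustration only (all names in it are ours); it checks the three inequalities invoked in the proof.

```python
from math import gamma, pi, sqrt, e, log

# Gamma(3/2 + eps) <= (3/2) * sqrt(pi) < 3 for 0 < eps <= 1
for k in range(1, 101):
    eps = k / 100
    assert gamma(1.5 + eps) <= 1.5 * sqrt(pi) < 3

# with eps = 2 / log(N): N^(eps/2) equals e, and 1 + 1/eps <= log(N)
# as soon as log(N) >= 2, which holds because N > 10^{g'} >= 10
for N in (11, 1000, 10**7):
    eps = 2 / log(N)
    assert abs(N ** (eps / 2) - e) < 1e-9
    assert 1 + 1 / eps <= log(N)

# e * (3 / (2*pi)^(1/2 + eps))^{g'} <= 4^{g'} for every g' >= 1;
# the worst case is eps -> 0, i.e. the factor 3 / (2*pi)^{1/2}
for gp in range(1, 12):
    assert e * (3 / (2 * pi) ** 0.5) ** gp <= 4 ** gp
```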
In order to relate the Faltings’ height with the archimedean local periods, we need some preliminaries. For $v$ complex, the local period $c_{v}$ is essentially the norm $||\omega||_{v}$ of $\omega$, while for $v$ real, it is somewhat more delicate to link the local period $c_{v}$ with the norm $||\omega||_{v}$. We fix an archimedean place $v$ of $K$. Let $(\gamma_{1,v}, \ldots, \gamma_{2g,v})$ be a basis of the integral homology $H = H_1(A(\overline{K_{v}}), {\mathbb{Z}})$ of $A$. Choose $\gamma_{1,v}, \ldots, \gamma_{g,v}$ such that $\gamma_{1,v}, \ldots, \gamma_{g,v}$ generate the part of $H$ fixed by complex conjugation. Let $$\Omega_{1,v} = \left(\int_{\gamma_{i,v}} \omega_{j}\right)_{1\leq i \leq g} {\hspace{0,2cm}}\textrm{and} {\hspace{0,2cm}}\Omega_{2,v} = \left(\int_{\gamma_{i,v}} \omega_{j} \right)_{g+1\leq i \leq 2g},$$ where $j$ runs over $\{1, \ldots, g\}$, be the period matrices associated to $\gamma_{1,v}, \ldots,\gamma_{2g,v}$. Moreover, choose $\gamma_{1,v}, \ldots, \gamma_{2g,v}$ such that $$\tau_{v} = \Omega_{1,v}^{-1}\Omega_{2,v}$$ is a symmetric matrix in a fundamental domain. Then $\Im(\tau_{v})$ is positive definite, and satisfies $$\label{imto}
\Im(\tau_{v 1,1}) \leq \ldots \leq \Im(\tau_{v g,g}), {\hspace{0,2cm}}\Im(\tau_{v i,i}) \geq \frac{\sqrt{3}}{2}
\textrm{ and } |\Im(\tau_{v i,j})| \leq \frac{1}{2} \Im(\tau_{v i,i}).$$ Let $\Lambda_{v} = \Omega_{1,v} {\mathbb{Z}}^{g}+ \Omega_{2,v} {\mathbb{Z}}^{g}$ be the associated lattice. Choose an isomorphism over ${\mathbb{C}}$ $$\begin{array}{rcl}
\phi : {\mathbb{C}}^{g}/\Lambda_{v} & \rightarrow & A(\overline{K_{v}}) \end{array}$$ such that the inverse function of $\phi$ maps the invariant differential $\eta$ to $dz$: $\phi^{*}(\eta) = dz$.
\[local-falt\] The archimedean local factor satisfies $$c_{\infty}(A/K) = \prod_{v {\textrm{real}}}\frac{2^{\epsilon_v}}{\sqrt{\det \Im(\tau_{v})}} \cdot H_{Falt}(A/K)^{-[K:{\mathbb{Q}}]},$$ where $2^{\epsilon_v} = {\mathrm{card}}(A_{v}({\mathbb{R}}) : A_{v}({\mathbb{R}})^{0})$ is the number of real components of the variety $A_{v}$ obtained from $A$ by the action of $v$.
Before proving Lemma \[local-falt\], we deduce an upper bound and a lower bound for $c_{\infty}(A/K)$ in terms of the Faltings’ height, the degree $[K:{\mathbb{Q}}]$ and the dimension $g$.
\[local-falt-ineg\] The archimedean local factor satisfies the following inequalities $$\label{matrix}
c_{[K:{\mathbb{Q}}], g} \cdot h_{Falt}(A/K)^{-g[K:{\mathbb{Q}}]}\cdot H_{Falt}(A/K)^{-[K:{\mathbb{Q}}]} \leq c_{\infty}(A/K) \leq 2^{[K:{\mathbb{Q}}]} H_{Falt}(A/K)^{-[K:{\mathbb{Q}}]},$$ where $c_{[K:{\mathbb{Q}}], g}$ depends at most on the degree $[K:{\mathbb{Q}}]$ and the dimension $g$. When $g=1$, one may take $c_{[K:{\mathbb{Q}}], 1} = (3 [K:{\mathbb{Q}}]^2)^{-[K:{\mathbb{Q}}]}$.
[*Proof.*]{} Since $\epsilon_v$ equals 0 or 1, the product of the number of real components satisfies: $1\leq \prod2^{\epsilon_v}\leq 2^{[K:{\mathbb{Q}}]}$. On the other hand, $\Im(\tau)$ satisfies (\[imto\]). Then $$\prod_{v \in M_{K}^{\infty}{\textrm{real}}}\frac{1}{\sqrt{\det \Im(\tau_{v})}}\cdot H_{Falt}(A/K)^{-[K:{\mathbb{Q}}]} \leq c_{\infty}(A/K) \leq 2^{[K:{\mathbb{Q}}]} H_{Falt}(A/K)^{-[K:{\mathbb{Q}}]}.$$ For $g\geq 2$, using the Matrix Lemma of [@masser.LNM1290 p. 126] (see also [@sinnou-patrice.helvet Lemma 6.7]), we obtain $$\label{matrix2}
c_{[K:{\mathbb{Q}}], g} \cdot h_{Falt}(A/K)^{-g[K:{\mathbb{Q}}]}\cdot H_{Falt}(A/K)^{-[K:{\mathbb{Q}}]} \leq c_{\infty}(A/K).$$
For $g=1$, using the arithmetic-geometric inequality and the Cauchy-Schwarz inequality, we obtain $$\prod_{v|\infty} \sqrt{\Im (\tau_v)} \leq \frac{1}{\sharp \{v|\infty \}} \left(\sum_{v|\infty}\sqrt{\Im (\tau_v)} \right)^{\sharp\{v|\infty \}} \leq
\frac{1}{\sharp\{v|\infty \}} \left(\sharp \{v|\infty \} \sum_{v|\infty} {\Im (\tau_v)} \right)^{\sharp\{v|\infty \}}$$ $$\label{prodimto}
\leq [K:{\mathbb{Q}}] ^{[K:{\mathbb{Q}}]} \left(\sum_{v|\infty} {\Im (\tau_v)} \right)^{[K:{\mathbb{Q}}]}.$$ We now use the formula for the height $h = h_{Falt}(E/K)$ of [@cornell-silverman Prop. 1.1 of Chap. X]: $$12[K:{\mathbb{Q}}] h = \log N_{K/{\mathbb{Q}}} \Delta_{E/K} - \sum_{v | \infty} 6 n_v \log \Im (\tau_v) - \sum_{v|\infty} n_v \log |\Delta(\tau_v)|$$ and, from the exercise on page 256 of [*loc. cit.*]{}, the estimate $$\log |\Delta(\tau_v)| \leq -2 \pi \Im (\tau_v) + \log \frac{e^{1/9}}{(2\pi)^{12}} \leq -2 \pi \Im (\tau_v).$$ Since $\log N_{K/{\mathbb{Q}}} \Delta_{E/K} \geq 0$ and $\log \Im (\tau_v) \leq \frac{1}{e} \Im (\tau_v)$, we obtain $$\sum_{v|\infty} n_v \Im (\tau_v) \leq \frac{12}{2\pi - 6/e} [K:{\mathbb{Q}}] h \leq 3[K:{\mathbb{Q}}] h.$$ Putting this inequality into (\[prodimto\]), we obtain $\prod_{v|\infty} \sqrt{\Im (\tau_v)} \leq (3[K:{\mathbb{Q}}]^2 h )^{[K:{\mathbb{Q}}]}$ and we can conclude.
$\Box$
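Two numerical constants used in the $g = 1$ case above can be checked directly; the script below is an illustrative check of ours, not part of the proof.

```python
from math import pi, e, log

# 12 / (2*pi - 6/e) <= 3: the constant absorbed into
# sum_v n_v Im(tau_v) <= 3 [K:Q] h
assert 12 / (2 * pi - 6 / e) <= 3

# log(x) <= x/e for all x > 0 (equality at x = e),
# used to absorb log Im(tau_v) into Im(tau_v)
for k in range(1, 3001):
    x = k / 100
    assert log(x) <= x / e + 1e-12
```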
For the proof of Lemma \[local-falt\] we use the two following lemmas.
\[omega-omega1\]For an archimedean place $v$ we have $$\label{omega1}
|\eta|_{v} = |\Omega_{1,v}| \sqrt{\det\Im (\tau_{v})}.$$
[*Proof.*]{} Using the definition of the metric and the inverse map of $\phi$ we compute $$|\eta|_{v}^{2} = (i/2)^{g} \int _{A(\overline{K_{v}})}\eta \wedge \overline {\eta} = (i/2)^{g}\int_{{\mathbb{C}}^{g}/\Lambda_{v}} dz \wedge \overline{dz} = (i/2)^{g}\int_{{\mathbb{C}}^{g}/\Omega_{1,v}({\mathbb{Z}}^{g} + \tau_{v} {\mathbb{Z}}^{g})} dz \wedge \overline{dz}$$ $$= (i/2)^{g}\int_{{\mathbb{C}}^{g}/{\mathbb{Z}}^{g} + \tau_{v} {\mathbb{Z}}^{g}}|\det \Omega_{1,v}|^{2} dz' \wedge \overline{dz'} = (i/2)^{g} |\det \Omega_{1,v}|^{2} \int_{{\mathbb{C}}^{g}/{\mathbb{Z}}^{g} + \tau_{v} {\mathbb{Z}}^{g}} (-2i dx \wedge dy)$$ $$= |\det \Omega_{1,v}|^{2} \det \Im(\tau_{v}).$$ For the last equality we use that $\int_{{\mathbb{C}}^{g}/{\mathbb{Z}}^{g} + \tau_{v} {\mathbb{Z}}^{g}} dx \wedge dy$ is the area of a fundamental domain for ${\mathbb{C}}^{g}/{\mathbb{Z}}^{g} + \tau_{v} {\mathbb{Z}}^{g}$, which is $(1/2^g) \det |\overline{\tau_v} - \tau_v| = \det \Im (\tau_v)$. $\Box$
\[produit-cv\] The product of the local periods satisfies the following equality $$\label{prod-cv}
\prod_{v|\infty} c_{v} = \prod_{v \,\textrm{real}}\frac{ 2^{\epsilon_v} }{\sqrt{\det \Im(\tau_{v}})}\cdot \prod_{v|\infty} ||\eta||_{v}.$$
[*Proof.*]{} For a real place $v$ we can prove ([@manin lemma 8.8]) that $$c_{v} = \int _{A({\mathbb{R}})} |\eta| \mu_v^g = 2^{\epsilon_v} \cdot \left|\det \left(\int_{\gamma_{i,v}} \omega_{j}\right)_{1\leq i,j\leq g} \right| = 2^{\epsilon_v}\cdot |\det \Omega_{1,v}|.$$ From Lemma \[omega-omega1\] we obtain, for $v$ real, $$c_{v} = \frac{2^{\epsilon_v}}{\sqrt{\det \Im(\tau_{v})}} |\eta|_{v}= \frac{2^{\epsilon_v}}{\sqrt{\det \Im(\tau_{v})}} ||\eta||_{v}.$$ For $v$ complex, we have $$||\eta||_{v} = |\eta|_{v}^{2} = (i/2)^g \int_{A({\mathbb{C}})}\eta \wedge \overline{\eta} = (i/2)^g \int_{A({\mathbb{C}})}(-2i)^g dx \wedge dy = \int_{A({\mathbb{C}})} dx \wedge dy,$$ where $dx \wedge dy$ is the Lebesgue measure on $A({\mathbb{C}})$. Since $|\eta| \mu_v^g$ is also the Lebesgue measure on $A({\mathbb{C}})$, then $$||\eta||_{v} = \int_{A({\mathbb{C}})}|\eta| \mu_v^g = c_v.$$ $\Box$
[*Proof of Lemma \[local-falt\].*]{} To compute the degree of ${\omega_{\mathcal{A}/{\mathcal{O}_{K}}}}$ (which we have identified with $H^{0}({\mathcal{A}}, {\Omega_{\mathcal{A}/{\mathcal{O}_{K}}}^{g}}) = \eta {\mathfrak{a}}$, and since, by the product formula, $\sum_{v \in M_{K}}\log ||\eta||_{v} = \sum_{v \in M_{K}}\log ||k \eta||_{v} $, for all $k$ in $K$), we choose the invariant differential $\eta$: $$[K:{\mathbb{Q}}] h_{Falt}(A/K)=\deg_{Ar}({\omega_{\mathcal{A}/{\mathcal{O}_{K}}}}, ||.||) = \log |\eta {\mathfrak{a}}/\eta {\mathcal{O}_{K}}| - \sum _{v |\infty} \log ||\eta||_{v}.$$ Since $|\eta {\mathfrak{a}}/\eta {\mathcal{O}_{K}}| = | {\mathfrak{a}}/ {\mathcal{O}_{K}}| = |{\mathcal{O}_{K}}/{\mathfrak{a}}^{-1}| = N_{K/{\mathbb{Q}}}({\mathfrak{a}}^{-1})$, then $$[K:{\mathbb{Q}}] h_{Falt}(A/K) = -\log N_{K/{\mathbb{Q}}}({\mathfrak{a}}) -\log(\prod_{v| \infty}||\eta||_{v}).$$ We conclude applying Lemma \[produit-cv\]. $\Box$
In the one-dimensional case, using the results of L. Merel [@merel] and P. Parent [@parent.torsion1999] we can obtain a uniform bound for the cardinality of the torsion part of the Mordell-Weil group. In fact, Merel’s result tells us which prime numbers can divide $|E(K)_{tors}|$, and Parent’s result gives us a bound for the powers of these primes, independent of the power.
\[lemma.torsion.ellip\] For every integral number $d \geq 1$ there is a positive number $B({d})$ such that for every number field $K$ with $[K:{\mathbb{Q}}] \leq d$ and every elliptic curve $E$ defined over $K$ we have $$\label{merel'sbound}
|E(K)_{tors}| \leq B({d}).$$ One may take $B(d) = (129 \cdot (5^d-1)(3d)^6)^{\frac{(1+3^{d/2})^8}{2\log (1+3^{d/2})}}$.
For the convenience of the reader we give the details of the proof of Lemma \[lemma.torsion.ellip\]. Before the proof, we state an analytic lemma which will be used therein.
\[lemma.analytic\] For $n \geq 2$, denote by $p_1, p_2, \ldots, p_n$ the first $n$ prime numbers. As usual, denote $\theta(p_n)= \sum_{i=1}^n \log p_i$. For every $n\geq 2$, one has $$n \leq 4\, \frac{\theta(p_n)}{\log \theta(p_n)}.$$
[*Proof.*]{} Remark (see, e.g., [@ellison page 25]) that for every $n\geq 1$, one has $p_n \geq n \log n$. Furthermore, for $n \geq 2$, one has $\sum_{i=1}^n \log i \geq \int_1^n \log x dx = n \log n -n +1$ and $\sum_{i=2}^n \log (\log i) >\log \log 2$. From these remarks we deduce that, for $n\geq 2$, $$\theta(p_n) = \log 2 + \sum_{i=2}^n \log p_i > \log 2 + \sum_{i=2}^n \log (i \log i) >\log 2 + n \log n - n + 1 + \log \log 2.$$ Let $n\geq 4$. Then $\theta(p_n) > \frac{1}{2}\,n \log n \geq e$ and, since for $x \geq e$ the function $x \mapsto \frac{x}{\log x}$ is increasing, then $\frac{\theta(p_n)}{\log \theta(p_n)} \geq \frac{\frac{1}{2}\,n \log n}{\log(\frac{1}{2}\,n\log n)}$. Moreover, $\frac{\log n + \log \log n - \log 2}{\log n} = 1 + \frac{\log \log n}{\log n} - \frac{\log 2}{\log n} \leq 1 + \frac{1}{e}$. Thus $$n \leq 2\,\left(1 + \frac{1}{e}\right) \frac{\theta(p_n)}{\log \theta(p_n)}.$$ We easily check that for $n = 2$ and $3$ one also has $n \leq 4 \frac{\theta(p_n)}{\log \theta(p_n)}$. (For $n = 1$ the right-hand side is negative, since $\theta(p_1) = \log 2 < 1$.) $\Box$
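A brute-force check of the lemma over the first few hundred primes (our script; note that for $n = 1$ one has $\theta(p_1) = \log 2 < 1$, so the right-hand side is negative and the check starts at $n = 2$):

```python
from math import log

def primes_up_to(limit):
    # simple sieve of Eratosthenes
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

theta = 0.0
for n, p in enumerate(primes_up_to(2000), start=1):
    theta += log(p)                 # theta(p_n) = log(p_1 * ... * p_n)
    if n >= 2:
        assert n <= 4 * theta / log(theta), n
```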
[*Proof of Lemma \[lemma.torsion.ellip\].*]{} By a result of L. Merel, if there is an element in $E(K)_{tors}$ of order a prime number $p$, then $p \leq m(d)$. The theorem of [@merel] gives $m(d) = d^{3d^2}$; but this bound was improved by J. Oesterlé (in an unpublished article) to $m(d) = (1+3^{d/2})^2$. We will use here Oesterlé’s bound. Let us denote by $p_1 < ... < p_m$ the first $m$ prime numbers, where $m$ satisfies $p_m \leq m(d)$ and $p_{m+1} > m(d)$. Then, for $i \in \{1, \ldots , m\}$, there exist some $n_i \geq 0$ such that $|E(K)_{tors}| \leq p_1^{n_1} ... p_m^{n_m}$. We have $\theta(p_m) = \log (p_1... p_m) \leq m \log m(d)$. Applying Lemma \[lemma.analytic\], we deduce that $$m \leq
\frac{m(d)^4}{\log m(d)} = \frac{(1+3^{d/2})^8}{2\log (1+3^{d/2})}.$$ From [@parent.torsion1999 Theorem 1.2], we know that, for every $p \in \{p_1, \ldots, p_m\}$ and every nonzero integer $n$, $$p^n \leq c(d) = 129 \cdot (5^d-1)(3d)^6.$$ (In fact, Parent’s result is even more precise; it gives better bounds for $p^n$ depending on whether $p$ equals 2, 3, or neither.) We conclude that $$|E(K)_{tors}| \leq c(d)^m \leq (129 \cdot (5^d-1)(3d)^6)^{\frac{(1+3^{d/2})^8}{2\log (1+3^{d/2})}}.$$ $\Box$
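To get a feel for the size of $B(d)$, one can compute its decimal logarithm; the helper below (`log10_B` is our name) only illustrates how rapidly the bound grows. For comparison, the sharp uniform bound over ${\mathbb{Q}}$, Mazur's theorem, gives $|E({\mathbb{Q}})_{tors}| \leq 16$.

```python
from math import log, log10

def log10_B(d):
    # log10 of B(d) = (129 * (5^d - 1) * (3d)^6) ** ((1 + 3^{d/2})^8 / (2 log(1 + 3^{d/2})))
    base = 129 * (5 ** d - 1) * (3 * d) ** 6
    t = 1 + 3 ** (d / 2)
    return (t ** 8 / (2 * log(t))) * log10(base)

# already over Q (d = 1) the bound has thousands of decimal digits
assert 8000 < log10_B(1) < 9000
assert log10_B(2) > log10_B(1)
```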
In higher dimensions we no longer have such a uniform bound. (A bound $B({d, g})$ depending only on the degree and the dimension is conjectured.) However, we can obtain the following bound, which is sufficient for our purpose.
\[lemma-torsion\] There exists a positive number $C_{tors}$ such that for every abelian variety $A/K$ we have $$|A(K)_{tors}| \cdot |{\check{A}}(K)_{tors}| \leq (C_{tors} \cdot \log N_{K/{\mathbb{Q}}}(\mathcal{F}_{A/K}))^{4g[K:{\mathbb{Q}}]}.$$
[*Proof.*]{} Let $N = N_{K/{\mathbb{Q}}}(\mathcal{F}_{A/K})$ be the norm of the conductor of $A$. As usual, let us denote by $\omega(N)$ the number of prime numbers dividing $N$ and by $\pi(X)$ the number of prime numbers $\leq X$. By the Prime Number Theorem, there exists some absolute constant $c_1$ such that, for $X$ large enough, $\pi(X) \geq c_1 \frac{X}{\log X}$. Furthermore, there exists another absolute constant $c_2$ such that $\omega(N) \leq c_2 \frac{\log N}{\log \log N}$. Take $X = C \log N$, where $C$ is a positive number large enough to satisfy $\pi(C\log N) \geq c_1 \frac{C\log N}{\log(C\log N)}$. Then $\pi(X) -\omega(N) \geq c_1 C \frac{\log N}{\log \log N + \log C} -c_2 \frac{\log N}{\log \log N}$. Since $\frac{\log N}{\log \log N}$ tends to infinity when $N$ tends to infinity, we can always choose $C$ such that $\pi(C\log N) -\omega(N) \geq 2$. We can then take two distinct prime numbers, $p$ and $q$, coprime with $N$ and $\leq C \log N$. Let ${\mathfrak{p}}$ and ${\mathfrak{q}}$ be ideals of $K$ lying above $p$ and $q$ and denote by $v$ and $w$ the corresponding places of $K$. Since $p$ and $q$ are coprime with $N$, the ideals ${\mathfrak{p}}$ and ${\mathfrak{q}}$ do not appear in the conductor of $A$. Then $A$ has good reduction at ${\mathfrak{p}}$ and ${\mathfrak{q}}$ ([@serre-tate Theorem 1]). Denote by $A_{v}$ and $A_{w}$ the reduced varieties and by $k_{v}$ and $k_{w}$ the residual fields. Then using the injection $$A(K)_{tors} \hookrightarrow A_{v}(k_{v}) \times A_{w}(k_{w})$$ we deduce that $|A(K)_{tors}| \leq (N_{K/{\mathbb{Q}}}({\mathfrak{p}}) \cdot N_{K/{\mathbb{Q}}}({\mathfrak{q}}))^{g} \leq (pq)^{g[K:{\mathbb{Q}}]}
\leq (C \log N)^{2 g[K:{\mathbb{Q}}]}$. We proceed in the same way for $|{\check{A}}(K)_{tors}|$. Since the conductor of ${\check{A}}$ is the same as the conductor of $A$ ([@serre-tate Corollary 2]), we can conclude.
$\Box$
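The counting step — two primes coprime to $N$ and of size $O(\log N)$ — can be illustrated by brute force. In the script below (ours), the constant $C = 4$ and the range $11 \leq N \leq 10^5$ are chosen for the illustration only; the proof needs some $C$ only for $N$ large.

```python
from math import log

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

# for each N on this range, the two smallest primes p < q not dividing N
# already satisfy p, q <= 4 * log(N); the worst case is N = 30 (q = 11)
for N in range(11, 100001):
    p, q = [r for r in SMALL_PRIMES if N % r != 0][:2]
    assert q <= 4 * log(N), (N, p, q)
```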
For an abelian variety $A/K$, denote by $g$ the dimension, by $r= {\mathrm{rk}}(A(K))$ the Mordell-Weil rank, by ${\mathcal{F}}= N_{K/{\mathbb{Q}}} {\mathcal{F}}_{A/K}$ the absolute value of the norm of the conductor, by $h= h_{Falt}(A/K)$ the Faltings’ height, by $H = \exp\{h\}$ its exponential, by ${\bf \Sha}= |\Sha(A/K)|$ the order of the Tate-Shafarevich group and by $R= {\mathrm{Reg}}(A/K)$ the canonical regulator.
\[prop-sha.reg\]
Suppose that Conjecture \[funct-eq\] and Conjecture \[bsd\] hold for the abelian variety $A/K$. Then, with the notations above, $$\label{borne-sha.reg}
{\bf \Sha} \cdot R\leq c_{[K:{\mathbb{Q}}],g} \cdot 2^{r} \cdot D_{K}^{g} \cdot {\mathcal{F}}^{\frac{1}{4}} \cdot (\log {\mathcal{F}})^{4g[K:{\mathbb{Q}}]} \cdot
(\log ({\mathcal{F}}\cdot D_{K}^{2g}))^{2g[K:{\mathbb{Q}}]} \cdot (H\cdot h^g)^{[K:{\mathbb{Q}}]},$$ where $c_{[K:{\mathbb{Q}}], g}$ depends at most on the degree $[K:{\mathbb{Q}}]$ and the dimension $g$.
[*Proof.*]{} Apply to the formula (\[formula-bsd\]) of the BSD-conjecture the bound (\[bound-coeff-cond\]) of Lemma \[coeff-dominant\], Lemma \[local-falt-ineg\] and Lemma \[lemma-torsion\]. $\Box$
Using the bound (\[bound-coeff-rank\]) of Lemma \[coeff-dominant\], instead of (\[bound-coeff-cond\]), we obtain a bound independent of the rank. This bound is particularly interesting when the dimension $g$ is 1 or 2 and one is interested in the dependence on the discriminant of the number field (see Remark \[remark-rank\]).
\[prop-sha.reg-ellip\] Suppose that Conjecture \[funct-eq\] and Conjecture \[bsd\] hold for the elliptic curve $E/K$. Then $$\label{borne-sha.reg-ellip}
|\Sha(E/K)| \cdot {\mathrm{Reg}}(E/K) \leq C_{d} \cdot D_K^{\frac{3}{2}} \cdot {\mathcal{F}}^{\frac{1}{2}} \cdot (H\cdot h)^{d},$$ where $d = [K:{\mathbb{Q}}]$ and $C_{d} = \left(\frac{9}{2\pi}\right)^{d} \cdot (3 d^2)^{d} \cdot (129 \cdot (5^d-1)(3d)^6)^{\frac{(1+3^{d/2})^8}{\log (1+3^{d/2})}}$.
A similar result is obtained in [@these.remond Proposition A.2.3, Annexe A], for an elliptic curve in the case $K= {\mathbb{Q}}$.
[*Proof of Proposition \[prop-sha.reg-ellip\].*]{} Apply to the formula (\[formula-bsd\]) of the BSD-conjecture the bound (\[bound-coeff-rank\]) of Lemma \[coeff-dominant\], Lemma \[local-falt-ineg\] and Lemma \[lemma.torsion.ellip\]. $\Box$
*[ Since $\Sha(A/K)$ is conjectured to be finite, its order is at least 1, and the bounds of Propositions \[prop-sha.reg\] and \[prop-sha.reg-ellip\] are still valid for ${\mathrm{Reg}}(A/K)$ alone.]{}*
*[ If one would also like to deduce from the BSD-conjecture a lower bound for the product of the order of the Tate-Shafarevich group and the canonical regulator, one would be confronted with the problem of estimating from above the product $\prod_v c_v$ of the local numbers at the finite places, and also with the problem of giving a lower bound for $L^{*}(A/K,1)$. For the local numbers $c_v$, this could be done, under Szpiro’s conjecture, when $A$ is a Jacobian variety (see [@hindry.mordell-weil Lemma 3.5]). The question for the $L$-series also seems difficult (in the case $g=1$ and $K={\mathbb{Q}}$ see the proof of Theorem 2 of [@goldfeld-szpiro]).]{}*
Geometry of numbers {#section-geom.nbers}
===================
The Néron-Tate height $\hat{h}$ on $A(K)$ extends to a positive definite quadratic form on $A(K) \otimes_{{\mathbb{Z}}} {\mathbb{R}}$. Thus we have a lattice $A(K)/A(K)_{tors}$ sitting inside a Euclidean space $A(K) \otimes_{{\mathbb{Z}}} {\mathbb{R}}= {\mathbb{R}}^{r}$ with inner product $<,>$, and the canonical regulator ${\mathrm{Reg}}(A(K))$ is the square of the volume of the fundamental domain for the lattice. Putting together Minkowski’s theorem on the successive minima [@cassels Theorem V, Chapter VIII, section 4.3] with Lemma 8 page 135 of [@cassels], as in [@remond.sous2005 Lemma 5.1], we can choose a basis $\{P_1, \ldots, P_r\}$ for the torsion-free part of the Mordell-Weil group satisfying $\hat{h}(P_1) \leq \ldots \leq \hat{h}(P_r)$, and $$\label{minkowski}
\prod_{i=1}^{r}\hat{h}(P_{i}) \leq (r!)^4 {\mathrm{Reg}}(A(K)).$$
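As a sanity check of (\[minkowski\]) in the smallest nontrivial rank $r = 2$, one can compare the product of the two successive minima of a positive definite Gram matrix (a lower bound for $\hat{h}(P_1)\hat{h}(P_2)$ for any basis) with $(r!)^4$ times its determinant. The script and all names in it are ours, and the minima are found by naive enumeration.

```python
from itertools import product

def successive_minima_2d(G, box=12):
    # first and second minima of q(x, y) = (x, y) G (x, y)^T over nonzero (x, y) in Z^2
    vals = sorted(
        (G[0][0]*x*x + 2*G[0][1]*x*y + G[1][1]*y*y, (x, y))
        for x, y in product(range(-box, box + 1), repeat=2)
        if (x, y) != (0, 0)
    )
    q1, v1 = vals[0]
    # second minimum: smallest value on a vector independent of v1
    q2 = next(q for q, v in vals if v1[0]*v[1] - v1[1]*v[0] != 0)
    return q1, q2

# the regulator is the determinant of the Gram matrix of the pairing; r = 2
for G in ([[2, 1], [1, 2]], [[1, 0], [0, 5]], [[3, 1], [1, 4]]):
    reg = G[0][0]*G[1][1] - G[0][1]*G[1][0]
    q1, q2 = successive_minima_2d(G)
    assert q1 * q2 <= 16 * reg      # (r!)^4 = (2!)^4 = 16
```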
Thus, in order to bound the regulator of the variety from below, it suffices to give a lower bound for the $\hat{h}(P_i)$'s. In the same way, a lower bound for the product $\prod_{i=1}^{r-1}\hat{h}(P_i)$ of the first $(r-1)$ heights of the generators, together with an upper bound for the regulator, gives us an upper bound for the greatest height $\hat{h}(P_r)$. Thus, it will be sufficient to bound the smallest height $\hat{h}(P_{1})$ from below. In section \[section-lower-non-torsion\], we quote some results about lower bounds for the height of non-torsion points.
Lower bounds for the height of non-torsion points {#section-lower-non-torsion}
=================================================
A rational point of the abelian variety has Néron-Tate height zero if and only if it is a torsion point. We are interested in lower bounds for the height of the elements of a basis of the free part of the Mordell-Weil group of the variety, and more generally for points of infinite order. There are two different directions in which these kinds of lower bounds are studied. Let $A/K_{0}$ be an abelian variety defined over a number field. Let $K/K_{0}$ be any finite extension of the ground field $K_{0}$ and let $P$ in $A(K)$ be a non-torsion point. In the first case, $A/K_0$ is fixed and the dependence on the degree $[K:K_{0}]$ is the main interest. This is a Lehmer-type problem. In [@bosser-surroca] lower bounds of the first kind are used. This is because $E/K_0$ is fixed, while the field $K$, which is the field of rationality of the point $P$, varies. In the second case, the emphasis lies on the dependence on the variety $A/K_0$. For this last one, there is a conjecture of S. Lang [@lang.dio.an page 92]: [*for every elliptic curve $E/K$, there is a positive number $c_K$ depending only on the degree $[K:{\mathbb{Q}}]$, such that, for all non-torsion points $P$ in $E(K)$,*]{} $$\label{lower.silv}
\hat{h}(P) \geq c_K \cdot \log N_{K/{\mathbb{Q}}}\Delta_{E/K}.$$ J. Silverman [@silverman.duke1984] proved Lang’s conjecture for an elliptic curve with integral $j$-invariant and generalised it to higher dimension [@silverman.duke1984]. (M. Hindry and J. Silverman [@hindry-silverman-lowerbound Theorem 0.3] proved such a lower bound for all elliptic curves, with the constant $c_K$ replaced by some function of the Szpiro ratio $\sigma_{E/K} = \frac{\log N_{K/{\mathbb{Q}}} \Delta_{E/K} }{\log N_{K/{\mathbb{Q}}} \mathcal{F}_{E/K}}$. This function, which is explicit, decreases with $\sigma_{E/K}$. This result shows that Szpiro’s conjecture implies Lang’s conjecture.)
Here we consider the second kind of bounds because we put the emphasis on the variety. Concerning this problem, D. Masser [@masser.LNM1290 Corollary 1] proved that *for every $K/K_{0}$, there exists a real number $c_{[K:{\mathbb{Q}}]}$ depending on $[K:{\mathbb{Q}}]$ such that, for all non-torsion points $P$ in $A(K)$, one has* $$\label{borne.lang.masser}
\hat{h}(P) \geq c_{[K:{\mathbb{Q}}]}\cdot h_{Falt}(A/K_{0})^{-(2g+1)}.$$ In fact, D. Masser proved a sharper bound, stronger even than the one obtained by replacing $h_{Falt}$ here with the [*stable*]{} Faltings’ height. However, this bound is enough for our application. (S. David [@david.bull.soc.93 Theorem 1.4] gives an explicit bound. His bound, valid for certain families of abelian varieties under some hypotheses, could tend to infinity when the height of the variety tends to infinity. See the comments on page 515 of [*loc. cit.*]{}.)
On the generators of the Mordell-Weil group and the order of the Tate-Shafarevich group {#section-bounds}
=======================================================================================
In this last section, we give the proofs of Theorem \[langs-conj\] and Theorem \[BSD+Sz-Sha\] and comment on these results.
[*Proof of Theorem \[langs-conj\].*]{} Applying to the inequality (\[minkowski\]), obtained from Minkowski’s theorem on successive minima, the lower bound (\[borne.lang.masser\]) for $\hat{h}(P_{1})$ and the conditional upper bound (\[borne-sha.reg\]) for the regulator we obtain the theorem. $\Box$
When $g=1$ and $K = {\mathbb{Q}}$, the bound of Theorem \[langs-conj\] should be compared with Lang’s conjecture (\[lang’sconj\]). S. Lang obtained a factor $e^{r^2}$ and he could not reduce it to $e^r$, as he himself remarked in [@lang.conj.dio Note on p. 170]. Our bound gives a factor which grows with $r$ as $e^{4r \log r + c r}$. This is because we use Minkowski’s theorem instead of Hermite’s. Concerning the height of the variety, we have a supplementary factor: $h^{(2g+1)(r-1)}\cdot h^{g[K:{\mathbb{Q}}]}$. The factor $ h^{g[K:{\mathbb{Q}}]}$ comes from the Matrix Lemma. The factor $h^{(2g+1)(r-1)}$ comes from the lower bound for non-torsion points. To bound the height of non-torsion points, in the one-dimensional case, S. Lang used his conjectured bound (\[lower.silv\]) and compared the discriminant of the curve with its conductor. As for the dependence on the conductor, we obtain ${\mathcal{F}}^{\frac{1}{4} + \epsilon({\mathcal{F}})}$, where $\epsilon$ depends only on $g$ and $K$ and $\epsilon({\mathcal{F}})$ tends to $0$ when ${\mathcal{F}}$ tends to infinity. By contrast, S. Lang suggested ${\mathcal{F}}^{\epsilon({\mathcal{F}})}\cdot (\log {\mathcal{F}})^r$, where $\epsilon({\mathcal{F}})$ tends to $0$ when ${\mathcal{F}}$ tends to infinity. This is because, to bound the leading coefficient of the $L$-function, he avoided the use of the functional equation, replacing it by some hypothetical bound of his own, inspired by the Riemann hypothesis on the zeta function and some analytic estimates.
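The claimed growth of the combinatorial factor $(r!)^4 \cdot 2^r$ as $e^{4r\log r + cr}$ can be checked numerically; $c = 1$ already works on the range below (our script).

```python
from math import lgamma, log

# log((r!)^4 * 2^r) = 4 * lgamma(r + 1) + r * log(2) <= 4 r log r + r
for r in range(1, 200):
    assert 4 * lgamma(r + 1) + r * log(2) <= 4 * r * log(r) + r
```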
We remark that we can deduce a lower bound for the regulator from inequality (\[minkowski\]) and the lower bound (\[borne.lang.masser\]) for non-torsion points: $${\mathrm{Reg}}(A/K) \geq (r!)^{-4}\cdot (c_{[K:{\mathbb{Q}}]})^r \cdot h_{Falt}(A/K)^{-r(2g+1)}.$$ With Proposition \[prop-sha.reg\] we obtain an upper bound for the order of the Tate-Shafarevich group: $$|\Sha(A/K)| \leq c_{[K:{\mathbb{Q}}],g}\cdot (r!)^4\cdot 2^r\cdot (c_{[K:{\mathbb{Q}}]})^{-r} \cdot D_K^g \cdot {\mathcal{F}}^{\frac{1}{4}}\cdot (\log {\mathcal{F}})^{4g[K:{\mathbb{Q}}]}\cdot (\log ({\mathcal{F}}\cdot D_{K}^{2g}))^{2g[K:{\mathbb{Q}}]} \cdot$$ $$\label{borne-naive-sha}
\cdot H^{[K:{\mathbb{Q}}]}\cdot h^{g[K:{\mathbb{Q}}] + r(2g+1)}.$$ Even though this is not made explicit here, we would like to point out that there should be an inequality of the form $ {\mathcal{F}}\ll H^{12}$. For an elliptic curve this is quite obvious because the Faltings’ height is linked with the minimal discriminant. In higher dimension, the implied constant in $\ll$ would depend at least on $g$ and $K$. With this inequality, we could deduce from (\[borne-naive-sha\]) an upper bound for $|\Sha(A/K)|$ which is independent of ${\mathcal{F}}$ and grows in the height as $$H^{3 + [K:{\mathbb{Q}}]} \cdot h^{7 g [K:{\mathbb{Q}}] + r (2g + 1)}.$$ On the other hand, an inverse inequality between the height and the conductor would lead to an upper bound as a function of ${\mathcal{F}}$, $r$, $K$ and $g$. This inequality is predicted by the following conjecture.
\[gral.szpiro\] Let $A$ be an abelian variety of dimension $g$ defined over a number field $K$. There exist real numbers $c_{1}$ and $c_{2}$ depending at most on $g$ and $K$ such that $$h_{Falt}(A/K) \leq c_{1} \log N_{K/{\mathbb{Q}}}({\mathcal{F}}_{A/K}) + c_{2}.$$
Looking at the function field analog and a theorem of P. Deligne, M. Hindry [@hindry.mordell-weil] suggests that we may take $c_{1}= \left(\frac{g}{2} + \epsilon \right)$, for every $\epsilon >0$. Playing with restriction of scalars, he adds: $c_{2} = (g^{2} + \epsilon) \log D_{K} + c_{\epsilon, [K:{\mathbb{Q}}]}$, where $c_{\epsilon, [K:{\mathbb{Q}}]}$ depends only on $\epsilon$ and $[K:{\mathbb{Q}}]$.
[*Proof of Theorem \[BSD+Sz-Sha\].*]{} Applying Conjecture \[gral.szpiro\] to (\[borne-naive-sha\]) and replacing $c_{1}$ by $\left(\frac{g}{2} + \epsilon \right)$ and $c_{2}$ by $(g^{2} + \epsilon) \log D_{K} + c_{\epsilon, [K:{\mathbb{Q}}]}$, we obtain the theorem. $\Box$
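The exponent bookkeeping behind the bound $H^{3+[K:{\mathbb{Q}}]}\cdot h^{7g[K:{\mathbb{Q}}]+r(2g+1)}$ obtained above from (\[borne-naive-sha\]) and ${\mathcal{F}}\ll H^{12}$ can be recorded mechanically. The sketch below is ours, tracks only the exponents of $H$ and $h$ (constants and $D_K$ factors are ignored), and writes $d$ for $[K:{\mathbb{Q}}]$.

```python
# substituting F^{1/4} -> H^3 and each log F -> (const) * h into (borne-naive-sha):
#   (log F)^{4gd} * (log(F * D^{2g}))^{2gd} * h^{gd + r(2g+1)}  collects to
#   h^{7gd + r(2g+1)}, while H^d * F^{1/4} collects to H^{3 + d}
def h_exponent(g, d, r):
    return 4 * g * d + 2 * g * d + (g * d + r * (2 * g + 1))

def H_exponent(d):
    return 12 // 4 + d

for g in (1, 2, 3):
    for d in (1, 2, 3):
        for r in (0, 1, 5):
            assert h_exponent(g, d, r) == 7 * g * d + r * (2 * g + 1)
            assert H_exponent(d) == 3 + d
```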
When the dimension of the variety is 1 and the number field is ${\mathbb{Q}}$, Theorem \[BSD+Sz-Sha\] gives $|\Sha(E/{\mathbb{Q}})| \ll {\mathcal{F}}^{1/4 + c + \gamma({\mathcal{F}})}$, where $c= 1/2 + \epsilon$, the function $\gamma$ depends on $r$ and $\gamma({\mathcal{F}})$ tends to $0$ when ${\mathcal{F}}$ tends to infinity. The bound of Theorem 1 of [@goldfeld-szpiro], which we have quoted in the introduction, is $|\Sha(E/{\mathbb{Q}})| \ll {\mathcal{F}}^{1/4 + c + \gamma({\mathcal{F}})}$, with $c= 3/2$ and $\gamma({\mathcal{F}})$ tending to $0$ when ${\mathcal{F}}$ tends to infinity. They expected ([@goldfeld-szpiro page 75]) $c > 1/2$, as we obtained. The difference between the numbers $c$ comes from the fact that they use the lower bound for the period: $\Omega \gg D^{-4} \gg H^{-3}$, where $D$ is the minimal discriminant of the curve (see [@goldfeld.banff1988 page 168]), while our Lemma \[local-falt-ineg\] gives: $c_{\infty} \gg h^{-1}\cdot H^{-1}$.
In the same paper D. Goldfeld and L. Szpiro proved [@goldfeld-szpiro Theorem 2] a sort of converse statement. Precisely, they proved that if their conjectured bound (\[conj.g-sz.sha<cond\]) holds for every elliptic curve over ${\mathbb{Q}}$, then a weak version of Szpiro’s conjecture holds for every elliptic curve defined over ${\mathbb{Q}}$. The proof uses the BSD-conjecture for all elliptic curves over ${\mathbb{Q}}$, but only in the case of rank zero, where it is a theorem. It would be interesting to investigate whether this result can also be generalized to any number field or to higher dimension.
[**Acknowledgments.** ]{} I would like to thank Carlo Gasbarri, Marc Hindry and Henri Darmon for some useful references and remarks concerning this work and Jean-Benoît Bost for pushing me to improve the presentation of the results. I would also like to thank the universities of Roma 2 and Roma 3 for the hospitality during my visits to Rome.
> Andrea Surroca Ortiz\
> ETH Zurich\
> surroca@math.ethz.ch
[^1]: Supported by a Marie Curie Fellowship of the European Community
---
abstract: 'Cooperative ring-exchange is suggested as a mechanism of quantum melting of vortex lattices in a rapidly-rotating quasi two dimensional atomic Bose-Einstein condensate (BEC). Using an approach pioneered by Kivelson [*et al*]{}. \[Phys. Rev. Lett. [**56**]{}, 873 (1986)\] for the fractional quantized Hall effect, we calculate the condition for quantum melting instability by considering large-correlated ring exchanges in a two-dimensional Wigner crystal of vortices in a strong ‘pseudomagnetic field’ generated by the background superfluid Bose particles. BEC may be profitably used to address issues of quantum melting of a pristine Wigner solid devoid of complications of real solids.'
address: |
$^1$The Institute of Mathematical Sciences, C. I. T. Campus, Chennai 600 113, India.\
$^2$ The Abdus Salam International Centre for Theoretical Physics, 34100 Trieste, Italy.
author:
- 'Tarun Kanti Ghosh$^{1,2}$ and G. Baskaran$^{1}$'
title: |
Cooperative Ring Exchange and\
Quantum Melting of Vortex Lattices in Atomic Bose-Einstein Condensates
---
Introduction
============
The creation and observation of triangular vortex lattices in rapidly-rotating atomic Bose-Einstein condensates (BECs) [@mad; @abo; @eng] have opened a new direction for the study of quantum vortex matter. Theoretical predictions [@gunn] of the existence of fractional-quantum-Hall-like states at even higher rotational speeds in quasi-two-dimensional atomic BECs have given further impetus to this fascinating field. The quantum melting of an ordered vortex lattice into an exotic quantum fluid of atoms at very low temperatures is a quantum phase transition, for which one would like to understand the mechanism of melting and the nature of the transition.
Melting of classical solids with short-range interatomic potentials in 2D is a well-studied subject, in which topological defects play a fundamental role. In the presence of long-range interactions, such as in a one-component Coulomb plasma in 2D, melting is instead dominated by ring exchanges [@choq] rather than topological defects. From this point of view, the logarithmic repulsion among the imposed vortices in a rotating BEC provides an opportunity to study the quantum melting of a ‘pristine’ Wigner solid with long-range forces, free from the complications of solid-state systems.
In this Letter we write down an effective Hamiltonian for the vortex degrees of freedom, motivated by an analogy [@thouless; @don] between the Magnus force acting on a vortex moving in a two-dimensional neutral superfluid and the Lorentz force acting on a charged particle in a magnetic field. We develop a theoretical approach, borrowing heavily from the pioneering ideas of Kivelson, Kallin, Arovas and Schrieffer (KKAS) [@kiv] developed in the context of the fractional quantized Hall effect (FQHE), suggest a cooperative ring exchange (CRE) mechanism for the quantum melting of vortex lattices in quasi-2D atomic BECs, and indicate a possible direction for a microscopic understanding of the quantum liquid of molten vortices.
In contrast to many recent theoretical works on atomic BECs, which exploit an analogy between the Hamiltonian of rotating neutral bosonic atoms and that of charged particles in an external magnetic field in two dimensions, our work uses the vortex (collective) coordinates directly and provides another microscopic approach to understanding quantum melting and the quantum Hall-like state that may be formed in these atomic systems. Existing theoretical works focus on exact diagonalization [@cooper] for small numbers of atoms, to gain some insight into quantum melting and the possible quantum Hall-like melted states. A recent interesting work [@mac] that studies the melting of vortex lattices in a rapidly-rotating 2D BEC also shows that the BEC is destroyed by the vortex lattices.
Experimentally, it is at present a challenging task to produce a vortex liquid state in a rapidly-rotating atomic BEC, in contrast to the formation of an incompressible liquid state of electrons in a high magnetic field at higher filling fractions. With the rapid advances in the field of laser-cooled atomic gases, one can anticipate obtaining ‘snapshots’ of the melted configurations of the vortex lattices, in which CRE should leave its unique signatures, as we mention at the end.
In the cooperative ring exchange approach to the FQHE, KKAS view the Laughlin quantum Hall state as a 2D Wigner solid of electrons in a strong magnetic field that has been quantum melted by cooperative ring exchange processes. Briefly, a ring exchange, as the name suggests, is a cooperative shift of a ring of contiguous particles in an ordered lattice (figure 1), resulting in a cyclic permutation within the ring. While the amplitude for a quantum tunneling event of a specific ring of $L$ sites is exponentially small, $\sim \alpha^L $ (with $\alpha $, the single-particle tunneling amplitude, being $<1$), the number of rings of size $L$ is exponentially large, $\sim e^{bL}$. Thus the total amplitude $\sim \alpha^L e^{bL}$ may diverge exponentially if $ - \ln \alpha < b $, leading to a proliferation of ring exchanges and a consequent quantum melting.
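The entropy-versus-amplitude competition described above can be sketched numerically. The values of $\alpha$ and $b$ below are arbitrary placeholders chosen only to illustrate the two regimes, not values taken from the paper:

```python
import math

# Sketch of the ring-exchange proliferation argument: a ring of L sites
# tunnels with amplitude ~ alpha**L, while the number of distinct rings of
# size L grows as ~ exp(b*L), so the total amplitude is
#   alpha**L * exp(b*L) = exp[(b + ln(alpha)) * L],
# which diverges with L whenever -ln(alpha) < b.

def total_amplitude(alpha, b, L):
    # combined amplitude of all rings of size L (schematic)
    return math.exp((b + math.log(alpha)) * L)

alpha = 0.5       # single-particle tunneling amplitude (< 1), placeholder
b_stable = 0.5    # -ln(0.5) ~ 0.693 > 0.5: large rings are suppressed
b_melt = 0.9      # -ln(0.5) < 0.9: ring exchanges proliferate

assert total_amplitude(alpha, b_stable, 100) < total_amplitude(alpha, b_stable, 10)
assert total_amplitude(alpha, b_melt, 100) > total_amplitude(alpha, b_melt, 10)
```

The crossover between the two regimes at $-\ln\alpha = b$ is exactly the melting condition stated in the text.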
This melting depends on the electronic filling fraction, the ratio between the density of conduction electrons and the density of flux quanta. At very low filling fractions, electrons are expected to form a Wigner crystal. At higher filling fractions, electrons form an incompressible liquid state and exhibit the quantized Hall effect. Similarly, we may expect the quantum melting of the vortex lattice to depend on the vortex filling factor, the ratio of the total number of vortices to the total number of bosons.
Hamiltonian of the vortices in a rotating quasi-2D BEC
=======================================================
We consider a large number of vortices in a rapidly rotating quasi-$2D$ BEC; the condition for the condensate to be quasi-2D is $ \mu = \rho_0 g_2 < \hbar \omega_z $, and the condition for the atoms in the condensate to occupy the atomic lowest Landau level generated by the fast external rotation is $ \rho_0 g_2 < 2 \hbar \Omega $. Here, $ \rho_0 $ is the boson density, $ g_2 = 2 \sqrt{2\pi} \hbar \omega_z a_z a $ is the effective interaction strength of a quasi-2D Bose system [@tin], $ a_z = \sqrt{\frac{\hbar}{m_0\omega_z}}$ is the harmonic-oscillator length along the $z$-direction, $ \omega_z$ is the trap frequency in the axial ($z$) direction, and $ \Omega $ is the trap rotation frequency. Also, $ a$ is the $s$-wave atomic scattering length. A vortex in a fluid is an excitation in which each fluid particle carries an angular momentum $m$ relative to the vortex center. Here, we treat a vortex as a point particle moving under the influence of the Magnus force. The Magnus force is an effective interaction between superfluid particles and vortices in relative motion [@thouless; @don]. The force acting on a single vortex [@thouless] is then $${\bf F} = {\bf v} \times \hat{z} (2\pi \hbar \rho_0).$$ Here, $ {\bf v}$ is the vortex velocity relative to the superfluid particles and $ \rho_0 $ is the superfluid particle density. The Magnus force is equivalent to the Lorentz force acting on a particle of charge $ e $ in a magnetic field; hence $ e B_{\rm eff} = 2 \pi \hbar \rho_0 $ is the pseudomagnetic field.
The interaction potential between two vortices separated by a distance $r$ is $$V(r) = - \frac{2\pi \hbar^2 \rho_0}{m_0} \ln \left( \frac{r}{\xi} \right),$$ where $ \xi \sim \sqrt{\frac{a_z}{\rho_0 a}} $ is the coherence length of the vortex core and $ m_0 $ is the mass of a superfluid particle. This potential is valid only when the distance between the two vortices is greater than the coherence length. Notice that the interaction strength between two vortices depends on the superfluid density as well as on the $s$-wave scattering length $a$.
The Hamiltonian of a rotating BEC containing vortices can be written in terms of the vortex centers (collective coordinates) as [@thouless] $$H_v = \sum_{i=1}^{N_v} \frac{\left[ {\bf p}_i - e {\bf A}_{\rm eff}({\bf r}_i) \right]^2}{2 m_v} - \frac{2\pi \hbar^2 \rho_0}{m_0} \sum_{i<j} \ln \left( \frac{|{\bf r}_i - {\bf r}_j|}{\xi} \right),$$ where $ N_v $ is the total number of vortices. The effective vortex mass $ m_v = \pi \rho_0 \xi^2 $ can in principle be derived from a microscopic approach [@thouless]. Since the coherence length is very small, the vortex mass is also small. This Hamiltonian is similar to that of charged particles moving under the Lorentz force of a magnetic field $ B_{\rm eff}$. The pseudo vector potential due to the Magnus force is ${\bf A}_{\rm eff} = - \frac{1}{2} {\bf r} \times {\bf B}_{\rm eff}$. For $ N_v $ vortices in an area $ A $, one gets the vortex filling factor $\nu_v = \frac{N_v}{A} \frac{ h}{e B_{\rm eff}} = \frac{N_v}{N}$, where $ N $ is the number of superfluid particles. Notice that the vortex filling factor $ \nu_v$ is just the inverse of the bosonic filling factor $\nu_b = \frac{N}{N_v}$. For large $ N_v $ the vortex density is approximately uniform and $ N_v = \frac{2 m_0 \Omega A}{h} $. The effective magnetic length is $ l_0 = \sqrt{\frac{\hbar }{ e B_{\rm eff}}} = \frac{1}{\sqrt{2 \pi \rho_0}} $. The pseudomagnetic field generated by the background superfluid particles quantizes the cyclotron motion, producing Landau levels for the vortices. The eigenspectrum of the single-vortex Hamiltonian is uniformly spaced with energy gap $ \hbar \omega_{\rm eff} $, where $ \omega_{\rm eff} = \frac{2\pi \hbar \rho_0}{ m_v} $ is the effective cyclotron frequency. The limit $ m_v \rightarrow 0 $ and/or large superfluid density ($ \rho_0$) is equivalent to confining the vortices to the lowest Landau level (LLL). We can project the Hamiltonian onto the LLL; the corresponding wave functions, degenerate eigenfunctions of the angular momentum $ m $, are (up to normalization) $$\phi_m(z) \propto z^m e^{-|z|^2/4 l_0^2}, \qquad m = 0, 1, 2, \ldots,$$ where $ z = x+i y $ and $(x,y)$ are the position coordinates of a vortex. When the vortices are confined to the lowest Landau level (i.e. the cyclotron degrees of freedom are frozen into the LLL), the kinetic degrees of freedom of the vortices are frozen, since the spacing between Landau levels, $ \hbar \omega_{\rm eff}$, is large compared with all other energies in the problem. The remaining degrees of freedom are the vortex guiding-center coordinates, $ {\bf R} = \frac{{\bf r}}{2} + \frac{l_0^2}{\hbar} ({\bf p } \times \hat {z})$. The guiding-center coordinate ${\bf R} $ specifies the center of a Gaussian-localized probability amplitude of width $l_0$. These coordinates carry no kinetic energy; hence the vortices in the LLL remain localized about a given guiding-center coordinate $ {\bf R}$ indefinitely.
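A quick numerical sanity check of the relations above: with $e B_{\rm eff} = 2\pi\hbar\rho_0$, the filling factor $(N_v/A)\,h/(eB_{\rm eff})$ reduces identically to $N_v/N$, and $l_0 = 1/\sqrt{2\pi\rho_0}$. All numbers below are illustrative placeholders in arbitrary units:

```python
import math

# Check the identity nu_v = (N_v/A) * h/(e*B_eff) = N_v/N for the
# pseudo-field e*B_eff = 2*pi*hbar*rho_0 generated by the superfluid.
# Units with hbar = 1 are assumed; N, N_v, A are illustrative.

hbar = 1.0
h = 2 * math.pi * hbar
A = 50.0            # system area
N = 1.0e5           # number of superfluid bosons
N_v = 200           # number of vortices
rho0 = N / A        # 2D boson density

eB_eff = 2 * math.pi * hbar * rho0
nu_v = (N_v / A) * h / eB_eff
assert abs(nu_v - N_v / N) < 1e-12       # filling factor = N_v/N exactly

# effective magnetic length l_0 = sqrt(hbar/(e*B_eff)) = 1/sqrt(2*pi*rho_0)
l0 = math.sqrt(hbar / eB_eff)
assert abs(l0 - 1 / math.sqrt(2 * math.pi * rho0)) < 1e-12
```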
Coherent state path integration
===============================
In this section we review the coherent state path integral formalism; for detailed derivations, please consult the references [@sch; @gir]. In the symmetric gauge, the wave function of a vortex in the LLL with guiding-center position $ {\bf R}$ is $$\phi_{\bf R}({\bf r}) = \frac{1}{\sqrt{2\pi}\, l_0} \exp \left[ - \frac{({\bf r} - {\bf R})^2}{4 l_0^2} + \frac{i}{2 l_0^2}\, \hat z \cdot ({\bf r} \times {\bf R}) \right].$$ It has the same form as a coherent state in a two-dimensional phase space [@sch]. Here, the state label $ {\bf R}$ is a continuous variable. The coherent state overlap is given by $$\langle {\bf R}_1 | {\bf R}_2 \rangle = \exp \left[ - \frac{({\bf R}_1 - {\bf R}_2)^2}{4 l_0^2} + \frac{i}{2 l_0^2}\, \hat z \cdot ({\bf R}_1 \times {\bf R}_2) \right].$$ The coherent states $ |{\bf R} \rangle $ form a nonorthogonal, overcomplete basis. Nevertheless, the projection operator $ P $ onto the LLL is given by $$P = \int \frac{d^2R}{2\pi l_0^2}\, | {\bf R} \rangle \langle {\bf R} |,$$ which is unity within the LLL since $ \langle {\bf R}_1 | P | {\bf R}_2 \rangle = \langle {\bf R}_1 | {\bf R}_2 \rangle $.
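The statement that $P$ acts as the identity within the LLL can be verified numerically from the standard LLL coherent-state overlap, written here in the complex-plane form $\langle R_j|R_k\rangle = \exp[R_j^* R_k/2l_0^2 - |R_j|^2/4l_0^2 - |R_k|^2/4l_0^2]$ with $l_0 = 1$:

```python
import numpy as np

l0 = 1.0  # magnetic length (units where l0 = 1)

def overlap(Ra, Rb):
    # <Ra|Rb> for LLL coherent states with complex guiding centers
    return np.exp(Ra.conjugate() * Rb / (2 * l0**2)
                  - abs(Ra)**2 / (4 * l0**2) - abs(Rb)**2 / (4 * l0**2))

# Verify that P = ∫ d^2R/(2*pi*l0^2) |R><R| acts as the identity:
# ∫ d^2R/(2*pi*l0^2) <R1|R><R|R2> should equal <R1|R2>.
R1, R2 = 0.7 + 0.3j, -0.4 + 1.1j
h = 0.02
x = np.arange(-8, 8, h)
X, Y = np.meshgrid(x, x)
R = X + 1j * Y
numeric = (overlap(R1, R) * overlap(R, R2)).sum() * h**2 / (2 * np.pi * l0**2)
exact = overlap(R1, R2)
assert abs(numeric - exact) < 1e-6
```

The Gaussian decay of the integrand makes the simple Riemann sum essentially exact here; the check passes to far better than the stated tolerance.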
We use the coherent state path integral [@sch; @gir] expression for the partition function to calculate the tunneling coefficient of a vortex. The partition function of the $ 2D $ interacting vortices in the pseudomagnetic field due to the Magnus force is $$Z (\nu_v ) = {\rm Tr}\, e^{- \beta H_v }.$$ Here, we discuss the main features of this formalism for a single vortex in the LLL in the complex plane; the generalization to the many-vortex system is straightforward. The coherent state in the complex plane is $$| R \rangle = \exp \left( - \frac{|R|^2}{4 l_0^2} \right) \exp \left( \frac{R\, a^{\dagger}}{\sqrt{2}\, l_0} \right) |0\rangle,$$ where $ R = X+iY $ is the guiding center coordinate of a vortex in the complex plane and the asterisk denotes complex conjugation. The coherent state overlap in the complex plane is \[overlap\] $$\langle R_j | R_k \rangle = \exp \left[ \frac{R_j^* R_k}{2 l_0^2} - \frac{|R_j|^2}{4 l_0^2} - \frac{|R_k|^2}{4 l_0^2} \right] = \exp \left[ - \frac{|R_j - R_k|^2}{4 l_0^2} + \frac{R_j^* R_k - R_j R_k^*}{4 l_0^2} \right].$$ Now the path integral representation of the partition function $ Z (\nu_v) $ can be obtained in the usual way. First, we split the inverse temperature $ \beta $ into a large number of equal intervals $ \epsilon = \beta /n $, i.e., $ e^{- \beta V } $ is written as $ [e^{- \epsilon V}]^n $, and then insert the projection operator $ P $ at each infinitesimally small interval. Then $$\langle R_f | e^{- \beta V} | R_i \rangle = \int \prod_{k=1}^{n} \frac{d^2R_k}{2\pi l_0^2}\ \prod_{j=0}^{n} \langle R_{j+1} | e^{- \epsilon V } | R_{j}\rangle,$$ where $ R_0 = R_i, R_{n+1} = R_f$. In general, the normal-ordered matrix elements of the Hamiltonian can be written $$V_{j,k} = V(R_j^{*}, R_k) = \frac{\langle R_j | V | R_k \rangle}{\langle R_j | R_k \rangle}.$$ The matrix element can then be written as $$\prod_{j=0}^{n} \langle R_{j+1} | e^{- \epsilon V } | R_{j}\rangle \simeq \prod_{j=0}^{n} \langle R_{j+1} | R_{j}\rangle\, e^{- \epsilon V(R_{j+1}^{*}, R_j)},$$ where we neglect terms of $ O (\epsilon^2) $ and higher by the standard procedure. Using Eq.(\[overlap\]) we obtain $$Z(\nu_v) = \int \prod_{k=1}^{n} \frac{d^2R_k}{2\pi l_0^2}\, \exp \left( \sum_j S_j \right),$$ where $$S_j = - \frac{1}{4 l_0^2} \left[ ( R_j^{*} - R_{j+1}^{*} ) R_j + R_{j+1}^{*} ( R_{j+1} - R_j ) \right] - \epsilon\, V (R_{j+1}^{*}, R_{j}).$$ The above path integral can be written as $$Z = \int D[{\bf R}]\, e^{- S[{\bf R}]},$$ where $$S[{\bf R}] = \int_{0}^{\beta} d\tau \left[ \frac{1}{4 l_0^2} \left( R^* \dot R - \dot R^* R \right) + V(R^*, R) \right].$$ This action is linear in time derivatives and hence discontinuous paths have finite action. It implies that the coherent state path integral is dominated by discontinuous paths and the continuum limit is ill defined. Despite these difficulties, the continuum version of the path integral can be used to develop a saddle-point approximation for the partition function [@kiv]. We are interested in the semiclassical limit, in which $ V(R) $ is a slowly varying function of its argument over the length scale $ l_0 $, so we can use the saddle-point approximation to evaluate the path integral.
The single-vortex path integral generalizes directly to many vortices. The action for many vortices is $$S ({\bf R}) = \int_{0}^{\beta} d\tau \left[ \frac{1}{4 l_0^2} \sum_j \left( R_j^* \dot R_j - \dot R_j^* R_j \right) + \sum_{j<k} V({\bf R}_j - {\bf R}_k) \right],$$ where \[pot\] $$V ({\bf R}) = \langle \phi_{{\bf R}}({\bf r}) | V({\bf r}) | \phi_{{\bf R}} ({\bf r}) \rangle$$ is the two-body interaction potential in the coherent-state representation.
In the saddle-point approximation, the classical path is obtained by minimizing the action, $ \frac{\delta S}{\delta R_j(\tau)}|_{R=R_c} = 0 $. The classical paths satisfy the following equations of motion \[dy\] $$\dot{\bf R}_j = \frac{l_0^2}{\hbar} \left( \nabla_j V_j \right) \times \hat z, \qquad {\rm where} \quad V_j = \sum_{k \neq j } V({\bf R}_j - {\bf R}_k).$$
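These first-order drift dynamics can be illustrated with a minimal sketch, assuming units $\hbar = l_0 = 1$ and a pair potential $V(r) = -\ln r$ (the logarithmic interaction with its strength set to unity). The drift velocity is perpendicular to the inter-vortex force, so a pair of vortices should simply co-rotate at fixed separation:

```python
import numpy as np

# Sketch of the guiding-center equations of motion quoted above,
#   dR_j/dt = (l0^2/hbar) (grad_j V_j) x z_hat,
# for two vortices with V(r) = -ln(r), in units hbar = l0 = 1.

def velocity(R1, R2):
    s = R1 - R2                    # separation vector (x, y)
    grad1 = -s / (s @ s)           # grad_1 V = -(R1 - R2)/|R1 - R2|^2
    # (a, b, 0) x z_hat = (b, -a, 0)
    v1 = np.array([grad1[1], -grad1[0]])
    return v1, -v1                 # grad_2 V = -grad_1 V

R1 = np.array([1.0, 0.0])
R2 = np.array([-1.0, 0.0])
dt, nsteps = 1e-3, 2000
for _ in range(nsteps):            # midpoint (RK2) integration
    v1, v2 = velocity(R1, R2)
    v1m, v2m = velocity(R1 + 0.5 * dt * v1, R2 + 0.5 * dt * v2)
    R1, R2 = R1 + dt * v1m, R2 + dt * v2m

# the separation is conserved: the vortices orbit their midpoint
assert abs(np.linalg.norm(R1 - R2) - 2.0) < 1e-6
```

Because the velocity is always perpendicular to $\nabla V$, no work is done and the pair separation is an invariant of the motion, which the integration confirms.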
The path integral can be expressed as a sum over saddle-point contributions, in which the contribution of paths in the neighborhood of each classical path is evaluated by expanding the action to quadratic order in $ R - R_c $. The partition function $ Z $ is calculated within this semiclassical approximation as a sum over classical paths, assuming the vortices to be bosons: $ Z = \sum_{c} D[R_c] e ^{-S[R_c]} $, where $ D[R_c] $ is the fluctuation determinant and $ S[R_c] $ is the action evaluated along the classical path. There are interesting issues [@hal] about the statistics of vortices in a compressible superfluid such as ours; in the present paper we assume them to be bosons. Keeping only the leading-order contribution, the partition function can be written as [@kiv] $ Z = Z_0 \sum_c \tilde{D}[R_c] e^{-S_0[R_c]}$, where $ {S} = S_0 + \tilde S $. Let us consider the contribution of a single large exchange ring to $ Z $. The real part of the action is $ \alpha_0 L $ (see Sec. V), where $ L $ is the number of vortices in the ring and $ \alpha_0 $ is independent of the path. The fluctuation determinant [@kiv] is $ \tilde{D}[R_c] \sim \exp[-\delta \alpha L + O (\ln L)]$, where $ \delta \alpha $ is a real constant which renormalizes $ \alpha_0$. The imaginary part, the phase change for a cooperative motion around a ring, is $ \theta = \frac{e}{\hbar} \oint {\bf A}_{\rm eff} \cdot d{\bf l} = 2 \pi N $, where $ N $ is the number of superfluid particles enclosed by the ring [@hal]. This is the analog of the Aharonov-Bohm phase factor for a charged particle moving in a magnetic field. The partition function thus becomes $ Z \sim \exp[- \alpha L \pm i 2\pi N ]$, where $ \alpha = \alpha_0 + \delta \alpha $.
Cooperative ring exchange mechanism
===================================
How does the vortex lattice melt? To understand the melting of the vortex lattice, the Lindemann criterion cannot be used, since it applies to the melting of classical solids: the vortices do not execute nearly independent thermal motions as atoms in a classical solid do. The dynamics of the present problem is governed by a Hamiltonian with only first-order time derivatives, which gives rise to its own peculiar properties. If we consider a rigid Wigner solid and allow one ring of vortices to tunnel coherently, they see a periodic potential with the periodicity of the lattice (Fig 1). If we observe the coherent motion of one chain over a time long compared to the tunneling time ($ \tau_0 $), the potential that it sees will not be periodic. The physically important rings being one-dimensional and long, this can destroy the long-range order along the chain rather easily. This in turn feeds back and affects the rest of the neighborhood, resulting possibly in a molten state. It also displaces the paths of the vortex wave packets away from the edges of the triangles of the lattice, which means that the self-consistent potential seen by a vortex no longer has a component with long-range order.
Calculation of the tunneling coefficient
========================================
The numerical value of the tunneling coefficient $ \alpha (\nu_v ) $ determines whether the vortices form a liquid state or a Wigner crystal. To estimate this tunneling coefficient we consider the simple exchange path shown in figure 1: one row of vortices shifts by one step in the $ X $ direction in the static background potential of all the other vortices, $ X_i(\beta) = X_i(0) + d $ and $ Y_i(\beta) = Y_i(0) $, where $ d = \sqrt{\frac{4 \pi }{\sqrt{3} \nu_v}} l_0 $ is the lattice constant of the Wigner crystal at the given density $ \nu_v $. There is no net phase change, since this straight path does not enclose any area. We impose periodic boundary conditions in the $ X $ direction, $ X_i(\tau) = X_{i+ L}(\tau) $.
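As a consistency check on the quoted lattice constant, equating the site density of a triangular lattice with spacing $d$, namely $2/(\sqrt3 d^2)$, to the vortex density fixed by the filling $\nu_v = n_v \cdot 2\pi l_0^2$ should reproduce $d = \sqrt{4\pi/(\sqrt3\,\nu_v)}\, l_0$:

```python
import math

# Cross-check of the Wigner-crystal lattice constant quoted above.
# A triangular lattice of spacing d has areal density n = 2/(sqrt(3)*d^2);
# the filling nu_v = n_v * 2*pi*l0^2 fixes the vortex density n_v.

l0 = 1.0
nu_v = 0.5
n_v = nu_v / (2 * math.pi * l0**2)               # vortex areal density
d_from_density = math.sqrt(2 / (math.sqrt(3) * n_v))
d_quoted = math.sqrt(4 * math.pi / (math.sqrt(3) * nu_v)) * l0
assert abs(d_from_density - d_quoted) < 1e-12    # identical expressions
```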
For $ |Y_j| \ll d $, the two-body interaction potential in the coherent-state representation (given in Eq. \[pot\]) can be approximated by $$V \simeq \frac{1}{2} \left\{ \sum_{j = 1}^{L} \left[ Q_x \left( 1 - \cos 2\pi X_j \right) + Q_y Y_j^2 \right] + \sum_{j>k} \left[ K_x(j-k)(X_j - X_k)^2 + K_y(j-k) (Y_j - Y_k)^2 \right] \right\},$$ where the $ X_j'$s and $ Y_j'$s are in units of the lattice constant $ d $. Here, $ K_x(j-k) = \frac{\partial^2V_{jk}}{\partial X_j \partial X_k}|_{R_C} $ and $ K_y(j-k) = \frac{\partial^2V_{jk}}{\partial Y_j \partial Y_k}|_{R_C} $ are evaluated along the classical paths. The best fit to the actual potential is obtained with $ Q_x / Q_y \sim 0.6 $ for $ \nu_v \sim 1/2 $; $ Q_x $, $ Q_y $ and $ K_x(j) $, $ K_y(j) $ depend only weakly on $ \nu_v $. The calculation of the fitting parameters $ Q_x $ and $ Q_y $ is given in the Appendix. Notice that $ Q_x/Q_y < 1 $ implies that when the one-dimensional chain moves coherently in the $ X $ direction, the potential barrier is much smaller than it would be for motion in the $ Y $ direction.
The dimensionless Euclidean action is then $$S = \int_{0}^{\beta} d\tau \left[ \frac{1}{4 l_0^2} \sum_j \left( R_j^* \dot R_j - \dot R_j^* R_j \right) + V \right].$$ Since $ S $ is a quadratic form in the $ Y_j $, the motion in the $ Y $ direction can be integrated out exactly. After doing the $ Y_j $ integration, we obtain an effective action $ S_{\rm eff} $ for the $ X $ motion, with a quadratic kinetic energy,
$$S_{\rm eff} = \int_{0}^{\beta} d\tau \left\{ \sum_{j<k} \left[ M(j-k)\, \dot\phi_j \dot\phi_k + K_x(j-k) (\phi_j - \phi_k)^2 \right] + \frac{Q_x}{2} \sum_j \left( 1 - \cos \phi_j \right) \right\},$$ where $ \tau $ is the imaginary-time variable, $( M(j-k))^{-1} = \frac{1}{8} [ (\frac{Q_y}{2} + \sum_j K_y(j)
\delta_{jk})- \sum_{j<k} K_y(j-k)] $ and $ \phi_j = 2 \pi X_j $. $ S_{\rm eff} $ is the effective action of a one-dimensional sine-Gordon chain. The classical paths satisfying the boundary conditions $ \phi_j (0)=0 $ and $ \phi_{j}(\beta) = 2 \pi $ correspond to the simultaneous coherent motion of all the vortices, i.e. $ \phi_j(\tau) = \phi_{0} (\tau) + 2\pi j $. For this simultaneous coherent motion, the above effective action reduces to that of $L$ identical sine-Gordon degrees of freedom, $$S_{\rm eff} = \int d\tau \sum_{i}^{L} \left[ \frac{M}{2}\, \dot\phi_0^2 + \frac{Q_x}{2} \left( 1 - \cos \phi_0 \right) \right].$$ Using the Euler-Lagrange equation of motion, one can calculate $ \dot {\phi_0}$; the effective action along the classical path then becomes $ S[R_c] = \alpha_0 L $, where $ \alpha_0(\nu_v) = \frac{4}{\sqrt{3}\pi \nu_v} \sqrt{\frac{Q_x}{Q_y}} $. Note that $ \alpha_0 $ is independent of the $ K$'s. To evaluate the fluctuation determinant we must take the continuum limit of the effective action $ S_{\rm eff}$: $( \phi_j - \phi_k)$ is replaced by $ (j-k) \partial_x \phi $, but $\sum_j j^2 K_x(j)$ diverges linearly, since $ K_x(j) \sim 1/j^2 $. This is an infrared divergence, and the continuum model must be constructed by choosing the upper cutoff carefully. Here we do not calculate $ \delta \alpha $, which is a non-trivial task.
Kivelson [*et al*]{}. [@kiv] have given an extensive discussion of how to map a Wigner crystal of electrons in a magnetic field onto the discrete Gaussian model. Following ref. [@kiv], one can map the sum over all classical paths onto a sum over classical spin configurations. All the contributions of ring exchanges occurring in a time interval $ \tau_0 $ are summed by modeling the change in the action by a discrete Gaussian model in an imaginary field [@chu], $$H_{\rm DG} = \alpha_0(\nu_v) \sum_{<\lambda,\gamma>} (S_{\lambda} - S_{\gamma})^2 + i h(\nu_v) \sum_{\lambda} S_{\lambda},$$ where $ <\lambda,\gamma> $ denotes a nearest-neighbor pair on the dual lattice and $ S_{\lambda} $ is an integer variable associated with every triangle of the lattice: $ S_{\lambda} $ counts the number of clockwise minus counterclockwise ring exchanges that surround the plaquette $ \lambda $. The function $ \alpha_0(\nu_v) $ is a measure of the tunneling barrier, and $ h(\nu_v)$ is the phase factor which arises from the pseudomagnetic flux enclosed by the exchange rings. This model is known to have a phase transition [@chu] at a critical value $ \alpha = \alpha_c(\nu_v) \sim 1.1 $ [@kiv]. For $ \alpha (\nu_v) > \alpha_c (\nu_v) $, the ground state is a vortex Wigner crystal, and for $ \alpha (\nu_v) < \alpha_c (\nu_v)$ it is a quantum-mechanical vortex liquid. In our calculation we find that the quantum melting occurs at $ \nu_v \sim \frac{1}{2} $. The current experiments [@mad; @abo], with $ \nu_v \ll \frac{1}{2} $, are in the regime of the vortex-lattice ground state. Our result is thus consistent with the experiments, but it does not match the other theoretical results [@cooper; @mac] very well. Those numerical calculations are based on small numbers of atomic bosons and vortices, whereas our approach assumes large numbers of both; we believe this discrepancy is related to the system size.
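Putting the pieces together, one can evaluate $\alpha_0(\nu_v)$ with the quoted ratio $Q_x/Q_y \approx 0.6$ and compare it with $\alpha_c \approx 1.1$. The sketch below neglects the renormalization $\delta\alpha$ from the fluctuation determinant, as in the text, and solves $\alpha_0(\nu_v) = \alpha_c$ for the melting filling:

```python
import math

# Evaluate the tunneling coefficient derived above,
#   alpha_0(nu_v) = 4/(sqrt(3)*pi*nu_v) * sqrt(Qx/Qy),
# with Qx/Qy ~ 0.6, and compare with the discrete-Gaussian-model critical
# value alpha_c ~ 1.1 (delta-alpha is neglected here).

def alpha0(nu_v, QxQy=0.6):
    return 4.0 / (math.sqrt(3.0) * math.pi * nu_v) * math.sqrt(QxQy)

alpha_c = 1.1
# alpha_0 is inversely proportional to nu_v, so the threshold filling is
nu_melt = 4.0 * math.sqrt(0.6) / (math.sqrt(3.0) * math.pi * alpha_c)
assert abs(alpha0(nu_melt) - alpha_c) < 1e-12
assert 0.45 < nu_melt < 0.55        # melting at nu_v ~ 1/2, as in the text

# crystal at low filling, liquid at high filling
assert alpha0(0.1) > alpha_c        # Wigner-crystal regime
assert alpha0(1.0) < alpha_c        # vortex-liquid regime
```

With these inputs the threshold comes out at $\nu_v \approx 0.52$, consistent with the $\nu_v \sim 1/2$ quoted in the text and far above the experimental fillings $\nu_v \lesssim 10^{-4}$.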
One might object that a large system was considered in ref. [@mac], with a melting condition comparable to the numerical result of ref. [@cooper]. In ref. [@mac], the root-mean-square displacement of a vortex from its equilibrium position was first calculated in terms of the filling factor $\nu_v$; the Lindemann criterion was then invoked, assuming that melting occurs when the fluctuation of the vortex position reaches $ 0.15 d $, which yields a melting condition close to the numerical result [@cooper]. Although the Lindemann criterion gives a reasonable description of the melting of a classical solid, there is little evidence that it can be applied to the melting of a vortex lattice: the vortices are intrinsically quantum objects whose equations of motion are quite different from those of atoms in a harmonic crystal.
Summary
=======
In this paper, we treated the vortices as new degrees of freedom and considered a model Hamiltonian of interacting vortices. We then assumed that the vortices lie in the lowest Landau level, owing to the low vortex mass and the high density of the superfluid Bose particles. The concept of cooperative ring exchange was introduced to explain the mechanism of quantum melting of the vortex Wigner crystal. Finally, we estimated the tunneling coefficient, which determines the condition for the quantum melting instability of the vortex lattice. The latest experiments, [@mad] with $ N \sim 10^5 $, $ N_v \sim 10 $ ($ \nu_v \sim 10^{-4} $) and [@abo] with $ N \sim 10^{7} $, $ N_v \sim 100 $ ($ \nu_v \sim 10^{-5}$), are in the regime in which the ground state is a vortex lattice. It is a challenge for experimentalists to produce a vortex liquid state in a rotating Bose condensate.
Our present work, resulting in a discrete Gaussian model (equation 10), predicts a Laughlin-like even-denominator bosonic vortex filling fraction, $\nu_v = \frac{1}{2}$, to emerge on quantum melting. We can also determine the asymptotic form of the wave functions [@dhlee]. Along with a rich phase structure, the discrete Gaussian model also determines the nature of the quantum melting transition. To the extent that the vortex degrees of freedom retain their identity, the results of the CRE approach may remain valid in the quantum-melted region. This needs to be investigated further.
As mentioned earlier, CRE processes should leave their fingerprint as specific fluctuation patterns (figure 1) that preempt quantum melting. It should be interesting to look for snapshots of such displaced large rings in actual vortex-lattice imaging.
Calculation of the parameters $Q_x$ and $Q_y$
=============================================
Here, we describe how to calculate the parameters $ Q_x $ and $ Q_y $. We consider the simplest possible exchange path, namely one line of vortices shifting coherently within the Wigner crystal. When the line $ \cal L $ is displaced, we have $ {\bf R}_i = {\bf T}_i + {\bf d} \delta_{i \in \cal L}$, with $ \delta_{i \in \cal L} $ unity if and only if lattice site $ i $ lies on the line in question. The matrix element of the potential between two vortices in the coherent basis states is $$V( {\bf R}) = \langle \phi_{{\bf R}}({\bf r}) | V({\bf r})| \phi_{{\bf R}}({\bf r}) \rangle.$$ Accordingly, the energy of the displaced-line configuration relative to that of the perfect Wigner crystal is $$\Delta E = \frac{1}{2} \sum_{i,j} \left[ V({\bf R}_i - {\bf R}_j) - V({\bf T}_i - {\bf T}_j) \right].$$ This sum can be broken up into three terms. The first term includes all pairs $(i,j)$ in which both sites $ i $ and $ j $ lie off the line; this contribution to $ \Delta E $ is zero. The second term involves all pairs $(i,j)$ where one of the sites, say $ i $, is on the line and the other, $ j $, is off the line: $$\Delta E_2 = \sum_{i \in \cal L,\ j \notin \cal L} \left[ V({\bf T}_i + {\bf d} - {\bf T}_j) - V({\bf T}_i - {\bf T}_j) \right].$$ Clearly the line energy is extensive, hence the energy per tunneling vortex can be written $$U({\bf d}) = \Delta E_2 / L = \sum_{j \notin \cal L} \left[ V({\bf d} - {\bf T}_j) - V(- {\bf T}_j) \right],$$ where we have chosen the origin to lie on the line. The third and final term is that arising from both $ i $ and $ j $ on the line. Since the tunneling is cooperative, this contribution to the classical action vanishes.
By allowing one line of vortices to tunnel coherently along the line, one can fit the change in energy $\Delta E$ into a periodic potential with the appropriate choice of the parameter $ Q_x $. On the other hand, by allowing one line of vortices to tunnel coherently perpendicular to the line, one can fit the change in energy into a quadratic potential with appropriate choice of the parameter $ Q_y$.
K. W. Madison, F. Chevy, W. Wohleben, and J. Dalibard, Phys. Rev. Lett. [**84**]{}, 806 (2000).
J. R. Abo-Shaeer, C. Raman, J. M. Vogels, and W. Ketterle, Science [**292**]{}, 476 (2001).
P. Engels, I. Coddington, P. C. Haljan, V. Schweikhard, and E. A. Cornell, Phys. Rev. Lett. [**90**]{}, 170405 (2003).
N. K. Wilkin and J. M. F. Gunn, Phys. Rev. Lett. [**84**]{}, 6 (2000); S. Viefers, T. H. Hansson and S. M. Reimann, Phys. Rev. A [**62**]{}, 053604 (2000); Tin-Lun Ho, Phys. Rev. Lett. [**87**]{}, 060403 (2001).
Ph. Choquard, and J. Clerouin, Phys. Rev. Lett. [**50**]{}, 2086 (1983).
Q. Niu, P. Ao, and D. J. Thouless, Phys. Rev. Lett. [**72**]{}, 1706 (1994).
R. J. Donnelly, [*Quantized Vortices in Helium II*]{}, (Cambridge, 1991).
S. Kivelson, C. Kallin, D. P. Arovas, and J. R. Schrieffer, Phys. Rev. Lett. [**56**]{}, 873 (1986); [*ibid*]{}, Phys. Rev. B [**36**]{}, 1620 (1987).
N. R. Cooper, N. K. Wilkin, J. M. F. Gunn, Phys. Rev. Lett. [**87**]{}, 120405 (2001).
Jairo Sinova, C.B. Hanna, A. H. MacDonald, Phys. Rev. Lett. [**89**]{}, 030403 (2002).
Tin-Lun Ho and Michael Ma, J. Low Temp. Phys. [**115**]{}, 61 (1999).
L. Schulman, [*Techniques and Applications of Path Integration*]{}, (Wiley New York, 1981).
S. M. Girvin and T. Jach, Phys. Rev. B [**29**]{}, 5617 (1984).
R. Y. Chiao, A. Hansen and A. A. Moulthrop, Phys. Rev. Lett. [**54**]{}, 1339 (1985); F. D. M. Haldane, and Y. S. Wu, Phys. Rev. Lett. [**55**]{}, 2887 (1985).
S. T. Chui and J. D. Weeks, Phys. Rev. B [**14**]{}, 4978 (1976); W.Y. Shih and D. Stroud, Phys. Rev. B [**32**]{}, 158 (1985).
Dung-Hai Lee, G. Baskaran and S. A. Kivelson, Phys. Rev. Lett. [**59**]{}, 2467 (1987).
---
abstract: 'An analysis of the Type Ic supernova (SN) 2004aw is performed by means of models of the photospheric and nebular spectra and of the bolometric light curve. SN2004aw is shown not to be “broad-lined”, contrary to previous claims, but rather a “fast-lined” SNIc. The spectral resemblance to the narrow-lined Type Ic SN1994I, combined with the strong nebular \[\] emission and the broad light curve, point to a moderately energetic explosion of a massive C+O star. The ejected $^{56}$Ni mass is $\approx 0.20$[M$_{\odot}$]{}. The ejecta mass as constrained by the models is $\sim 3-5$[M$_{\odot}$]{}, while the kinetic energy is estimated as [$E_{\rm K}$]{}$\sim 3-6 \times 10^{51}$ ergs. The ratio [$E_{\rm K}$]{}/[$M_{\textrm{ej}}$]{}, the specific energy which influences the shape of the spectrum, is therefore $\approx 1$. The corresponding zero-age main-sequence mass of the progenitor star may have been $\sim 23-28$[M$_{\odot}$]{}. Tests show that a flatter outer density structure may have caused a broad-lined spectrum at epochs before those observed without affecting the later epochs when data are available, implying that our estimate of [$E_{\rm K}$]{} is a lower limit. SN2004aw may have been powered by either a collapsar or a magnetar, both of which have been proposed for gamma-ray burst-supernovae. Evidence for this is seen in the innermost layers, which appear to be highly aspherical as suggested by the nebular line profiles. However, any engine was not extremely powerful, as the outer ejecta are more consistent with a spherical explosion and no gamma-ray burst was detected in coincidence with SN2004aw.'
author:
- |
P. A. Mazzali,$^{1,2}$ [^1], D. N. Sauer$^{3}$, E. Pian$^{4,5}$, J. Deng$^{6}$, S. Prentice$^{1}$, S. Ben Ami$^{7}$ S. Taubenberger$^{2,8}$ and K. Nomoto$^{9}$\
\
$^{1}$Astrophysics Research Institute, Liverpool John Moores University, IC2, 134 Brownlow Hill, Liverpool L3 5RF, United Kingdom\
$^{2}$Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching bei München, Germany\
$^{3}$German Aerospace Center (DLR), Institute of Atmospheric Physics, 82234 Oberpfaffenhofen, Germany\
$^{4}$INAF IASF-Bo, via Gobetti 101, 40129 Bologna, Italy\
$^{5}$Scuola Normale Superiore, Piazza dei Cavalieri, 7, 56126 Pisa, Italy\
$^{6}$National Astronomical Observatories, CAS, 20A Datun Road, Chaoyang District, Beijing 100012, China\
$^{7}$Smithsonian Astrophysical Observatory, 60 Garden St., Cambridge MA-02138, USA\
$^{8}$European Southern Observatory, Karl-Schwarzschild-Str. 2, 85748 Garching bei München, Germany\
$^{9}$IPMU, Kashiwa, 277-8583, Japan
date: 'Accepted ... Received ...; in original form ...'
title: 'Modelling the Type Ic SN 2004aw: a Moderately Energetic Explosion of a Massive C+O Star without a GRB'
---
\[firstpage\]
radiative transfer – line: formation – line: identification – supernovae: general – supernovae: individual: SN2004aw – gamma-ray bursts: general
Introduction {#sec:intro}
============
Type Ic supernovae (SNe) are H-/He-poor core-collapse supernovae [@clow97; @fil97; @Matheson2001; @Modjaz2016]. Significant diversity can be found among the He-poor SNe whose data have been published [@Bianco2014; @Taddia2015; @Lyman2016; @Prentice2016]. For example, models of the gamma-ray burst (GRB) SN1998bw [@iwa98] indicate that it ejected [$M_{\textrm{ej}}$]{}$\sim 10$[M$_{\odot}$]{} of material with kinetic energy [$E_{\rm K}$]{}$\sim 4\times
10^{52}$ erg, and synthesised $\sim 0.4$[M$_{\odot}$]{} of [$^{56}$Ni]{}, which powered the light curve. These results suggested a progenitor of $\sim 40$[M$_{\odot}$]{}. In contrast, SN1994I [@ric96] was less luminous and energetic. Radiation transport models [@sau06] show that it ejected only $\sim 1$[M$_{\odot}$]{} of material with [$E_{\rm K}$]{}$\sim 10^{51}$ erg. Both the [$E_{\rm K}$]{} and the mass of [$^{56}$Ni]{} synthesised by SN1994I ($M_\mathrm{Ni}\sim0.08$[M$_{\odot}$]{}) are similar to ordinary SNeIIP, which suggests a progenitor mass of $\sim 15$[M$_{\odot}$]{}. A star of this mass must undergo severe envelope stripping to result in anything other than a SNIIP. @hachinger12 showed that small amounts of He and H are sufficient to transform a spectrum from H-/He-poor to H-poor/He-rich and from H-poor/He-rich to H-/He-rich, respectively. How this mass loss occurs is not fully understood, but strong binary interaction is considered to be the most likely way to strip a progenitor of its outer layers [@nom95; @Eldridge2013].
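For orientation, the specific kinetic energies [$E_{\rm K}$]{}/[$M_{\textrm{ej}}$]{} implied by the model results quoted above and in the abstract can be tabulated directly. The SN 2004aw entries below are midpoints of the quoted intervals ($3-6\times10^{51}$ erg and $3-5$[M$_{\odot}$]{}), an assumption made here only for illustration:

```python
# Specific kinetic energies E_K/M_ej (in units of 10^51 erg per M_sun)
# from the modelling results quoted in the text and abstract.

sne = {
    "SN 1998bw": (4e52, 10.0),   # (E_K [erg], M_ej [M_sun]), from the text
    "SN 1994I":  (1e51, 1.0),    # from the text
    "SN 2004aw": (4.5e51, 4.0),  # midpoints of 3-6e51 erg and 3-5 M_sun
}

for name, (ek, mej) in sne.items():
    specific = ek / 1e51 / mej
    print(f"{name}: E_K/M_ej ~ {specific:.1f}")
```

SN 2004aw comes out near the [$E_{\rm K}$]{}/[$M_{\textrm{ej}}$]{}$\approx 1$ of normal SNe Ic such as SN 1994I, well below the $\sim 4$ of the GRB-SN 1998bw, in line with the abstract.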
Given the large range of [$M_{\textrm{ej}}$]{} of SNeIc we can infer that a wide range of progenitor stars can end their lives as SNeIc, depending on their evolution. Many of the best observed SNeIc are luminous, massive, and energetic, and belong to an extreme subclass sometimes called hypernovae. These are characterised by early spectra showing very broad absorption features indicative of very high ejecta velocities [@maz02; @mazzali10]. Some of these SNe have been discovered in coincidence with long-duration gamma-ray bursts (GRBs) [e.g., @gal98; @maz06a; @Ashall2017]. While the collapsar scenario, involving the formation of a black hole and the subsequent accretion on to it of additional material from the stellar core [@mac99; @woo06; @fry07] has enjoyed much success, the fact that all GRB-SNe have [$E_{\rm K}$]{}$\sim 10^{52}$erg, and that the collimation-corrected energy of the associated GRBs is always much smaller than the SN [$E_{\rm K}$]{} has been suggested to indicate that GRB-SNe could be energised by magnetars [@mazzali14; @Greiner2015; @Ashall2017]. SN2006aj, which was associated with an X-ray flash, was modelled as the explosion of a star of $M_{\rm ZAMS}\sim 20$ M$_{\sun}$, and a magnetar was claimed to have energised the explosion also in this case [@maz06b]. Magnetars may therefore be responsible for most, perhaps all SNe associated with relativistic events, which would be in keeping with the original suggestion that the collapsar is a failed SN [@mac99]. We also know SNe which have a high-velocity, high-energy tail to their ejecta, which causes broad absorption lines, but were not associated with a GRB [[e.g. ]{}SN1997ef, @maz00b]. In some cases the mass and energies involved may have been too small [[e.g. ]{}SN2002ap, @maz02]; in other cases the orientation of the event may have been unfavourable for the detection of a possible GRB [[e.g. ]{}SN2003jd, @val08; @maz05].
Extracting SN properties requires good observational data as well as explicit modelling of these data (spectra and light curves), which implies some amount of work and has been done for only a very small number of SNe. Although significant uncertainties affect the estimates of mass and energy obtained even with this method, @mazzali13 show that properties derived using scaling arguments can easily lead to much larger uncertainty, while properties derived using simplistic approaches are not to be trusted at all. It is therefore important not only to extract the properties of the most extreme SNe, but to map the entire range of [$M_{\textrm{ej}}$]{}, [$E_{\rm K}$]{}, and [$^{56}$Ni]{} mass in order to understand how SNeIc are produced and what is the relation to other types of stripped-envelope SNe, which may show different distributions of properties [[e.g. ]{} @Prentice2017a].
SN2004aw is an interesting SN to study because it appears to be an intermediate case and it was very well observed [@tau06]. Under the classification scheme presented in @Prentice2017a it is classified as a SNIc-6. It was spectroscopically similar to SN1994I and PTF12gzk [@benami2012], which was characterised by high line velocities ($>20,000$ [kms$^{-1}$]{}). SN2004aw is intermediate between these two SNe in velocity. In Figure \[fig:cf94I04aw\] we compare near-maximum spectra of the three SNe. Despite the large velocity difference between the spectra, it is apparent that none of them show the extreme line blending that is typical of broad-lined SNeIc. The light curve of SN2004aw is somewhat broader than that of SN1994I, and depending on the reddening that is assumed it may also be more luminous. SN2004aw may therefore represent an intermediate type of event, and as such it provides a good opportunity to further our understanding of He-poor SNe.
![Spectra of SN2004aw (2004 Mar 24, 1 day after $B$-band maximum), 1994I (1994 Apr 4, 4 days before $B$-band maximum) and PTF12gzk (2012 Aug 2, 2 days before $B$-band maximum), showing the close similarity in spectral shape and line width and the shift in line velocity. PTF12gzk has the highest velocities, and SN1994I has the lowest, especially if the early epoch of the spectrum shown here is taken into account.[]{data-label="fig:cf94I04aw"}](Fig1_speccomp_04aw_94_12gzk.eps){width="88mm"}
In this paper we describe our one-dimensional (1-D) models for the photospheric and nebular spectra of SN2004aw, and for the bolometric light curve. Using our 1-D radiative transfer codes, we constrain the ejected mass, kinetic energy, and ejected $^{56}$Ni mass of SN2004aw, which are then compared with those of other Type Ic SNe.
We analysed SN2004aw in the spirit of abundance tomography [@mazzali14]. In Section 2 we describe our models for the early-time spectra, and discuss how they were used to establish a density distribution. We then proceed to the nebular spectra, and show how these can be used to determine the mass and density of the ejecta at low velocities, as well as their composition, in particular with respect to the amount and distribution of [$^{56}$Ni]{}. We also show how the profile of the emission lines can be used to optimise the model (Section 3). We then use the model of the ejecta obtained through spectral modelling, including the distribution of [$^{56}$Ni]{}, to compute a synthetic bolometric light curve, which we compare to the bolometric light curve of SN2004aw constructed from the available photometry (Section 4). Finally, in Section 5 we discuss our results, and place SN2004aw in the context of type Ic SNe.
Following @tau06, a distance modulus of $\mu=34.17$ for the host galaxy, NGC 3997, was used. A combined Galactic plus local reddening of $E(B-V)=0.37$ is assumed throughout this paper.
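For reference, a distance modulus converts to a luminosity distance via $d = 10^{(\mu+5)/5}$ pc. A minimal sketch, using the adopted value from the text:

```python
# Convert the adopted distance modulus of NGC 3997 to a distance.
# mu = 34.17 is the value quoted in the text (following @tau06).
mu = 34.17
d_pc = 10 ** ((mu + 5.0) / 5.0)   # distance in parsec
d_mpc = d_pc / 1.0e6              # distance in Mpc
print(f"{d_mpc:.1f} Mpc")         # prints "68.2 Mpc"
```

This places SN2004aw at roughly 68 Mpc, consistent with a well-observed but not especially nearby event.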
Models for the Photospheric Spectra {#sec:early}
===================================
We modelled a series of photospheric spectra of SN2004aw using our Montecarlo SN spectrum synthesis code. Spectra with epochs between 1 and 28 days after $B$ maximum were selected from @tau06.
Our method is based on a Monte-Carlo solution of the line transfer that was developed by @abb85, @maz93, @luc99, and @maz00a. The models assume an inner boundary (“photosphere”) of velocity $v_{\rm ph}(t)$ for an epoch $t$ relative to the explosion where all radiation is emitted as a blackbody continuum of temperature $T_{\rm BB}$. This is generally a good assumption for early epochs, and typically works until epochs of $3-4$ weeks after maximum for oxygen-dominated SNIc ejecta. The ejecta are described by a density distribution (an explosion model) and by a composition, which can vary with depth. The gas in the ejecta and the radiation field are assumed to be in radiative equilibrium. Photon energy packets are followed through the ejecta, where they undergo scattering events. For line scattering, a branching scheme based on transition probabilities in the Sobolev approximation allows photons to be emitted in transitions different from those in which they were absorbed. In order to account for the energy of photons that are scattered back into the photosphere (line blanketing), $T_{\rm BB}$ is adjusted in an iterative procedure to obtain the desired emergent luminosity, $L$. The emergent spectrum is obtained from a formal solution of the transfer equation using the source functions that are derived from the Monte-Carlo simulation. The code has been used for a number of SNeIc [[e.g. ]{} @mazzali13].
Early-time spectra probe the outer part of the ejecta, and can be used to establish both the abundance and the density distribution. In the case of SN2004aw the density structure above $5000\,$[[kms$^{-1}$]{}]{} was modelled based on the 1-D hydrodynamical model CO21, which was developed for SN1994I [@iwa94]. The density was scaled up by a factor of $3$ in order to achieve enough optical depth in the lines. This approach is justified by the overall spectral similarity between SN2004aw and SN1994I at similar photospheric epochs (Figure \[fig:cf94I04aw\]) and by the good match of our model spectra to the observed ones (see Figure \[fig:spectra\]). This simplistic approach is necessary because of the lack of a grid of explosion models at different masses and energies. On the other hand, rescaling is justified because CO cores of massive stars tend to have self-similar density profiles even if they differ in mass (G. Meynet, priv. comm.). This rescaling can lead to some additional uncertainty, so we are very generous with our error estimates below. Homologous expansion is assumed at all times, so the ejecta can be conveniently described using velocity as a comoving coordinate.
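With velocity as a comoving coordinate, homologous expansion implies that the density at fixed $v$ simply dilutes as $t^{-3}$, and the factor-of-3 rescaling of CO21 is a constant multiplier on top of that. An illustrative sketch (the reference density below is a made-up placeholder, not a CO21 value):

```python
# Homologous expansion: rho(v, t) = rho(v, t_ref) * (t_ref / t)**3.
# 'scale' is the constant factor (3 in the text) applied to the CO21 profile.
def density(rho_ref, t_ref, t, scale=3.0):
    """Density (g/cm^3) at the same velocity coordinate at time t."""
    return scale * rho_ref * (t_ref / t) ** 3

# Example with a hypothetical parcel: rho_ref = 1e-12 g/cm^3 at t_ref = 10 d,
# evaluated at t = 20 d: diluted by 2^3 = 8, then scaled up by 3.
rho = density(1e-12, 10.0, 20.0)
print(rho)  # 3 * 1e-12 / 8 = 3.75e-13
```

Because the scaling is time-independent, the same multiplicative factor applies at every epoch modelled below.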
As the supernova ejecta expand the line-forming region of the supernova recedes to progressively lower velocities. Subsequent epochs are therefore modelled by adding shells below the previous inner boundary while retaining the density and composition of the layers described by models of earlier epochs, following the approach called “abundance tomography” [@ste05].
Our models require the time from explosion so that the density structure can be rescaled appropriately. SN2004aw was discovered late, only 5 days before $B$ maximum, so a direct determination of the epoch of explosion is not possible. @tau06 showed that the light curves of SN2004aw are broader than those of SN2002ap, whose rise time was determined through modelling to be $\sim 10$ days [@maz02]. SN2004aw reached $V$ maximum $\sim 3$ days after $B$ maximum [@tau06]. Bolometric maximum was intermediate between these two (see Section \[sec:lc\]). We adopted for SN2004aw a rise time to $B$ maximum of 14 days. Epochs used in the models are shown in Figure \[fig:spectra\] and listed in Table \[tab:para\] along with other modelling parameters. We did not correct for time dilation caused by the small redshift ($cz = 4906\,$[kms$^{-1}$]{}) since the uncertainty in the epoch (at least $\pm 1$ day) is larger than the correction.
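The neglect of time dilation can be verified directly: the stretch factor is $1+z$, so over a typical model epoch the correction is a fraction of a day. A quick check using the recession velocity quoted above:

```python
# Check that the time-dilation correction is below the epoch uncertainty.
# cz = 4906 km/s is the recession velocity quoted in the text.
C_KMS = 299792.458                 # speed of light, km/s
z = 4906.0 / C_KMS                 # ~0.0164
stretch_days = 15.0 * z            # correction at a 15-day epoch
print(f"z = {z:.4f}, correction = {stretch_days:.2f} d")  # ~0.25 d
```

Even at the latest photospheric epoch modelled ($t = 43\,$d) the correction is $\sim 0.7\,$d, still below the $\pm 1$ day epoch uncertainty.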
![Model spectra (black lines) compared to the respective observed spectra of SN2004aw [@tau06] at different epochs (gray lines). The line labels refer to the most prominent ions causing the respective absorption features. The epochs refer to the time after the assumed explosion date. []{data-label="fig:spectra"}](Fig2_04aw_combined_annotated.eps){width="86mm"}
Figure \[fig:spectra\] shows our best-fitting model spectra ([*black lines*]{}) compared to the observed spectra ([*gray lines*]{}). This ejecta model has [$\sim$]{}$3$ M$_{\sun}$ of material above 5000 [kms$^{-1}$]{}. We also tested ejecta models with different mass in an effort to break the parameter degeneracy that affects the light curve modelling. We found that the SN2004aw spectra can be reproduced reasonably well only if the density scaling factor of CO21 is between $\sim$ 2 and $\sim$ 4. Models with larger mass have excessively high line velocities which cause the absorption features to appear too broad. In particular, the oxygen line at $\sim7400\,$[Å]{} becomes significantly too broad if too much material is present at high velocities. This line is highly saturated and therefore fairly insensitive to the O abundance within reasonable limits, meaning that the strength and shape of the absorption feature is primarily set by the density structure. The lower limit for the mass above any photosphere is constrained by the amount of material needed to generate notable P-Cygni features.
In the following paragraphs we discuss the properties of the individual models in more detail. The primary input parameters for all early-time spectral models are summarized in Table \[tab:para\].
------------- ------- ---------------------- -------------- --------------------
Date $t$ $L$ $v_{\rm ph}$ $T_{\rm BB}$
\[d\] \[erg/s\] \[km/s\] \[K\]
24 Mar 2004 15 $4.04\times 10^{42}$ 11100 $8.9 \times10^{3}$
29 Mar 2004 20 $3.65\times 10^{42}$ 9600 $8.0 \times10^{3}$
07 Apr 2004 29 $2.97\times 10^{42}$ 8100 $6.6 \times10^{3}$
14 Apr 2004 36 $2.30\times 10^{42}$ 6600 $5.9 \times10^{3}$
21 Apr 2004 43 $1.91\times 10^{42}$ 5000 $6.0 \times10^{3}$
------------- ------- ---------------------- -------------- --------------------
: Model parameters for the early-time models[]{data-label="tab:para"}
The first spectrum that we modelled is from March 24, 2004, $1\,$day after $B$-band maximum and $t=15\,$d after explosion. Fig. \[fig:spectra\]a shows a comparison of the model spectrum to the observed one. The model with $L=4.04\times10^{42}\,$[[ergs$^{-1}$]{}]{} and $v_{\rm ph}=11\,100\,$[[kms$^{-1}$]{}]{} reproduces most of the observed features although the re-emission features are not strong enough in some places. All Ca II absorptions, the H&K feature at [$\sim$]{}$3750\,$[Å]{} and the IR triplet at [$\sim$]{}$8100\,$[Å]{}, are somewhat weaker in the model than in the observation. To fit these features, however, a much higher Ca abundance would be required which cannot be accommodated by the later spectra. Therefore it is likely that the model at this epoch overestimates the ionization, resulting in too much Ca III at the expense of Ca II. The composition used to model this spectrum includes equal parts of O and C ([$\sim$]{}$45$ per cent by mass), $7$ per cent Ne, a total of $1.3$ per cent intermediate-mass elements (Mg, Si, S, Ar) including $4\times10^{-4}$ per cent Ca. Additionally, we use $0.062$ per cent Fe-group elements consisting of $0.05$ per cent [[$^{56}\rm{Ni}$]{}]{}, $0.012$ per cent “stable” Fe (i.e., Fe not produced via the [[$^{56}\rm{Ni}$]{}]{} decay chain) and traces of Ti and Cr.
Figure \[fig:spectra\]b shows the spectrum on March 29, 2004, compared to the model spectrum. The epoch, $6\,$days after maximum light, corresponds to $t=20\,$d after the assumed explosion date. The model requires a luminosity of $L= 3.65\times10^{42}\,$[ergs$^{-1}$]{}. The pseudo-photosphere is located at $v_{\rm
ph}=9600\,$[[kms$^{-1}$]{}]{} with a temperature of $T_{\rm BB}=8060\,$K for the underlying blackbody. The composition is similar to the previous model although less stable Fe is needed to match the Fe features because more [[$^{56}\rm{Ni}$]{}]{} has decayed to Fe.
The next epoch we modelled is 2004 April 7, 15 days after maximum light and $t=29\,$d after explosion (Fig. \[fig:spectra\]c). The luminosity used in this model is $L=2.97\times10^{42}\,$[ergs$^{-1}$]{} at a photospheric velocity of $v_{\rm ph}=8100\,$[[kms$^{-1}$]{}]{}. The resulting temperature of the photosphere is $T_{\rm BB}=6640$ K. The shell near the photosphere still contains [$\sim$]{}$84$ per cent C and O but somewhat more intermediate-mass elements, and $1.1$ per cent [[$^{56}\rm{Ni}$]{}]{}. This spectrum appears to be much redder than the earlier ones. In addition to a lower temperature, blocking by a large number of iron-group lines suppresses the UV and blue flux in this and later spectra. The fit to the observed spectrum is acceptable although the double-peaked re-emission features between $5400$ and $6700\,$[Å]{} are too weak in the model. At this and the subsequent epochs the models also fail to reproduce the absorption features around $6800$ and $7200\,$[Å]{}. Based on the models it cannot be uniquely determined whether this discrepancy is due to missing elements in the composition or to an incorrectly determined ionization balance. The assumption of a thermal photosphere additionally overestimates the flux in this wavelength region.
The following spectrum (Fig. \[fig:spectra\]d) was taken on 2004 April 14, 22 days after maximum light and $t=36\,$d after explosion. The luminosity used here is $L=2.30\times10^{42}\,$[ergs$^{-1}$]{}. The photospheric velocity is $v_{\rm ph}=6600\,$[[kms$^{-1}$]{}]{}, which leads to a blackbody temperature of $T_{\rm BB}=5950\,$K. The composition used for this fit is still very similar to the previous epoch, with a slightly higher [[$^{56}\rm{Ni}$]{}]{} mass fraction (1.6 per cent). The oxygen feature at $7400\,$[Å]{} appears stronger in the model than in the observation. Unfortunately, the observed spectra show an atmospheric absorption feature at $7500\,$[Å]{} which happens to coincide with this feature. We decided to use the observed spectrum in which this feature has not been removed because it makes the uncertainty in the shape of the absorption more apparent.
The last epoch we modelled with the photospheric method (Fig. \[fig:spectra\]e) is 20 April 2004, 28 days after maximum light and $t=43\,$d after explosion. The model requires a change in the abundance pattern. C and O are reduced to $10$ and $35$ per cent, respectively, Si is increased to $15$ per cent and we require $18$ per cent of [[$^{56}\rm{Ni}$]{}]{}. The inner boundary of this model is located at $v_{\rm ph}=5000\,$[[kms$^{-1}$]{}]{}. The overall shape of the spectrum is well reproduced although the velocity of some absorptions is underestimated by the model, indicating that the change in composition may actually occur at somewhat higher velocities than assumed here. The uncertainty in the shape of the density structure in this transition region, however, makes it difficult to find a better match. To improve the fit we used an intermediate shell at $5800\,$[[kms$^{-1}$]{}]{} which allows us to smooth the transition somewhat, although the lack of an observed spectrum between Apr 14 and 21 leaves this shell relatively unconstrained.
Models for the nebular spectra {#sec:neb}
==============================
More information about the properties of the deeper layers of the ejecta can be obtained from nebular-epoch spectra. At late phases, when the optical depth of the ejecta has dropped to below 1, the innermost ejecta produce emission lines, whose strengths and profiles can shed light on the core-collapse process.
The nebular spectra of Type Ic SNe are usually dominated by a strong \[OI\] $\lambda\lambda$6300, 6364 line, and SN2004aw is no exception. In the case of the GRB/SN1998bw, the nebular spectra were used to infer the aspherical distribution of different elements in the ejecta and hence the aspherical nature of the explosion. The narrow \[OI\] $\lambda\lambda$6300, 6364 line and the contrasting broad \[FeII\] emission suggested a polar-type explosion viewed near the axis [@maz01; @mae02]. For SN2003jd, which was luminous but did not show a GRB, the double-peaked profile of the \[OI\] line reveals a disc-like distribution of oxygen in an explosion similar to that of SN1998bw but viewed closer to the equatorial plane [@maz05].
In contrast, other SNeIc show no sign of asphericity in their late-time profiles. Examples include a low-energy SN such as SN1994I [@sau06] and a narrow-lined SN with higher energy such as SN2006aj [@mazzali07a].
The \[OI\] $\lambda\lambda$6300, 6364 emission profile in SN2004aw is remarkably similar to that of SN1998bw (Figure \[fig:cf04aw98bw02ap\]), indicating that SN2004aw must have been significantly aspherical. A detailed comparison shows that SN2004aw has a broader emission base, and the narrow core that characterised SN1998bw emerges only at low velocities. This suggests a similar morphology for the two SNe, but more extreme in SN1998bw, and a similar viewing angle. Below we attempt to model the nebular spectrum of SN2004aw in order to define its properties.
![The \[OI\] $\lambda\lambda$6300, 6364 emission line in the spectrum of SN2004aw, $235\,$d after maximum (blue line) compared to the same line in the spectrum of the GRB/SN1998bw obtained on 21 May 1999, 388 rest-frame days after the explosion (red) and that of SN2002ap obtained on 16 Sept 2002, 229 days after explosion (green). []{data-label="fig:cf04aw98bw02ap"}](Fig3_neb_04aw_98bw_02ap_comp.eps){width="\columnwidth"}
Only the first nebular spectrum of SN2004aw, obtained on 4 Nov 2004, $\sim 235$ days after maximum, has a sufficiently high signal-to-noise ratio that several lines can be detected. The spectrum is shown in Figure \[fig:neb\] along with synthetic models. The emission lines that can be identified are, from blue to red: Mg I\] $\lambda$4571; \[FeII\] multiplets near 5200Å, the strength of which is related to the peak luminosity of the SN; a weak Na I D line at 5890Å; the very strong \[OI\] $\lambda\lambda$6300, 6364 emission; the strong line near 7250Å, which contains \[CaII\] $\lambda\lambda$7291, 7320 blended with other lines; a weaker Ca II IR triplet near 8600Å, which also contains some carbon emission.
We modelled the spectrum using a code [@maz01] that computes $\gamma$-ray and positron deposition following the decay of $^{56}$Ni to $^{56}$Co and hence to $^{56}$Fe, and then balances the collisional heating induced by the thermalisation of the fast particles produced by the deposition with cooling via line emission in non-LTE, following the prescriptions of @axe80. The code can be used in two modes, as discussed in @mazzali07a: a one-zone version simply assumes a nebula of constant density, with a sharp outer boundary at a velocity the value of which is selected at input, as are the mass of the nebula and the elemental abundances within it. In a more sophisticated version the code still assumes spherical symmetry but allows for a radial variation of density and composition, and does not require an outer boundary. In this version the diffusion of $\gamma$-rays is followed with a Montecarlo method [@capp97]. This version is useful to test the prediction of explosion models as well as to investigate the radial distribution of mass and abundances in detail when the observed line profiles are sufficiently well determined.
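The energy input driving the nebular emission follows the $^{56}$Ni $\rightarrow$ $^{56}$Co $\rightarrow$ $^{56}$Fe chain. As an illustrative sketch (not the deposition code described above), the instantaneous decay power for a given $^{56}$Ni mass can be written with the standard e-folding times and specific heating rates from the literature, assuming full trapping of $\gamma$-rays and positrons, so it is an upper limit at nebular epochs:

```python
import math

# Illustrative 56Ni -> 56Co -> 56Fe decay power (erg/s), assuming full
# trapping of gamma-rays and positrons (an upper limit at nebular epochs).
# Constants are standard literature values, not taken from the text.
MSUN = 1.989e33          # g
TAU_NI = 8.8             # 56Ni e-folding time, days
TAU_CO = 111.3           # 56Co e-folding time, days
EPS_NI = 3.9e10          # erg/s per g of 56Ni
EPS_CO = 6.78e9          # erg/s per g of initial 56Ni, from 56Co decay

def decay_power(m_ni_msun, t_days):
    """Total radioactive power at t_days for an initial 56Ni mass in Msun."""
    m = m_ni_msun * MSUN
    ni = math.exp(-t_days / TAU_NI)
    co = math.exp(-t_days / TAU_CO) - ni
    return m * (EPS_NI * ni + EPS_CO * co)

# 0.2 Msun of 56Ni (the nebular-model value in the text) at t = 250 d:
print(f"{decay_power(0.2, 250.0):.2e} erg/s")  # ~2.9e41 erg/s
```

At 250 days the $^{56}$Ni contribution has vanished and essentially all of the heating comes from $^{56}$Co decay; the actual deposited fraction is computed by the Montecarlo $\gamma$-ray transport in the code.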
In order to determine the global properties of the nebular spectrum of SN2004aw we began by constructing a first, coarse model using the one-zone approach, adjusting the abundances to fit the shape of the prominent emission features. The synthetic spectrum we computed is indicated by a blue dashed line in Figure \[fig:neb\], where it is compared to the observed spectrum (light grey). The darker line represents the observed spectrum smoothed to emphasise emission features, especially in the blue. Assuming a typical rise time of 14 days, we used an epoch of 250 days after explosion for the calculation. The outer velocity adopted in the calculation that gives a reasonable fit to most emission lines is $5000\,$[kms$^{-1}$]{}. This shows that the nebular spectra originate in a region deeper than the innermost layers studied by means of early-phase spectroscopy, but the separation between the two regions is small. The mass contained within $5000\,$[kms$^{-1}$]{} in the model is $\sim 1.8$[M$_{\odot}$]{}. The dominant element is oxygen (1.3[M$_{\odot}$]{}). The mass of [$^{56}\rm{Ni}$]{} required to reproduce the Fe lines and at the same time to excite all other transitions is 0.17[M$_{\odot}$]{}. This is larger than in the low-mass SNIc 1994I, and similar to energetic, broad-lined SNeIc such as 1997ef [@maz00b] or 2006aj [@maz06b]. The carbon mass is small, only 0.2[M$_{\odot}$]{}, as determined by the strength and shape of the carbon-dominated feature near $8500\,$[Å]{}. A small C/O ratio is quite common in SNeIc. The Mg mass is also small, 0.004[M$_{\odot}$]{}, despite the relative strength of Mg I\] $4571\,$Å.
The blue dashed line in the blow-up in Fig. \[fig:neb\] shows in detail the \[OI\] line in the one-zone model. When the observed line profile is viewed in detail it is clear that it has a composite structure. It shows a broad base, which can be described by a symmetric emission with limiting velocity $5000\,$[kms$^{-1}$]{}, and a narrow core, of width $\sim 2000$[kms$^{-1}$]{}, superimposed on it. As we showed above, it is similar to the profile observed in SN1998bw. A similar type of profile was also observed in the BL-SNIc 2002ap, and it may be interpreted as a signature of asphericity [@mazzali07b]. As discussed in that paper, the profile can be produced by the superposition of a narrow emission core and either a broad, flat-topped profile, which can originate in a shell-like distribution of oxygen, or a double-peaked profile such as is expected from an equatorial distribution of oxygen viewed on or close to the equatorial plane [@maz05]. The shell+core configuration would be globally spherically symmetric, while the disc+core one would be aspherical. It is not possible to distinguish between these two scenarios based only on the profile of the \[OI\] line. In the case of SN1998bw, the simultaneous observation of broad \[FeII\] emission lines favoured the disc+core solution. In the case of SN2004aw, the Fe lines are not sufficiently well observed for us to be able to determine which scenario is favoured. Still, the similarity of the \[OI\] profile suggests that the inner parts of SN2004aw behaved like SN1998bw. The \[OI\] emission profile in SN2002ap was also similar. Interestingly, the \[OI\] line in SN2002ap was the broadest of the three SNe. The broad emission base suggests that the intermediate-velocity layers ($v \sim 5000$-$10\,000$[kms$^{-1}$]{}) were more spherical in SN2002ap than in either SN1998bw or SN2004aw. The weaker central emission core also indicates that any central density enhancement was less extreme.
An aspherical distribution of matter requires a detailed model of the explosion. Here we used our stratified code to test the spherically symmetric scenario. We used the density and abundance distributions obtained from the modelling of the early-time spectra at velocities down to $5000\,$[kms$^{-1}$]{}, the velocity of the photosphere of the last of the early-time spectra (20 April 2004). Below that velocity, we freely adjusted both the density and the abundances, trying to optimise the detailed fit of the line profiles, and in particular that of \[OI\] $6300, 6364\,$Å and its narrow core.
![The observed nebular spectrum (light gray line), $235\,$d after maximum, compared to the model spectra. The one-zone model is indicated by the blue dashed line, the solid red line represents the improved multi-zone model. For a better comparison we also show a smoothed version of the observed spectrum in dark gray. The inset shows a blow-up of the \[OI\] $\lambda\lambda$6300, 6364 emission. []{data-label="fig:neb"}](Fig4_2004aw_neb.eps){width="\columnwidth"}
The solid red line in Figure \[fig:neb\] shows the multi-zone synthetic spectrum. The narrow core has a characteristic velocity of $\sim 2000\,$[kms$^{-1}$]{}, and there is a clear discontinuity in line emission at velocities between 2000 and $4000\,$[kms$^{-1}$]{}. In order to reproduce such a feature, it was necessary to introduce a discontinuity in the density profile. The density decreases slightly below $4000\,$[kms$^{-1}$]{}, reaching a minimum at $3000\,$[kms$^{-1}$]{}, and then it increases sharply again in deeper layers. This distribution places a core containing $\approx 0.25$[M$_{\odot}$]{} of material below $3000\,$[kms$^{-1}$]{}. The composition of this core must be dominated by oxygen if the sharp peak of the \[OI\] line is to be reproduced, but it must also contain [$^{56}\rm{Ni}$]{} in order to excite the oxygen. The one-zone model already suggests that a large fraction of the [$^{56}\rm{Ni}$]{} is located at $v < 5000\,$[kms$^{-1}$]{}. As a result, the [$^{56}\rm{Ni}$]{} distribution is smooth, peaking at $4000$-$5000\,$[kms$^{-1}$]{}, but the distribution of oxygen is bimodal: the O abundance is large outside of $6000\,$[kms$^{-1}$]{}, and again below $3000\,$[kms$^{-1}$]{}. The density structure used to fit the \[OI\] line also produces reasonable fits to the other features, which however are quite noisy. The nebular model includes a total of $0.2 M_{\sun}$ of [$^{56}\rm{Ni}$]{}. The total ejected mass is $3.9 M_{\sun}$, of which 0.85[M$_{\odot}$]{} are located at $v < 5000\,$[kms$^{-1}$]{}.
The density distribution that we finally derived is shown in Figure \[fig:tomography\]. It shows the sharp increase of density at the lowest velocities, indicating the presence of an inner region dominated by oxygen.
Classical, one-dimensional explosion models do not predict the presence of any material below a minimum velocity, which represents the position of the “mass-cut”: material below this mass cut forms the compact remnant and is not ejected. Light curve studies had already suggested the presence of an inner dense core of material in some SNeIc, in particular the massive and energetic ones linked to gamma-ray bursts [@Maeda2003]. The nebular spectra of both SNe2004aw and 1998bw indicate that the inner region is dominated by oxygen, a constituent of the stellar core, rather than by products of nucleosynthesis. The most likely explanation for this distribution of mass and abundances is that the low-velocity material was ejected in the low-energy part of an aspherical explosion. In the case of SN1998bw the entire \[OI\] profile is sharply peaked, indicating a grossly aspherical explosion, while in the case of SN2004aw the sharp component of the emission is narrower, suggesting that the asphericity affected only the innermost parts of the ejecta.
![\[fig:composition\] The density (upper panel) and composition structure (lower panel) we used to model SN 2004aw. The nebular spectrum probes the inner part of the ejecta out to [$\sim$]{}$7000\,$[[kms$^{-1}$]{}]{}. The lowest inner boundary for the photospheric spectra is located at $5000\,$[[kms$^{-1}$]{}]{}. []{data-label="fig:tomography"}](Fig5_04aw-comp-density.eps){width="\columnwidth"}
Bolometric Light Curve {#sec:lc}
======================
Construction of the Bolometric Light Curve
------------------------------------------
We evaluated a pseudo-bolometric light curve of SN2004aw as follows. The magnitudes reported in Table 2 of @tau06 were dereddened using the combined Galactic plus local reddening $E(B-V) = 0.37$ and the extinction curve of @Cardelli89, and converted to fluxes according to @fukugita95. Note that those magnitudes are [*all*]{} corrected for the contribution of the host galaxy, although this was evaluated with different methods for different telescopes: the KAIT magnitudes were obtained via aperture photometry on template-subtracted images, while for all the others PSF photometry was used within SNOOPY, which includes a correction for the local background.
The light curves in the $JH$ and $K$ bands cover a much more limited time interval than the $UBVRI$ ones; therefore, we assumed that their temporal behaviour at epochs where observations are not available follows that of the $I$-band. Similarly, we assumed that the $U$- and $B$-band fluxes at the penultimate and last epochs follow the same general temporal trend as the other optical bands.
The $UBVRIJHK$ monochromatic light curves were splined with a time resolution of 1 day. The monochromatic fluxes in the various bands were linearly interpolated, and extrapolated redwards and bluewards of the $K$- and $U$-band filter boundaries, respectively. Broad-band spectral energy distributions were thus constructed at each epoch and integrated in flux over the range 3000-24000Å. Any contribution at wavelengths outside this range was ignored, and is unlikely to exceed 5-10 per cent. The resulting pseudo-bolometric luminosities corresponding to the epochs of the available photometry are plotted in Figure \[fig:LC\].
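The integration step can be sketched as follows. This is a minimal illustration of the procedure described above, not the actual pipeline: all flux values are made-up placeholders (not SN2004aw data), and the boundary extrapolation is simplified to a flat extension.

```python
import math

# Minimal sketch of a pseudo-bolometric integration: at one epoch,
# monochromatic fluxes (erg/s/cm^2/A) at the effective wavelengths of
# the UBVRIJHK bands are integrated over 3000-24000 A with the
# trapezoidal rule, then scaled to a luminosity.
wave = [3600, 4400, 5500, 6600, 8000, 12500, 16500, 22000]   # A
flux = [1.0e-15, 2.0e-15, 2.5e-15, 2.2e-15, 1.6e-15,
        0.8e-15, 0.5e-15, 0.3e-15]                           # placeholders

# Extend (here: flat) to the integration boundaries at 3000 and 24000 A.
w = [3000.0] + [float(x) for x in wave] + [24000.0]
f = [flux[0]] + list(flux) + [flux[-1]]

# Trapezoidal integral of the SED over wavelength.
fbol = sum(0.5 * (f[i] + f[i + 1]) * (w[i + 1] - w[i])
           for i in range(len(w) - 1))

d_cm = 68.2 * 3.086e24        # ~68 Mpc, from the adopted distance modulus
L = 4.0 * math.pi * d_cm ** 2 * fbol
print(f"L = {L:.2e} erg/s")
```

In the actual analysis the missing NIR epochs are filled in by assuming the $I$-band behaviour, and errors are propagated in quadrature as described below.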
In order to estimate the uncertainties, the errors associated with the optical and NIR photometry were propagated by adding them in quadrature. For the epochs where only optical photometry was available, the errors on the NIR photometry were estimated based on the errors at the epochs when data were available.
The resulting light curve is consistent with that reported in @tau06 when account is taken of the somewhat different treatment of flux at the boundaries of the wavelength range adopted for integration.
Light Curve Model
-----------------
We compare the bolometric light curve obtained as discussed above with models computed with the SN Montecarlo light curve code discussed first in @capp97 and expanded in @maz00a. Based on a density structure and a composition, the code computes the propagation and deposition of gamma-rays and positrons (using the same description as the nebular spectrum code), transforms the deposited energy into radiation and follows the propagation of the optical photons in the expanding ejecta. Although it is based on simple assumptions about the opacity [@maz00a; @hoeflich95] it does yield a reasonable representation of the bolometric light curve [[e.g. ]{} @mazzali13].
A synthetic bolometric light curve was computed for the ejecta density and abundance distribution that was obtained from spectral modelling as outlined above. Above 5000[kms$^{-1}$]{} the model is a scaled-up version of model CO21, which gave good fits to SN1994I. The photospheric spectra of SN2004aw resemble those of SN1994I at similar epochs (with respect to the $B$-band maximum), apart from having higher line velocities. This suggests a similar density structure and a similar mass-to-kinetic energy ratio for both SNe in the part of the ejecta that is responsible for the spectral lines in the photospheric phase. Below 5000[kms$^{-1}$]{}, however, we adopted the density derived in Sec. \[sec:neb\] to reproduce the nebular spectra. The resulting explosion model has an ejected mass of $\sim 4$[M$_{\odot}$]{}, an energy [$E_{\rm K}$]{}$\sim 4 \times 10^{51}$erg, and a [$^{56}$Ni]{} mass of $\approx 0.2$[M$_{\odot}$]{}.
![\[fig:LC\]The synthetic light curve computed using the spectroscopic results for SN2004aw (blue line) compared to the pseudo-bolometric light curve of SN 2004aw (black squares). The optical data used to construct the pseudo-bolometric light curve are shown as green symbols, while the NIR photometry is shown as red symbols. The inset is a blow-up of the peak phase.](Fig6_sn2004aw_LC_model.eps){width="88mm"}
Figure \[fig:LC\] shows the synthetic bolometric light curve compared to the observed one constructed as described above. The final bolometric light curve uses as reference the time of $B$ maximum, since the time of the explosion is not accurately known. Bolometric maximum occurs 1-3 days after $B$-band maximum according to our calculation. A pre-discovery limit was obtained on 2004 Mar 13, which precedes $B$-band maximum by $\sim$ 10 days. The light curve of SN2004aw is relatively broad for a SNIc. The rise time of the $B$-band light curve is $\sim 14$ days for the GRB/SN1998bw [@gal98] and $\sim 9$ days for the XRF/SN2006aj [@pia06], whose explosion dates are known from the accompanying GRBs. In previous theoretical studies of Type Ic SNe the assumed (or model-constrained) rise time was $\sim 9-11$ days for SN1994I [@iwa94; @bar99; @sau06], $\sim 9$ days for SN2002ap [@maz02], and as much as 20 days for SN1997ef [@maz00a]. In the case of SN2004aw an observed rise time of 14 days is used, which matches the epoch of bolometric maximum in the synthetic light curve.
Although the model does not capture all undulations in the observed light curve, it does reproduce its overall flux level and the time of maximum, indicating that the model that was used has reasonable values of mass, energy, and [$^{56}$Ni]{}mass. We take this as a confirmation of the spectroscopic results. A possible reason for the discrepancies is the approximate treatment of the (gray) opacity, but this was not an issue in other cases [[e.g. ]{} @mazzali13]. Another possible source of uncertainty is that we based our calculations on a rescaled rather than a real explosion model, and this rescaling was significant. Thus our model may not fully capture the properties of a significantly more massive explosion. We therefore use generous error bars on our estimates of mass and velocity.
We can conservatively estimate that, in order to reproduce the bolometric light curve of SN2004aw, the ejecta mass should be in the range $\sim 3-5$[M$_{\odot}$]{}. For smaller masses we can expect the peak to occur too early, while for larger masses the opposite would be the case.
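The sensitivity of the peak epoch to the ejecta mass can be illustrated with the standard photon-diffusion scaling, $t_{\rm peak} \propto \kappa^{1/2} M_{\rm ej}^{3/4} E_{\rm K}^{-1/4}$. This is an Arnett-type estimate, not the Monte Carlo code used in the paper; the anchor values below simply restate the best-fit model, with the opacity held fixed.

```python
# Photon-diffusion scaling of the light-curve peak time
# (Arnett-type estimate, not the Monte Carlo light curve code):
#   t_peak ∝ kappa^(1/2) * M_ej^(3/4) * E_K^(-1/4)
def peak_time_days(m_ej_msun, e_k_1e51, t_ref=14.0, m_ref=4.0, e_ref=4.0):
    """Scale from a reference model; the anchor (4 Msun, 4e51 erg, 14-day
    rise) restates the best-fit model above."""
    return t_ref * (m_ej_msun / m_ref) ** 0.75 * (e_k_1e51 / e_ref) ** -0.25

# At the fixed ratio E_K/M_ej ~ 1 [1e51 erg / Msun] inferred for SN 2004aw,
# a 3 Msun model peaks ~2 days earlier and a 5 Msun model ~1.7 days later:
for m in (3.0, 4.0, 5.0):
    print(m, round(peak_time_days(m, m), 1))
```

A shift of this size is comparable to the quoted 1-3 day offset between bolometric and $B$-band maximum, which is why masses outside $\sim 3-5$[M$_{\odot}$]{} would visibly displace the model peak.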
Discussion and Conclusions {#sec:disc}
==========================
Our models suggest that SN2004aw was the explosion of a massive C+O star that ejected $\sim 4$[M$_{\odot}$]{} of material. If we consider a range of possible remnant masses, from a neutron star with mass 1.5[M$_{\odot}$]{} up to a black hole with mass 3[M$_{\odot}$]{}, this leads to an exploding CO core of $\sim 4.5-8$[M$_{\odot}$]{}. This points to a progenitor star of $M_{\rm ZAMS}\sim 23-30$[M$_{\odot}$]{}, in the context of the pre-supernova evolution models of @nom88 and @has95 (see also @woo86). The estimate of $M_{\rm ZAMS}$ may be modified if the large mass-loss required to remove the H and He envelopes is taken into consideration. For massive single stars, this strongly depends on the uncertain, mass-dependent mass-loss rate in the Wolf-Rayet stage. For example, @woo93 [@woo95] found that all their models with $M_{\rm ZAMS}\ga
35$[M$_{\odot}$]{} evolve to a narrow final C+O core mass range of $\sim 2-4$[M$_{\odot}$]{} which may be marginally compatible with SN2004aw. In contrast, @pol02 obtained a final C+O core mass in excess of 10[M$_{\odot}$]{} for $M_{\rm ZAMS} >
30$[M$_{\odot}$]{} at solar metallicity, adopting updated mass-loss rates, while models with $M_{\rm ZAMS} <30$[M$_{\odot}$]{} retain a substantial He envelope before explosion. Near-infrared data of SN2004aw [@tau06] clearly rule out the presence of significant amounts of He [@hachinger12]. SN2004aw may have been the outcome of common envelope evolution in a massive close binary, a scenario first proposed by @nom94 [@nom95] for Type Ic SNe in general and SN1994I in particular. This was developed for low-mass SNeIc but it may also work at large masses. Alternatively, binary evolution with stable mass transfer may lead to a C+O progenitor, but this mechanism may work only at lower masses than what is required for SN2004aw [@yoon2015]. At high masses, Wolf-Rayet wind mass-loss may strip stars of their outer H and He envelopes and produce the progenitors of massive SNeIc.
The explosion of SN2004aw was moderately energetic, resulting in a kinetic energy ${\ensuremath{E_{\rm K}}}\sim 4\times 10^{51}$erg. The ratio of kinetic energy to ejected mass is $\sim 1\,[10^{51}$erg/[M$_{\odot}$]{}\], similar to that of low-energy explosions such as SN1994I [@sau06] and significantly smaller than that of low-mass hypernovae such as SN2002ap [@maz02]. SN2004aw is often referred to as a broad-lined SNIc, even though its early-time spectra are quite different from those of hypernovae such as SN2002ap [@tau06] and very similar to those of a low-mass SNIc like SN1994I. In Figure \[fig:Sivel\] we examine the behaviour of the model photospheric velocity as a function of time compared to other SNeIb/c with similar modelling. We do not use observed line velocities, as those measurements can be highly uncertain because of line blending and broadening, and because line absorption happens at different velocities for different lines. SN2004aw has no early information, but at the times when it was observed the photospheric velocity follows the behaviour of SNe like 2006aj or 2008D, which do not have a very large ${\ensuremath{E_{\rm K}}}/$[$M_{\textrm{ej}}$]{}. SN2004aw also follows the behaviour of SN1994I, although it does so at higher velocities.
![\[fig:Sivel\]Model photospheric velocities in SN2004aw (pentagons) and in other SNeIb/c.](Fig7_allSNeIbc_vel_connect_04aw.eps){width="88mm"}
Could the earlier spectra of SN2004aw have shown broad lines? We mentioned above that the presence of broad lines is mostly the result of the density slope in the outermost layers. We can therefore modify that slope and test what the spectrum would have looked like at an epoch preceding the observed ones, making sure that the modification does not affect the earliest available spectrum.
Figure \[fig:densityslope\] shows the density profiles we have used. We modified the density only above $v = 25,000$[kms$^{-1}$]{} in order not to affect the models corresponding to epochs where observations exist showing that the lines are not broad. The modified models have different density power-law indices above that velocity, as marked in the figure. Figure \[fig:mockd7spec\] shows the spectra at 7 days after explosion. Spectra computed with flatter slopes show increasingly broad lines. These lines eventually blend, reducing the number of observed features, as described in @Prentice2017a. Figure \[fig:mockd15spec\] shows the spectra at 15 days after explosion. At this epoch all spectra are basically the same, showing that changing the outer density slope at high velocity has no impact at later times.
![\[fig:densityslope\]Modified density profiles used to test the earliest properties of SN2004aw.](Fig8_CO21x3_n.eps){width="88mm"}
![\[fig:mockd7spec\]Synthetic day 7 spectra obtained with the different input model density profiles used to test the earliest properties of SN2004aw. ](Fig9_specseq_d7_rho.eps){width="88mm"}
![\[fig:mockd15spec\]Synthetic day 15 spectra obtained with the different input model density profiles used to test the earliest properties of SN2004aw.](Fig10_specseq_d15_rho.eps){width="88mm"}
The models we computed all have [$M_{\textrm{ej}}$]{}$\approx 4\,$[M$_{\odot}$]{}, but vary in [$E_{\rm K}$]{} from $4 \times 10^{51}$erg for model CO21x3, which has the steepest outer density slope, to $6.6 \times 10^{51}$erg for the model with outer density slope power-law index $n=2$. The ratio [$E_{\rm K}$]{}/[$M_{\textrm{ej}}$]{} therefore ranges from $\approx 1$ to $\approx 1.6$, which spans some of the range of observed SNIb/c properties, although it does not reach the highest values. If [$E_{\rm K}$]{}/[$M_{\textrm{ej}}$]{} were larger, the spectrum at 15 days would be affected. This sets the uncertainty on the [$E_{\rm K}$]{} estimate for SN2004aw, and it is an uncertainty that should be applied to all narrow-line SNeIb/c with no early data. It also affects the detailed classification of the SN, as the number of lines, which is 6 at maximum, can be anything between 4 and 6 one week earlier [@Prentice2017a].
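The trend of [$E_{\rm K}$]{} with outer density slope can be reproduced with a toy integration over a homologously expanding broken power-law profile. This is a schematic illustration with arbitrary normalisation and an assumed inner slope, not the actual CO21x3 profiles; only the direction of the trend mirrors the models above.

```python
import numpy as np

def mass_and_energy(n_outer, n_inner=4.0, v_break=25_000.0,
                    v_min=1_000.0, v_max=60_000.0):
    """Integrate M and E_K for a broken power-law density rho ~ v^-n in
    homologous expansion (arbitrary normalisation; only ratios matter)."""
    v = np.linspace(v_min, v_max, 20000) * 1.0e5       # cm/s
    vb = v_break * 1.0e5
    rho = np.where(v <= vb, (v / vb) ** -n_inner, (v / vb) ** -n_outer)
    dm_dv = 4.0 * np.pi * v ** 2 * rho                 # shell mass per dv
    m = np.trapz(dm_dv, v)
    e_k = 0.5 * np.trapz(dm_dv * v ** 2, v)
    return m, e_k

m_steep, e_steep = mass_and_energy(n_outer=8.0)   # steep outer slope
m_flat, e_flat = mass_and_energy(n_outer=2.0)     # flat outer slope
# Flattening the slope above 25,000 km/s changes the mass by only a few
# per cent but raises E_K substantially, hence a larger E_K/M_ej:
print(round(m_flat / m_steep, 2), round(e_flat / e_steep, 1))
```

This is why the high-velocity tail can change [$E_{\rm K}$]{} appreciably while leaving [$M_{\textrm{ej}}$]{}, and hence the light-curve width, essentially untouched.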
Given these uncertainties, we can conservatively estimate that SN2004aw ejected $4 \pm 1\,$[M$_{\odot}$]{} of material with a kinetic energy [$E_{\rm K}$]{}$\approx (4.5 \pm 1.5) \times 10^{51}$erg.
The value of $E_{\rm K}$ determined from modelling, although not as extreme as the $\sim 10^{52}$ erg of hypernovae, is probably too large for the canonical “delayed neutrino-heating” SN mechanism taking place in a proto-neutron star [@jan07]. In fact, the ZAMS mass range of $\sim 23-30$M$_{\sun}$ spans the putative upper boundary for neutron star formation at the end of core collapse, and a black hole may be preferred [@fry99]. However, doubts have been cast on the sharpness of this separation [@ugliano12]. If the progenitor core did collapse to a black hole, the SN explosion could have been set off by a central engine comprising the black hole and an accretion disk, as proposed in the collapsar model, provided that the progenitor star was rotating rapidly [@mac99]. On the other hand, if the result of the collapse was a neutron star, the explosion could have been aided by a magnetar, which may have injected energy into the SN ejecta, contributed to the synthesis of [$^{56}$Ni]{}, or both [@maz06a; @mazzali14]. @fryyou07 simulated the core collapse of a 23 M$_{\sun}$ star and found a long delay to explosion, which may allow time for large magnetic fields to develop in the proto-neutron star.
Adding SN2004aw to the plots showing the main properties of SNeIb/c that have been modelled in detail (Figure \[fig:KEvMej\]), we see that it confirms the trend for increasing [$E_{\rm K}$]{} with increasing ejected mass. This plot is affected for some SNe by the presence of an outer He shell (SNeIb) as well as some H (SNeIIb). We can however use the mass of the CO core to reconstruct the ZAMS mass of the progenitor. Such a plot (Figure \[fig:KEvM\]) shows a more linear relation between [$E_{\rm K}$]{} and progenitor mass, although there seems to be some spread in the value of [$E_{\rm K}$]{} at progenitor masses between $\sim 20$ and 30[M$_{\odot}$]{}. Among SNe in this mass range that have been studied, SN2004aw has one of the smallest ratios of [$E_{\rm K}$]{} per inferred progenitor mass, and it is in fact the only one that does not show broad lines in its spectra.
![\[fig:KEvMej\]Kinetic energy vs. ejected mass in SN2004aw (circles) and in other SNeIc. ](Fig11_KEvMej_04aw.eps){width="91mm"}
![\[fig:KEvM\]Kinetic energy vs. inferred progenitor mass in SN2004aw (circles) and in other SNeIc. Colour coding as in Fig. \[fig:KEvMej\].](Fig12_KEvMass_04aw.eps){width="92mm"}
![\[fig:NivKE\][$^{56}$Ni]{} mass vs. kinetic energy in SN2004aw (circles) and in other SNeIc. ](Fig13_NivKE_04aw.eps){width="92mm"}
![\[fig:NivMej\][$^{56}$Ni]{} mass vs. ejected mass in SN2004aw (circles) and in other SNeIc. Colour coding as in Fig. \[fig:KEvMej\].](Fig14_NivMej_04aw.eps){width="92mm"}
![\[fig:NivM\][$^{56}$Ni]{} mass vs. inferred progenitor mass in SN2004aw (circles) and in other SNeIc. Colour coding as in Fig. \[fig:KEvMej\].](Fig15_NivMass_04aw.eps){width="92mm"}
An amount of $\sim 0.20$ M$_{\sun}$ of $^{56}$Ni was ejected to power the light curve of SN2004aw. Allowing for the uncertainties in the adopted distance modulus and total reddening [@tau06], this value may vary from $\sim 0.15$ to 0.25 M$_{\sun}$. It lies between the $\sim 0.3-0.6$ M$_{\sun}$ of $^{56}$Ni of the three hypernovae connected with classical long GRBs (SNe 1998bw, 2003dh, and 2003lw; @nak01 [@den05; @maz06b]) and the $\sim 0.07-0.1$ M$_{\sun}$ of the low-energy SNIc 1994I [@iwa94] and of SN2002ap [@maz02; @tom06]. On the other hand, it is comparable to the values inferred for the Type Ic SN2006aj, which was accompanied by X-ray Flash 060218, and for the non-GRB hypernova SN1997ef. SN2006aj has been modelled to have had [$E_{\rm K}$]{}$\approx 2\times 10^{51}$erg and was suggested to have been a magnetar-induced explosion of a star of $M_{\rm ZAMS}\sim 20$ M$_{\sun}$ [@maz06a], while SN1997ef had $M_{\rm ZAMS}\sim 35$ M$_{\sun}$ and [$E_{\rm K}$]{}$\approx 10^{52}$erg [@iwa00; @maz04]. The production of explosively synthesized [$^{56}$Ni]{} appears broadly to increase with explosion energy (Figure \[fig:NivKE\]) as well as with ejected mass (Figure \[fig:NivMej\]), and the relation gets tighter if the inferred progenitor mass is used, which eliminates the influence of the outer stellar envelopes on the mass estimate (Figure \[fig:NivM\]). Still, in the range of values where SN2004aw is located there is significant dispersion. Several factors may be at play: in SN2006aj magnetar energy may have contributed to increasing the [$^{56}$Ni]{}production. In the case of a massive progenitor such as that of SN1997ef, fallback onto a black hole may have limited the amount of [$^{56}$Ni]{} that was finally ejected. Other parameters, such as binarity, rotation, metallicity, and asymmetry, may affect the outcome of the explosion. This highlights how little we still know about SNIb/c explosions.
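The sensitivity of the inferred [$^{56}$Ni]{} mass to the adopted peak luminosity can be gauged with the simple Arnett rule, equating the peak bolometric luminosity to the instantaneous $^{56}$Ni + $^{56}$Co decay power at the rise time. This is only a rough cross-check, not the Monte Carlo fit; the decay constants are standard values and the peak luminosity used below is a hypothetical reading of Figure \[fig:LC\].

```python
import math

MSUN = 1.989e33                       # g
TAU_NI, TAU_CO = 8.8, 111.3           # 56Ni, 56Co e-folding times (days)
EPS_NI, EPS_CO = 3.90e10, 6.78e9      # decay powers (erg s^-1 g^-1)

def ni_mass_msun(l_peak, t_rise_days):
    """Arnett-rule estimate: equate the peak bolometric luminosity to the
    instantaneous 56Ni + 56Co decay power at the rise time."""
    q = (EPS_NI * math.exp(-t_rise_days / TAU_NI)
         + EPS_CO * (math.exp(-t_rise_days / TAU_CO)
                     - math.exp(-t_rise_days / TAU_NI)))
    return l_peak / (q * MSUN)

# With a ~16-day bolometric rise (14 d to B maximum plus 1-2 d), a peak
# luminosity of ~4.4e42 erg/s (hypothetical value) returns ~0.2 Msun:
print(round(ni_mass_msun(4.4e42, 16.0), 2))   # 0.2
```

Since the estimate scales linearly with the peak luminosity, the $\sim 0.15-0.25$ M$_{\sun}$ range quoted above follows directly from the distance and reddening uncertainties.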
The properties of stripped-envelope core-collapse SNe seem particularly diverse in the progenitor mass range $M_{\rm ZAMS}\sim 20-30$ M$_{\sun}$. While the XRF-SN2006aj is strikingly similar to SN2002ap in light curves and spectra, SN2002ap has no GRB association at all despite being one of the best-observed SNeIc.
The Type Ib SN2008D was captured by the Swift X-Ray Telescope during its very initial flash of soft X-rays [@sod08] and was soon confirmed in the optical [@den08]. With the nature of its initial flash under debate, spectrum and light curve modelling suggests that it had a progenitor with $M_{\rm ZAMS}\sim
20 - 30$ M$_{\sun}$ [@maz08; @tan09]. Another peculiar SNIb was 2005bf ($M_{\rm ZAMS}\sim 25-30$ M$_{\sun}$; @tom05) whose composite light curve reached a first peak compatible with moderate [$^{56}$Ni]{} production and then rose for as long as $\sim 40$ days to reach a second peak, which would require $\sim 0.3$ M$_{\sun}$ $^{56}$Ni. Late-time observations showed that the light curve then traced back the predicted extension of a normal-luminosity SN, suggesting that the second peak was due to the late injection of magnetar energy [@maeda07].
Although our models are one-dimensional, asphericity would probably not change our results much. Asphericity was observed in some Type Ic SNe with no GRB connection, first using polarimetric measurements (e.g., @wan01 [@kaw02]), and then also through revealing line profiles in nebular spectra (e.g., @maz05 [@mae08]). The asphericity in SN2004aw is significant, but it appears to be confined to the deepest layers of the SN ejecta, while in SN1998bw it probably affected the entire SN ejecta. As already mentioned, what is suggested by the nebular spectra of SN2004aw is an explosion that was fairly spherical in the outer layers, but significantly aspherical in the deepest parts. It may have been the result of a magnetar or a collapsar which did not inject enough energy to modify the entire SN structure, but did leave an imprint in the regions close to the site of collapse. This may be related to the fact that SN2004aw had a smaller mass than any GRB/SN. Studies of further examples of energetic SNeIc are required in order to further our understanding of the presence and impact of an engine.
Acknowledgments {#acknowledgments .unnumbered}
===============
We gratefully acknowledge conversations with G. Meynet during the KITP 2017 Programme on Massive Stars. ST acknowledges support by TRR33 “The Dark Universe” of the German Research Foundation.
Abbott, D.C., Lucy, L.B., 1985, ApJ, 288, 679
Ashall C., et al., 2017, preprint (arXiv:1702.04339)
Axelrod T., 1980, Ph.D. thesis, Univ. California, Santa Cruz
Baron, E., Branch, D., Hauschildt, P.H., Filippenko, A.V., Kirshner, R.P. 1999, ApJ, 527, 739
Ben-Ami, S., et al. 2012, , 760, L33
Bianco F. B., et al. 2014, , 213, 19
Cappellaro, E., Mazzali, P. A., Benetti, S., Danziger, I. J., Turatto, M., della Valle, M. & Patat, F. 1997, A&A, 328 , 203
Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245
Clocchiatti, A., Wheeler, J.C., 1997, ApJ, 491, 375
Deng, J., Tominaga, N., Mazzali, P.A., Maeda, K., Nomoto, K. 2005, ApJ, 624, 898
Deng, J., Zhu, Y. 2008, GCN, 7160
Eldridge J. J., Fraser M., Smartt S. J., Maund J. R., Crockett R. M., 2013, MNRAS, 436, 774
Filippenko, A. V., 1997, ARA&A, 35, 309
Fryer, C.L., 1999, ApJ, 522, 413
Fryer, C.L., Young, P.A. 2007, , 659, 1438
Fryer, C.L., et al., 2007, PASP, 119, 1211
Fukugita, M., Shimasaku, K., & Ichikawa, T., 1995, PASP, 107, 945
Galama, T.J., et al., 1998, Nature, 395, 670
Greiner J. et al., 2015, Nature, 523, 189
Hachinger, S., Mazzali, P. A., Taubenberger, S., Hillebrandt, W., Nomoto, K., Sauer, D. N. 2012, , 422, 70
Hashimoto, M., 1995, Prog. Theor. Phys., 94, 663
Hoeflich, P., Khokhlov, A. M., & Wheeler, J. C. 1995, , 444, 831
Iwamoto, K., Nomoto, K., Hoflich, P., Yamaoka, H., Kumagai, S., Shigeyama, T. 1994, ApJ, 437, L115
Iwamoto, K., et al., 1998, Nature, 395, 672
Iwamoto, K., et al., 2000, ApJ, 534, 660
Janka, H.-Th., Langanke, K., Marek, A., Martínez-Pinedo, G., Müller, B. 2007, , 442, 38
Kawabata, K.S., et al., 2002, ApJ, 580, L39
Kawabata, K.S., et al., 2003, ApJ, 593, L19
Lucy, L.B., 1999, A&A, 345, 211
Lyman J. D., Bersier D., James P. A., Mazzali P. A., Eldridge J. J., Fraser M., Pian E., 2016, , 457, 328
MacFadyen, A.I., Woosley, S.E., 1999, ApJ, 524, 262
Maeda, K., Nakamura, T., Nomoto, K., Mazzali, P.A., Patat, F., Hachisu, I., 2002, ApJ, 565, 405
Maeda, K., Mazzali, P. A., Deng, J., Nomoto, K., Yoshii, Y., Tomita, H., Kobayashi, Y., 2003, , 593, 931
Maeda, K., et al. 2007, , 666, 1069
Maeda, K., et al., 2008, Science, 29, 1220
Matheson T., Filippenko A. V., Li W., Leonard D. C., Shields J. C., 2001, AJ, 121, 1648
Maurer, J. I., et al. 2010, , 402, 161
Mazzali, P.A., 2000, A&A, 363, 705
Mazzali, P.A., Lucy, L.B. 1993, A&A, 279, 447
Mazzali, P. A., Iwamoto, K., & Nomoto, K. 2000, , 545, 407
Mazzali, P.A., Nomoto, K., Patat, F., Maeda, K., 2001, ApJ, 559, 1047
Mazzali, P.A., et al., 2002, ApJ, 572, L61
Mazzali, P.A., Deng, J., Maeda, K., Nomoto, K., Filippenko, K., Matheson, T., 2004, ApJ, 614, 858
Mazzali, P.A., et al., 2005, Science, 308, 1284
Mazzali, P.A., et al., 2006a, ApJ, 645, 1323
Mazzali, P.A., et al., 2006b, Nature, 442, 1018
Mazzali, P. A., Foley, R. J., Deng, J., et al. 2007, , 661, 892
Mazzali, P.A., et al., 2007, ApJ, 670, 592
Mazzali, P.A., et al., 2008, Science, 321, 1185
Mazzali, P. A., Maurer, I., Valenti, S., Kotak, R., & Hunter, D. 2010, , 408, 87
Mazzali, P. A., Walker, E. S., Pian, E., Tanaka, M., Corsi, A., Hattori, T., Gal-Yam, A., 2013, , 432, 2463
Mazzali, P. A., Sullivan, M., Hachinger, S., et al. 2014, , 439, 1959
Modjaz M., Liu Y. Q., Bianco F. B., Graur O., , 832, 108
Nakamura, T., Mazzali, P.A., Nomoto, K., Iwamoto, K., 2001, ApJ, 550, 991
Nomoto, K., Hashimoto, M., 1988, Phys. Rep., 163, 13
Nomoto, K., Yamaoka,H., Pols, O.R., van den Heuvel, E.P.J., Iwamoto, K., Kumagai, S., Shigeyama, T., 1994, Nature, 371, 227
Nomoto, K., Iwamoto, K., Suzuki, T., 1995, Phys. Rep., 256, 173
Pian, E., et al., 2006, Nature, 442, 1011
Pols, O.R., Dewi, D.M., 2002, PASA, 19, 233
Prentice S. J., et al., 2016, , 458, 2973
Prentice S. J., & Mazzali, P. A., 2017, , in press (arXiv:1704.08298)
Richmond, M.W., et al., 1996, ApJ, 111, 327
Sauer, D.N., Mazzali, P.A., Deng, J., Valenti, S., Nomoto, K., Filippenko, A.V., 2006, MNRAS, 369, 1939
Soderberg, A.M., et al., 2008, Nature, 453, 469
Stehle, M., Mazzali, P.A., Benetti, S., Hillebrandt, W., 2005, MNRAS, 360, 1231
Taddia F., et al., 2015, , 574, A60
Tanaka, M., et al. 2009, , 692, 1131
Taubenberger, S., et al., 2006, MNRAS, 371, 1459
Tominaga, N., et al., 2005, ApJ, 633, L97
Tomita, H., et al., 2006, ApJ, 644, 400
Ugliano, M., Janka, H.-T., Marek, A., & Arcones, A. 2012, , 757, 69
Valenti, S., et al., 2008, MNRAS, 383, 1485
Wang, L., Howell, D.A., Höflich, P., Wheeler, J.C., 2001, ApJ, 550, 1030
Woosley, S.E., Bloom, J.S., 2006, ARA&A, 44, 507
Woosley, S.E., Weaver, T.A., 1986, ARA&A, 24, 205
Woosley, S.E., Langer, N., Weaver, T.A., 1993, ApJ, 411, 823
Woosley, S.E., Langer, N., Weaver, T.A., 1995, ApJ, 448, 315
Yoon, S.-C., 2015, PASA, 32, 15
\[lastpage\]
[^1]: E-mail: P.Mazzali@ljmu.ac.uk
**Relation between Yang-Baxter and Pair Propagation Equations**
**in 16-Vertex Models**
Changrim Ahn [^1]
[*Department of Physics*]{}
[*Ewha Women’s University, Seoul 120, Korea*]{}
0.1in
Minoru Horibe [^2]
[*Department of Physics, Faculty of Education*]{}
[*Fukui University,Fukui 910, Japan*]{}
0.1in
Kazuyasu Shigemoto [^3]
[*Department of Physics*]{}
[*Tezukayama University, Nara 631, Japan* ]{}
**Abstract**
We study the relation between two integrability conditions, namely the Yang-Baxter and the pair propagation equations, in 2D lattice models. While the two are equivalent in the 8-vertex models, discrepancies appear in the 16-vertex models. As explicit examples, we find exactly solvable 16-vertex models which do not satisfy the Yang-Baxter equations.
[**1. Introduction**]{}\
In the last two decades, much progress has been made in 2D integrable systems, both in lattice statistical models and in continuum field theories. Recently, this progress has been associated with beautiful mathematical structures such as the universal Grassmann manifold [@Grass], Kac-Moody algebras [@Kac] and quantum groups [@qg]. In 2D lattice models, the approach based on transfer matrices (TMs) has proved most successful. As Baxter showed, one can construct an infinite number of commuting conserved quantities through these TMs [@Baxter]. A sufficient condition for the commutativity is that the Boltzmann weights of the 2D lattice model satisfy the famous Yang-Baxter equations (YBEs). There can, however, exist exactly solvable models which do not satisfy the YBEs, and such models require another solution scheme. One such scheme is based on the so-called pair propagation equations (PPEs), which appear in the analysis of the algebraic Bethe ansatz. In this method, the Boltzmann weights must satisfy coupled non-linear equations, which become manageable if the Boltzmann weights are defined on suitable algebraic curves. In this paper, we study 2D lattice models which are exactly solvable although they do not satisfy the YBEs. We look for our candidates among the 16-vertex models [@Feld][@Bellon]. We first establish the relationship between YBEs and PPEs: though the two are equivalent in the 8-vertex model, discrepancies appear in the 16-vertex models. Since the YBEs restrict the possible candidates very strongly, the PPEs can cover more exactly solvable models, including ones which do not satisfy the YBEs. We give explicit examples for which we compute exact eigenvalues of transfer matrices.\
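The commutativity of transfer matrices that underlies Baxter's construction can be checked numerically in a small case. The sketch below (illustrative parameter values, not code from this paper) uses the six-vertex specialisation of the symmetric 16-vertex weights defined below, keeping only $a$, $b$, $c$ in the standard trigonometric parametrisation, and verifies that two transfer matrices at different spectral parameters commute.

```python
import numpy as np
from itertools import product

def six_vertex_R(u, eta=0.7):
    """Six-vertex Boltzmann weights a=sin(eta+u), b=sin(u), c=sin(eta),
    stored as R[mu, alpha, nu, beta] with spins 0/1 (d=e=k=h=l=0)."""
    a, b, c = np.sin(eta + u), np.sin(u), np.sin(eta)
    R = np.zeros((2, 2, 2, 2))
    for s in (0, 1):
        R[s, s, s, s] = a            # R(±,±;±,±) = a
        R[s, 1 - s, s, 1 - s] = b    # R(±,∓;±,∓) = b
        R[s, 1 - s, 1 - s, s] = c    # R(±,∓;∓,±) = c
    return R

def transfer_matrix(R, n_sites=3):
    """Row-to-row transfer matrix with periodic boundary conditions:
    T[alpha, beta] = Tr_h prod_i R(mu_i, alpha_i; mu_{i+1}, beta_i)."""
    dim = 2 ** n_sites
    T = np.zeros((dim, dim))
    for alpha in product((0, 1), repeat=n_sites):
        for beta in product((0, 1), repeat=n_sites):
            W = np.eye(2)
            for ai, bi in zip(alpha, beta):
                W = W @ R[:, ai, :, bi]   # 2x2 matrix in (mu, nu)
            ia = int("".join(map(str, alpha)), 2)
            ib = int("".join(map(str, beta)), 2)
            T[ia, ib] = np.trace(W)
    return T

T1 = transfer_matrix(six_vertex_R(0.3))
T2 = transfer_matrix(six_vertex_R(0.9))
print(np.allclose(T1 @ T2, T2 @ T1))   # True: the YBEs guarantee commutation
```

For generic 16-vertex weights that violate the YBEs this check fails, even though, as argued in this paper, some of those models remain exactly solvable through the pair propagation equations.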
[**2. The Pair Propagation and Conjugate Pair Propagation Equations**]{}\
We follow the notation of Baxter [@Baxter]. The Boltzmann weights of the symmetric 16-vertex models are given by $$\begin{aligned}
R(\pm,\pm;\pm,\pm)=a, R(\pm,\mp;\pm,\mp)=b,
R(\pm,\mp;\mp,\pm)=c,R(\pm,\pm;\mp,\mp)=d,
\nonumber \\
R(\pm,\mp;\mp,\mp)=e,R(\pm,\pm;\mp,\pm)=k,
R(\pm,\pm;\pm,\mp)=h,R(\mp,\pm;\mp,\mp)=l.
\label{e1}\end{aligned}$$ The Yang-Baxter equations are given in the form $$\begin{aligned}
\sum_{\eta, \zeta, \phi} R(\mu,\zeta;\eta,\beta)
R^{'}(\rho,\alpha;\phi,\zeta )R^{''}(\eta,\phi;\nu,\sigma)
\nonumber \\
=\sum_{\eta, \zeta, \phi} R^{''}(\mu,\rho;\eta,\phi)
R^{'}(\phi,\zeta;\sigma,\beta) R(\eta,\alpha;\nu,\zeta).
\label{e8}\end{aligned}$$ According to the Bethe ansatz, eigenfunctions $y(\beta_1,\beta_2,...,\beta_N)$ of the transfer matrices $T(v)$ for $N$ horizontal sites take the form of direct products of single-site vectors, $y(\beta_1,\beta_2,...,\beta_N)=g_1(\beta_1)
\otimes g_2(\beta_2)\otimes...\otimes g_N(\beta_N)$. These are eigenfunctions of the transfer matrix acting on the upper layer. Applying the transfer matrices to these eigenfunctions, we obtain $$\left(T(v)y\right)_{\alpha}={\rm Tr}\left(G_1(\alpha_1)...G_N(\alpha_N)
\right),\quad {\rm with}\quad
\left( G_i(\alpha)\right)_{\mu \nu}
=\sum_{\beta}R(\mu,\alpha;\nu,\beta) g_i(\beta). \label{e24}$$ Explicit forms of $G_i(\pm)$ are $$\begin{aligned}
G_i(+)=
\left(\begin{array}{cc}
ag_i(+)+hg_i(-) & kg_i(+)+dg_i(-) \\
eg_i(+)+cg_i(-) & bg_i(+)+lg_i(-)
\end{array} \right) , \nonumber \\
G_i(-)=
\left( \begin{array}{cc}
lg_i(+)+bg_i(-) & cg_i(+)+eg_i(-) \\
dg_i(+)+kg_i(-) & hg_i(+)+ag_i(-)
\end{array} \right). \nonumber\end{aligned}$$ In order for the model to be solved exactly, it is necessary that there exist $\alpha$-independent pairs of matrices $P_i,P_{i+1}$ which transform $G_i(\alpha)$ into upper triangular form: $$P_i^{-1} G_i(\alpha) P_{i+1} =H_i(\alpha)=
\left( \begin{array}{cc}
g_i^{'}(\alpha) & g_i^{'''}(\alpha) \\
0 & g_i^{''}(\alpha)
\end{array} \right). \label{e27}$$ For simplicity, we choose $\det P_i=1$ for $i=1,2,...,N$ and parametrize them in the form $P_i=
\left( \begin{array}{cc}
p_i(+) & t_i(+) \\
p_i(-) & t_i(-)
\end{array} \right)$. Then Eq. (\[e27\]) is written in the form $$G_i(\alpha) P_{i+1}=P_i H_i(\alpha)
,\quad {\rm or}\quad
P_{i}^{-1} G_i(\alpha)=H_i(\alpha) P_{i+1}^{-1}. \label{e30}$$ As the $H_i(\alpha)$ are upper triangular, we obtain $$\begin{aligned}
&&G_i(\alpha)
\left(\begin{array}{c}
p_{i+1}(+) \\
p_{i+1}(-)
\end{array} \right)
= g_{i}^{'}(\alpha)
\left(\begin{array}{c}
p_{i}(+) \\
p_{i}(-)
\end{array} \right), \label{e31} \\
&&\left(\begin{array}{cc}
-p_{i}(-), & p_{i}(+)
\end{array} \right)
G_i(\alpha)
= g_{i}^{''}(\alpha)
\left(\begin{array}{cc}
-p_{i+1}(-), & p_{i+1}(+)
\end{array} \right) . \label{e32}\end{aligned}$$ We call Eq. (\[e31\]) the pair propagation equations (I) and Eq. (\[e32\]) the conjugate pair propagation equations (I). By using $R(\mu,\alpha;\nu,\beta)$, the pair propagation equations (I) are given by $$\sum_{\beta,\nu} R(\mu,\alpha;\nu,\beta) g_i(\beta) p_{i+1}(\nu)
=g_{i}^{'}(\alpha) p_i(\mu) . \label{e33}$$ Explicit forms of these equations are $$\begin{aligned}
\left\{ \begin{array}{l}
(ag_i(+)+hg_i(-))p_{i+1}(+)+(kg_i(+)+dg_i(-))p_{i+1}(-)
=g_{i}^{'}(+)p_i(+), \\
(lg_i(+)+bg_i(-))p_{i+1}(+)+(cg_i(+)+eg_i(-))p_{i+1}(-)
=g_{i}^{'}(-)p_i(+), \\
(eg_i(+)+cg_i(-))p_{i+1}(+)+(bg_i(+)+lg_i(-))p_{i+1}(-)
=g_{i}^{'}(+)p_i(-), \\
(dg_i(+)+kg_i(-))p_{i+1}(+)+(hg_i(+)+ag_i(-))p_{i+1}(-)
=g_{i}^{'}(-)p_i(-). \end{array}\right. \label{e34}\end{aligned}$$ The conjugate pair propagation equations (I), in turn, are given by $$\sum_{\beta,\mu} R(\mu,\alpha;\nu,\beta) g_i(\beta) q_{i}(\mu)
=g_{i}^{''}(\alpha) q_{i+1}(\nu) , \label{e35}$$ where we use the notation $q_i(+)=-p_i(-),q_i(-)=p_i(+)$. Explicit forms of these equations are obtained from Eq. (\[e34\]) by replacing $g_{i}^{'}(\pm)\rightarrow g_{i}^{''}(\pm)$, $p_{i+1}(+)\rightarrow -p_{i}(-)$, $p_{i+1}(-)\rightarrow p_{i}(+)$, $p_{i}(+)\rightarrow -p_{i+1}(-)$, $p_{i}(-)\rightarrow p_{i+1}(+)$, $c \leftrightarrow d$, $e \leftrightarrow k$. Next we consider the second type of pair propagation equations, which we call the pair propagation equations (II). If a model is exactly solvable using eigenfunctions $y(\beta_1,\beta_2,...,\beta_N)=g_1(\beta_1)
\otimes g_2(\beta_2)\otimes...\otimes g_N(\beta_N)$ acting on the upper layer of the transfer matrices, it is also exactly solvable using eigenfunctions $\tilde{y}(\alpha_1,\alpha_2,...,\alpha_N)
=\tilde{g}_1(\alpha_1)\otimes \tilde{g}_2(\alpha_2)
\otimes...\otimes \tilde{g}_N(\alpha_N)$ acting on the lower layer. Similar equations corresponding to Eqs. (\[e24\]),(\[e27\]) are given by $$\left(\tilde{y}T(v)\right)_{\beta}={\rm Tr}
\left(\tilde{G}_1(\beta_1)
...\tilde{G}_N(\beta_N)\right)
,\quad{\rm with}\quad
\left( \tilde{G}_i(\beta)\right)_{\mu \nu}
=\sum_{\alpha}R(\mu,\alpha;\nu,\beta) \tilde{g}_i(\alpha),
\label{e38}$$ and $$\begin{aligned}
\tilde{P}_i^{-1} \tilde{G}_i(\beta) \tilde{P}_{i+1}
=\tilde{H}_i(\beta)=
\left( \begin{array}{cc}
\tilde{g}_i^{'}(\beta) & \tilde{g}_i^{'''}(\beta) \\
0 & \tilde{g}_i^{''}(\beta)
\end{array} \right). \label{e39}\end{aligned}$$ The pair propagation equations (II) are given by $$\sum_{\alpha,\nu} R(\mu,\alpha;\nu,\beta)
\tilde{g}_i(\alpha) \tilde{p}_{i+1}(\nu)
=\tilde{g}_{i}^{'}(\beta) \tilde{p}_i(\mu). \label{e40}$$ By the symmetry of the Boltzmann weights, explicit forms of these pair propagation equations (II) are obtained from the pair propagation equations (I) by replacing untilded variables with tilded ones and $c\leftrightarrow d, h\leftrightarrow l$. Similarly we obtain the conjugate pair propagation equations (II) from Eq. (\[e35\]) by the same replacement. Our strategy to solve the pair propagation equations is the following. The pair propagation and conjugate pair propagation equations are bilinear in the four pairs of variables $g_{i}(\pm),\ g_{i}^{'}(\pm), p_{i}(\pm),p_{i+1}(\pm)$, and it is difficult to solve them directly. We therefore first derive non-linear equations in which only two ratios of the variables, such as $r_{i}=p_{i}(-)/p_{i}(+),\ r_{i+1}=p_{i+1}(-)/p_{i+1}(+)$, appear. Instead of solving the four coupled pair propagation equations, we first solve these four non-linear equations. For each solution of these two-variable non-linear equations, only one of the four pair propagation equations is independent, and from it we can obtain the eigenvalues of the transfer matrices. From the condition that non-trivial solutions exist for $g_{i}(\pm),g_{i}^{'}(\pm)$ in Eq. (\[e34\]), we obtain $$\begin{aligned}
&&r_{i}^2+r_{i+1}^2-\Gamma_1 (r_{i}^2 r_{i+1}^2+1)-\Gamma_2 r_{i}
r_{i+1} +\Gamma_3 r_{i}(1+r_{i+1}^2)+\Gamma_4 r_{i+1}(1+r_{i}^2)=0
\label{e43} \\
&&{\rm where} \nonumber \\
&&\left\{ \begin{array}{l}
\Gamma_1=(cd-ek)/(ab-hl), \quad
\Gamma_2=(a^2+b^2+e^2+k^2-c^2-d^2-h^2-l^2)/(ab-hl), \\
\Gamma_3=(cl+dh-ak-be)/(ab-hl), \quad
\Gamma_4=(ae+bk-ch-dl)/(ab-hl) \end{array} \right.
\nonumber \\
&&{\rm and} \nonumber \\
&&r_{i}=p_{i}(-)/p_{i}(+),
\quad r_{i+1}=p_{i+1}(-)/p_{i+1}(+) . \nonumber\end{aligned}$$ From the condition that non-trivial solutions exist for $p_{i}(\pm),p_{i+1}(\pm)$, we obtain $$\begin{aligned}
&&s_{i}^2+{s_{i}^{'}}^2-\Gamma_{5} (s_{i}^2 {s_{i}^{'}}^2+1)
-\Gamma_{6} s_{i} s_{i}^{'}
+\Gamma_{7} s_{i}^{'}(1+s_{i}^2)
+\Gamma_{8} s_{i}(1+{s_{i}^{'}}^2)=0 \label{e45} \\
&&{\rm where} \nonumber \\
&&\left\{ \begin{array}{l}
\Gamma_{5}=(cd-hl)/(ab-ek), \quad
\Gamma_{6}=(a^2+b^2+h^2+l^2-c^2-d^2-e^2-k^2)/(ab-ek), \\
\Gamma_{7}=(ce+dk-ah-bl)/(ab-ek), \quad
\Gamma_{8}=(al+bh-ck-de)/(ab-ek), \end{array} \right.
\nonumber \\
&& {\rm and} \nonumber \\
&& s_{i}=g_{i}(-)/g_{i}(+),
\quad s^{'}_{i}=g_{i}^{'}(-)/g_{i}^{'}(+). \nonumber\end{aligned}$$ Similarly, from the condition that non-trivial solutions for $g_{i}^{'}(\pm),p_{i+1}(\pm)$ exist, we obtain $$\begin{aligned}
&&r_{i}^2+{s_{i}}^2-\Gamma_{9} (r_{i}^2 {s_{i}}^2+1)
-\Gamma_{10} r_{i} s_{i}
+\Gamma_{11} r_{i}(1+s_{i}^2)
+\Gamma_{12} s_{i}(1+r_{i}^2)=0 \label{e47} \\
&&{\rm where} \nonumber \\
&&\left\{\begin{array}{l}
\Gamma_{9}=(bd-eh)/(ac-kl), \quad
\Gamma_{10}=(a^2+c^2+e^2+h^2-b^2-d^2-k^2-l^2)/(ac-kl), \\
\Gamma_{11}=(bl+dk-ah-ce)/(ac-kl), \quad
\Gamma_{12}=(ae+ch-bk-dl)/(ac-kl). \end{array} \right.
\nonumber\end{aligned}$$ Finally, from the condition that non-trivial solutions for $g_{i}(\pm),p_{i}(\pm)$ exist, we obtain $$\begin{aligned}
&&{s_{i}^{'}}^2+r_{i+1}^2-\Gamma_{13} ({s_{i}^{'}}^2r_{i+1}^2+1)
-\Gamma_{14} s_{i}^{'} r_{i+1}
+\Gamma_{15} s_{i}^{'}(1+r_{i+1}^2)
+\Gamma_{16} r_{i+1}(1+{s_{i}^{'}}^2)=0 \label{e49} \\
&&{\rm where} \nonumber \\
&&\left\{ \begin{array}{l}
\Gamma_{13}=(bd-kl)/(ac-eh), \quad
\Gamma_{14}=(a^2+c^2+k^2+l^2-b^2-d^2-e^2-h^2)/(ac-eh), \\
\Gamma_{15}=(be+dh-ak-cl)/(ac-eh), \quad
\Gamma_{16}=(al+ck-bh-de)/(ac-eh). \end{array} \right.
\nonumber\end{aligned}$$ The conjugate pair propagation equations (II) are obtained from the pair propagation equations (I) by replacing $p_i,p_{i+1},g_i,g_{i}^{'}
\rightarrow \tilde{p}_i,\tilde{p}_{i+1},\tilde{g}_i,\tilde{g}_{i}^{'}$, that is, by replacing the ratios $r_i,r_{i+1},s_{i},s_{i}^{'} \rightarrow
\tilde{r}_{i},\tilde{r}_{i+1},\tilde{s}_{i},\tilde{s}_{i}^{'}$ and exchanging $c \leftrightarrow d$, $h \leftrightarrow l$. The explicit forms are given by $$\begin{aligned}
&&\tilde{r}_{i}^2+\tilde{r}_{i+1}^2
-\Gamma_1 (\tilde{r}_{i}^2 \tilde{r}_{i+1}^2+1)
-\Gamma_2 \tilde{r}_{i} \tilde{r}_{i+1}
+\Gamma_3 \tilde{r}_{i}(1+\tilde{r}_{i+1}^2)
+\Gamma_4 \tilde{r}_{i+1}(1+\tilde{r}_{i}^2)=0 \label{e50} \\
&&\tilde{s}_{i}^2+{\tilde{s_{i}^{'}}}^2
-\Gamma_{5} (\tilde{s}_{i}^2 {\tilde{s_{i}^{'}}}^2+1)
-\Gamma_{6} \tilde{s}_{i} \tilde{s_{i}}^{'}
-\Gamma_{8} \tilde{s}_{i}^{'}(1+\tilde{s}_{i}^2)
-\Gamma_{7} \tilde{s}_{i}(1+{\tilde{s_{i}^{'}}}^2)=0 .
\label{e52}\end{aligned}$$\
[**3. Connection between the Yang-Baxter and the Pair Propagation Equations**]{}\
Next we consider the connection between the Yang-Baxter and the pair propagation equations. We consider the products of three $R$-matrices in Eq. (\[e8\]) as matrices with rows indexed by $\beta,\mu,\rho$ and columns indexed by $\alpha,\nu,\sigma$. We denote the quantities on the left-hand side by $A(\beta,\mu,\rho|\alpha,\nu,\sigma)$ and those on the right-hand side by $B(\beta,\mu,\rho|\alpha,\nu,\sigma)$. Showing the relations $A(\beta,\mu,\rho|\alpha,\nu,\sigma)
=B(\beta,\mu,\rho|\alpha,\nu,\sigma)$ is equivalent to showing $$\sum_{\alpha,\nu,\sigma} A(\beta,\mu,\rho|\alpha,\nu,\sigma)
v_{1}(\alpha)v_{2}(\nu)v_{3}(\sigma)
=\sum_{\alpha,\nu,\sigma} B(\beta,\mu,\rho|\alpha,\nu,\sigma)
v_{1}(\alpha)v_{2}(\nu)v_{3}(\sigma).
\label{e42}$$ for three arbitrary vectors $v_{1}(\alpha), v_{2}(\nu), v_{3}(\sigma)$. The explicit forms of these equations are given by $$\begin{aligned}
\sum_{\eta,\zeta,\phi,\alpha,\nu,\sigma} R(\mu,\zeta;\eta,\beta)
R^{'}(\rho,\alpha;\phi,\zeta)R^{''}(\eta,\phi;\nu,\sigma)
v_1(\alpha)v_2(\nu)v_3(\sigma)
\nonumber \\
=\sum_{\eta, \zeta, \phi,\alpha,\nu,\sigma} R^{''}(\mu,\rho;\eta,\phi)
R^{'}(\phi,\zeta;\sigma,\beta)R(\eta,\alpha;\nu,\zeta)
v_1(\alpha)v_2(\nu)v_3(\sigma) . \label{e53}\end{aligned}$$ We can transform both sides by using the pair propagation equations as $$\begin{aligned}
{\rm (left-hand\ side)}
&&\equiv \sum_{\eta, \zeta, \phi,\alpha} R(\mu,\zeta;\eta,\beta)
R^{'}(\rho,\alpha;\phi,\zeta)
v_1(\alpha)u_2^{'}(\eta)u_3^{'}(\phi) \nonumber \\
&&\equiv \sum_{\eta,\zeta}
R(\mu,\zeta;\eta,\beta)
t_1^{'}(\zeta)u_2^{'}(\eta)t_3^{'}(\rho) \nonumber \\
&&\equiv z_1^{'}(\beta) z_2^{'}(\mu) t_3^{'}(\rho) , \label{e54} \\
{\rm (right-hand\ side)}
&&\equiv \sum_{\eta, \zeta, \phi,\sigma} R^{''}(\mu,\rho;\eta,\phi)
R^{'}(\phi,\zeta;\sigma,\beta)
u_1^{''}(\zeta)u_2^{''}(\eta)v_3(\sigma) \nonumber \\
&&\equiv \sum_{\eta, \phi} R^{''}(\mu,\rho;\eta,\phi)
t_1^{''}(\beta)u_2^{''}(\eta)t_3^{''}(\phi) \nonumber \\
&&\equiv t_1^{''}(\beta)z_2^{''}(\mu)z_3^{''}(\rho). \label{e55}\end{aligned}$$ The ratios of the vectors change in the following way: $$\begin{aligned}
{\rm (left-hand\ side)}:
\left\{ \begin{array}{c}
v_1(-)/v_1(+) \quad \begin{picture}(20,20)(0,0) \put(0,0)
{$\longrightarrow$} \put(2,-10){(II)} \end{picture}\quad
t_1^{'}(-)/t_1^{'}(+) \quad \begin{picture}(20,20)(0,0) \put(0,0)
{$\longrightarrow$} \put(3,-10){(I)} \end{picture}\quad
z_1^{'}(-)/z_1^{'}(+), \\
v_2(-)/v_2(+) \quad \begin{picture}(20,20)(0,0) \put(0,0)
{$\longrightarrow$} \put(3,-10){(I)} \end{picture}\quad
u_2^{'}(-)/u_2^{'}(+) \quad \begin{picture}(20,20)(0,0)
\put(0,0) {$\longrightarrow$} \put(3,-10){(I)} \end{picture}\quad
z_2^{'}(-)/z_2^{'}(+), \\
v_3(-)/v_3(+) \quad \begin{picture}(20,20)(0,0) \put(0,0)
{$\longrightarrow$} \put(1,-10){(III)} \end{picture}\quad
u_3^{'}(-)/u_3^{'}(+) \quad \begin{picture}(20,20)(0,0)
\put(0,0) {$\longrightarrow$} \put(1,-10){(III)}
\end{picture}\quad t_3^{'}(-)/t_3^{'}(+),
\end{array} \right.
\label{e56}\end{aligned}$$ $$\begin{aligned}
{\rm (right-hand\ side)}:
\left\{ \begin{array}{c}
v_1(-)/v_1(+) \quad \begin{picture}(20,20)(0,0) \put(0,0)
{$\longrightarrow$} \put(3,-10){(I)} \end{picture}
\quad u_1^{''}(-)/u_1^{''}(+) \quad
\begin{picture}(20,20)(0,0) \put(0,0)
{$\longrightarrow$} \put(2,-10){(II)} \end{picture} \quad
t_1^{''}(-)/t_1^{''}(+), \\
v_2(-)/v_2(+) \quad \begin{picture}(20,20)(0,0)
\put(0,0) {$\longrightarrow$} \put(3,-10){(I)}
\end{picture}\quad u_2^{''}(-)/u_2^{''}(+)
\quad \begin{picture}(20,20)(0,0) \put(0,0)
{$\longrightarrow$} \put(3,-10){(I)} \end{picture}\quad
z_2^{''}(-)/z_2^{''}(+), \\
v_3(-)/v_3(+) \quad \begin{picture}(20,20)(0,0)
\put(0,0) {$\longrightarrow$} \put(1,-10){(III)}
\end{picture}\quad t_3^{''}(-)/t_3^{''}(+) \quad
\begin{picture}(20,20)(0,0) \put(0,0) {$\longrightarrow$}
\put(1,-10){(III)} \end{picture}\quad
z_3^{''}(-)/z_3^{''}(+),
\end{array} \right.
\label{e57}\end{aligned}$$ where we use equations (I), (II) and (III), which connect the [*in*]{} variable $X$ with the [*out*]{} variable $Y$ in the following forms: $$\begin{aligned}
\left\{ \begin{array}{l}
{\rm (I)}:X^2+Y^2-\Gamma_1(X^2 Y^2 +1)
-\Gamma_2 X Y
+\Gamma_3 Y (1+X^2)
+\Gamma_4 X (1+Y^2)=0, \\
{\rm (II)}:X^2+Y^2-\Gamma_{5}(X^2 Y^2 +1)
-\Gamma_{6} X Y
+\Gamma_{7} Y (1+X^2)
+\Gamma_{8} X (1+Y^2)=0, \\
{\rm (III)}:X^2+Y^2-\Gamma_{5}(X^2 Y^2 +1)
-\Gamma_{6} X Y
-\Gamma_{8} Y (1+X^2)
-\Gamma_{7} X (1+Y^2)=0. \end{array} \right. \nonumber\end{aligned}$$ If the forms of Eqs. (I) and (II) are the same, Eq. (\[e53\]) is satisfied. The condition that the forms of Eqs. (I) and (II) coincide leads to $\Gamma_{i}=\Gamma_{i+4}\ (i=1 \sim 4)$. We call these the candidate conditions for satisfying the Yang-Baxter equations, because they do not guarantee that the Yang-Baxter equations themselves hold, but only that the Yang-Baxter equations multiplied by three vectors hold. In the 16-vertex model, the above candidate conditions lead to further restrictions on the Boltzmann weights; we obtain the two possibilities $i)\ a+d=b+c, \ e=h, \ k=l \ $ and $ii)\ e=l, \ k=h \ $. Taking these considerations into account, in the next section we consider the following exactly solvable cases, which are more restricted than the above possibilities: $i)\ a=c,\ b=d,\ e=h,\ k=l$ and $ii)\ a=d,\ b=c,\ e=l,\ k=h$.\
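As a quick numerical illustration (our own check, not part of the original derivation), the restricted cases studied in the next section can be tested against the candidate conditions $\Gamma_{i}=\Gamma_{i+4}$, using the explicit expressions for $\Gamma_{1},\dots,\Gamma_{8}$ from Eqs. (\[e43\]) and (\[e45\]):

```python
import random

def gammas(a, b, c, d, e, h, k, l):
    """Gamma_1..Gamma_4 of Eq. (e43) and Gamma_5..Gamma_8 of Eq. (e45)."""
    g1 = (c*d - e*k) / (a*b - h*l)
    g2 = (a**2 + b**2 + e**2 + k**2 - c**2 - d**2 - h**2 - l**2) / (a*b - h*l)
    g3 = (c*l + d*h - a*k - b*e) / (a*b - h*l)
    g4 = (a*e + b*k - c*h - d*l) / (a*b - h*l)
    g5 = (c*d - h*l) / (a*b - e*k)
    g6 = (a**2 + b**2 + h**2 + l**2 - c**2 - d**2 - e**2 - k**2) / (a*b - e*k)
    g7 = (c*e + d*k - a*h - b*l) / (a*b - e*k)
    g8 = (a*l + b*h - c*k - d*e) / (a*b - e*k)
    return (g1, g2, g3, g4), (g5, g6, g7, g8)

random.seed(7)
for _ in range(50):
    a, b, e, k = (random.uniform(0.5, 2.0) for _ in range(4))
    # case i): a=c, b=d, e=h, k=l
    first, second = gammas(a, b, a, b, e, e, k, k)
    assert all(abs(x - y) < 1e-9 for x, y in zip(first, second))
    # case ii): a=d, b=c, e=l, k=h
    first, second = gammas(a, b, b, a, e, k, k, e)
    assert all(abs(x - y) < 1e-9 for x, y in zip(first, second))
```

Both restricted cases pass the check for random positive weights; as emphasized above, this verifies only the candidate conditions, not the full Yang-Baxter equations.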
[**4. Exactly Solvable Cases in 16-Vertex Models**]{}
The 16-vertex model has not yet been solved exactly in its whole regime, nor has it been shown to be integrable there; indeed, we suspect that the 16-vertex model is not integrable in the whole regime. In the following we therefore restrict the original model to more specialized cases, in which we expect to find integrability. This means that we do not cover the whole regime of the original model, but we do cover the entire temperature range, since we find the whole spectrum of the transfer matrices. Furthermore, as the main point of our paper is the relation between the Yang-Baxter and the pair propagation equations, we examine some special cases of the original model in order to demonstrate the method of section 3; we do not intend to analyze the whole regime completely.
[*i) $a=c, b=d, e=h, k=l$ case* ]{}\
In this case, we find that the Yang-Baxter equations are satisfied in the sense that there always exists a non-trivial set $\{a^{''},\ b^{''},\ e^{''},\ k^{''}\}$ for given sets $\{a,\ b,\ e,\ k\},\ \{a^{'},\ b^{'},\ e^{'},\ k^{'}\}$. By explicit calculation, the apparently independent Yang-Baxter equations reduce from $(32-4)=28$ to $3$, of the forms $$\begin{aligned}
\left\{ \begin{array}{l}
e^{''}C_{\alpha}-k^{''}C_{\beta}=0 \\
(a^{''}-b^{''})C_{\beta}-e^{''}C_{\delta}=0 \\
a^{''}C_{\xi}-b^{''}C_{\eta}=0 \end{array} \right. \label{e74}\end{aligned}$$ where $C_{\alpha}=ak^{'}+a^{'}k+be^{'}+b^{'}e,\
C_{\beta}=ae^{'}+a^{'}e+bk^{'}+b^{'}k,\
C_{\delta}=(a-b)(a^{'}-b^{'})+(e-k)(e^{'}-k^{'}),\
C_{\eta}=aa^{'}+bb^{'}+ee^{'}+kk^{'},\
C_{\xi}=ab^{'}+a^{'}b+ek^{'}+e^{'}k$. Then for given sets $\{a,\ b,\ e,\ k\},\ \{a^{'},\ b^{'},\ e^{'},\ k^{'}\}$, there always exists a non-trivial set $\{a^{''},\ b^{''},\ e^{''},\ k^{''}\}$, that is, the Yang-Baxter equations are always satisfied. In this case, Eqs. (\[e43\]), (\[e45\]), (\[e47\]), (\[e49\]) give $$\begin{aligned}
\left\{ \begin{array}{l}
(r_{i}^2-1)(r_{i+1}^2-1)=0, \\
(s_{i}^2-1)({s_{i}^{'}}^2-1)=0, \\
r_{i}^2+s_{i}^2-\Gamma_{9}({r_{i}}^2 {s_{i}}^2 +1)
-\Gamma_{10}r_{i}s_{i}
+\Gamma_{11}\left(r_{i}(1+s_{i}^2)-s_{i}(1+r_{i}^2)\right)
=0, \\
{s_{i}^{'}}^2+r_{i+1}^2-\Gamma_{13}({s_{i}^{'}}^2 {r_{i+1}}^2 +1)
-\Gamma_{14}s_{i}^{'}r_{i+1}
+\Gamma_{15}\left(s_{i}^{'}(1+r_{i+1}^2)
-r_{i+1}(1+{s_{i}^{'}}^2)\right) =0, \end{array} \right. \label{e80}\end{aligned}$$ where $$\begin{aligned}
\left\{ \begin{array}{l}
\Gamma_9=(b^2-e^2)/(a^2-k^2),
\Gamma_{10}=2(a^2+e^2-b^2-k^2)/(a^2-k^2),
\Gamma_{11}=2(bk-ae)/(a^2-k^2),\\
\Gamma_{13}=(b^2-k^2)/(a^2-e^2),
\Gamma_{14}=2(a^2+k^2-b^2-e^2)/(a^2-e^2),
\Gamma_{15}=2(be-ak)/(a^2-e^2). \end{array} \right.
\nonumber\end{aligned}$$ The solutions are the combinations of $r_{i}=s_{i}=\pm 1$ and $r_{i+1}=s_{i}^{'}=\pm 1$ (the signs are independent of each other). Then we obtain the following cases. [*ia) $r_i=s_i=r_{i+1}=s_{i}^{'}=\pm 1$ case*]{} Choosing $p_{i}=p_{i+1}$, the eigenfunctions at this site become $$g_{i}=
\left(\begin{array}{c}
g_{i}(+) \\ \pm g_{i}(+) \\ \end{array}\right)
,\quad
g_{i}^{'}
=(a+b \pm(e+k)) g_{i}
,\quad
g_{i}^{''}=0. \nonumber$$ [*ib) $r_i=s_i=-r_{i+1}=-s_{i}^{'}=\pm 1$ case*]{} Choosing $p_{i}=p_{i+1}$, the eigenfunctions become $$g_{i}=
\left(\begin{array}{c}
g_{i}(+) \\ \pm g_{i}(+) \\ \end{array}\right) ,\quad \nonumber \\
g_{i}^{'}
=(a-b \pm(e-k))
\left(\begin{array}{c}
g_{i}(+) \\ \mp g_{i}(+) \\ \end{array}\right) ,\quad
g_{i}^{''}=0. \label{e83}$$ When we mix these cases, it is necessary to take $r_{1}=r_{N+1}$ for periodicity, but otherwise we can mix [*ia)*]{} and [*ib)*]{} in such a way as (the number of times of mixing $r_i=-r_{i+1}=1$ cases)$=$ (the number of times of mixing $r_i=-r_{i+1}=-1$ cases). General eigenvalues of transfer matrices give $$\begin{aligned}
\Lambda=(a+b+e+k)^{m_{1}}(a+b-e-k)^{m_{2}}((a-b)^2-(e-k)^2)^{m_{3}}
(\pm 1)^{m_{3}}, \label{e85}\end{aligned}$$ with non-negative integers $m_1, m_2$ and $m_3$ satisfying $m_{1}+m_{2}+2m_{3}=N$, where $m_1=0$ or $m_{2}=0$ must hold in the case $m_3=0$. From the explicit expressions of the transfer matrices at small $N$, we obtain $\Lambda=a+b \pm (e+k)$ for $N=1$, and $\Lambda=(a+b+e+k)^2,\ (a+b-e-k)^2,\ \pm (a-b+e-k)(a-b-e+k)$ for $N=2$, which agree with the above formula.\
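The counting in Eq. (\[e85\]) is easy to check numerically. The following Python sketch (our own illustration; it enumerates the eigenvalues allowed by the formula without tracking their degeneracies) reproduces the $N=1$ and $N=2$ values quoted above:

```python
def spectrum(a, b, e, k, N):
    """Enumerate the eigenvalues of Eq. (e85):
    Lambda = (a+b+e+k)^m1 (a+b-e-k)^m2 ((a-b)^2-(e-k)^2)^m3 (+-1)^m3,
    with m1+m2+2*m3 = N and (m1 = 0 or m2 = 0) whenever m3 = 0."""
    lam1, lam2, lam3 = a + b + e + k, a + b - e - k, (a - b)**2 - (e - k)**2
    values = []
    for m3 in range(N // 2 + 1):
        for m1 in range(N - 2 * m3 + 1):
            m2 = N - 2 * m3 - m1
            if m3 == 0 and m1 != 0 and m2 != 0:
                continue  # forbidden mixing of lam1 and lam2 without lam3
            base = lam1**m1 * lam2**m2 * lam3**m3
            for sign in sorted({1, (-1)**m3}):
                values.append(sign * base)
    return values

a, b, e, k = 1.3, 0.8, 0.45, 0.25
# N = 1: Lambda = a+b +- (e+k)
got, want = sorted(spectrum(a, b, e, k, 1)), sorted([a+b-e-k, a+b+e+k])
assert all(abs(x - y) < 1e-9 for x, y in zip(got, want))
# N = 2: the four eigenvalues quoted in the text
want = sorted([(a+b+e+k)**2, (a+b-e-k)**2,
               (a-b+e-k)*(a-b-e+k), -(a-b+e-k)*(a-b-e+k)])
got = sorted(spectrum(a, b, e, k, 2))
assert len(got) == 4
assert all(abs(x - y) < 1e-9 for x, y in zip(got, want))
```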
[*ii) $a=d, b=c, e=l, k=h$ case*]{} In this case, the Yang-Baxter equations are not satisfied, but the transfer matrices commute. We first explain why the Yang-Baxter equations are not satisfied through the 8-vertex case, because the expressions become rather complicated in the 16-vertex case while the mechanism is the same. We thus consider the special 8-vertex case $a=d,\ b=c,\ e=h=k=l=0$, for which the explicit Yang-Baxter equations give $$\begin{aligned}
&& a b^{'} a^{''}+a a^{'} a^{''} =a b^{'} a^{''}+b a^{'} b{''}
\label{e86} \\
&& a b^{'} b^{''}+a a^{'} b^{''} =b a^{'} b^{''}+b b^{'} b{''}
\label{e87} \\
&&... \nonumber\end{aligned}$$ From Eqs. (\[e86\]) and (\[e87\]), we obtain $(a^{'}+b^{'})(a^{''}-b^{''})=0$; but $a^{'}+b^{'} \ne 0$ because the Boltzmann weights must be positive, and in general $a^{''} \ne b^{''}$, so the Yang-Baxter equations are not satisfied in general. The same mechanism occurs in the 16-vertex case: since in general $a+d \ne b+c$ and $e+l \ne k+h$, the Yang-Baxter equations are not satisfied there either. In this 16-vertex case, Eqs. (\[e43\]), (\[e45\]), (\[e47\]) and (\[e49\]) give $$\begin{aligned}
\left\{ \begin{array}{l}
(r_{i}^2-1)(r_{i+1}^2-1)=0, \\
({s_{i}^{'}}^2-1)(s_{i}^2-1)=0, \\
(r_{i}^2-1)(s_{i}^2-1)=0, \\
(r_{i+1}^2-1)({s_{i}^{'}}^2-1)=0 \end{array} \right. \label{e88}\end{aligned}$$ We then substitute the solutions of Eq. (\[e88\]) into the original pair propagation and conjugate pair propagation equations, and we obtain the following cases. [*iia) $r_{i}=r_{i+1}=-s_{i}=-s_{i}^{'}=\pm 1$ case*]{} Choosing $p_{i}= p_{i+1}$, the eigenfunctions at this site become $$\begin{aligned}
g_{i}=
\left(\begin{array}{c}
g_{i}(+) \\ \mp g_{i}(+) \\ \end{array}\right)
,\quad
g_{i}^{'}=0
,\quad
g_{i}^{''}=(a+b\mp (e+k)) g_{i} \label{e92}\end{aligned}$$ [*iib) $r_{i}=-r_{i+1}=s_{i}=-s_{i}^{'}=\pm 1$ case*]{} In this case, choosing $p_{i}= p_{i+1}$, the eigenfunctions at this site become $$\begin{aligned}
g_{i}=
\left(\begin{array}{c}
g_{i}(+) \\ \pm g_{i}(+) \\ \end{array}\right)
,\quad
g_{i}^{'}=0
,\quad
g_{i}^{''}=(-a+b\pm (e-k))
\left(\begin{array}{c}
g_{i}(+) \\ \mp g_{i}(+) \\ \end{array}\right) \label{e94}\end{aligned}$$ If we mix these cases, we obtain exactly the same formula, Eq. (\[e85\]). From the explicit expressions of the transfer matrices at small $N$, we obtain $\Lambda=a+b \pm (e+k)$ for $N=1$ and $\Lambda=(a+b+e+k)^2,\ (a+b-e-k)^2,\ \pm(a-b+e-k)(a-b-e+k)$ for $N=2$, which agree with this formula.

[**5. Summary and Discussion**]{}\
We have clarified the connection between the Yang-Baxter and the pair propagation equations in the 16-vertex models. In the 16-vertex models, we find an exactly solvable example, the case [*i) $a=c,b=d,e=h,k=l$*]{}, where the Yang-Baxter equations are satisfied. We find another exactly solvable example, the case [*ii) $a=d,b=c,e=l,k=h$*]{}. By explicit calculation, we find that the conditions $a+d=b+c,\ e+l=k+h$ are necessary for the Yang-Baxter equations to be satisfied in this case, but they do not hold in general; that is, the Yang-Baxter equations are not a necessary condition for solvability. Although the Yang-Baxter equations are not satisfied, we can show that the transfer matrices commute (the model is integrable) for any lattice size $N$, which will be discussed in a separate paper. These exactly solvable examples show that, in the 16-vertex models, the integrable cases which satisfy the Yang-Baxter equations form a rather limited subset of the exactly solvable cases. In this sense, the pair propagation equations are more fundamental: even if the Yang-Baxter equations are not satisfied, solvability of the pair propagation equations is sufficient for our purpose of finding the eigenvalues of the transfer matrices.\
[**Acknowledgement**]{}\
One of the authors (K.S.) is grateful to the Special Research Fund at Tezukayama Univ. for financial support.
[\[00\]]{}
M. Sato, RIMS Kokyuroku [**439**]{} (1981) 30.
E. Date, M. Kashiwara, M. Jimbo and T. Miwa, in "Non-linear Integrable Systems: Classical and Quantum Theory", ed. M. Jimbo and T. Miwa (World Scientific, 1983) p. 39.
V.G. Drinfeld, Soviet Math. Dokl. [**32**]{} (1985) 254; M. Jimbo, Lett. Math. Phys. [**10**]{} (1985) 63.
R.J. Baxter, "Exactly Solved Models in Statistical Mechanics" (Academic Press, London, 1982).
B.U. Felderhof, Physica [**65**]{} (1973) 421; [*ibid*]{} [**66**]{} (1973) 279 and [**66**]{} (1973) 509; Phys. Lett. [**A44**]{} (1973) 437.
M.P. Bellon, J.-M. Maillard and C.-M. Viallet, Phys. Lett. [**B281**]{} (1992) 315.
[^1]: E-mail address: ahn@benz.kotel.co.kr
[^2]: E-mail address: horibe@newton.apphy.fukui-u.ac.jp
[^3]: E-mail address: shigemot@tezukayama-u.ac.jp
---
abstract: 'We investigate charmonium production in Pb+Pb collisions at the LHC beam energy $E_{\text {lab}}$=2.76 A TeV in a fixed-target experiment ($\sqrt {s_{\text{NN}}}$=72 GeV). In the frame of a transport approach including cold and hot nuclear matter effects on the charmonium evolution, we focus on the anti-shadowing effect on the nuclear modification factors $R_{AA}$ and $r_{AA}$ for the $J/\psi$ yield and transverse momentum. The yield is more suppressed at less forward rapidity ($y_\text{lab}\simeq$2) than at very forward rapidity ($y_\text{lab}\simeq$4) due to the shadowing and anti-shadowing in different rapidity bins.'
address:
- '$^1$ Physics Department, Tsinghua University and Collaborative Innovation Center of Quantum Matter, Beijing 100084, China'
- '$^2$ Institute for Theoretical Physics, Johann Wolfgang Goethe-University Frankfurt, Max-von-Laue-Strasse 1, 60438 Frankfurt am Main, Germany'
author:
- 'Kai Zhou$^{1,2}$, Zhengyu Chen$^1$, Pengfei Zhuang$^1$'
title: 'Anti-shadowing Effect on Charmonium Production at a Fixed-target Experiment Using LHC Beams'
---
Introduction {#s1}
============
Recently a fixed-target experiment using the LHC beams has been proposed [@brodsky]; there, the study of quarkonia in nuclear collisions becomes especially important, because the wide parton distributions in phase space help to reveal the charmonium production mechanism [@lansberg]. Corresponding to the LHC beam energy $E_{\text {lab}}$=2.76 A TeV, where A is the nucleon number of the incident nucleus, the center-of-mass energy $\sqrt{s_{\text{NN}}}$=72 GeV lies between the SPS and RHIC energies, and a quark-gluon plasma is expected to be created in the early stage of heavy ion collisions. Taking into account the high luminosity of fixed-target experiments, which is helpful for the detailed study of rare particles, the $J/\psi$ yield in Pb+Pb collisions at $E_{\text {lab}}$=2.76 A TeV per LHC run year is about 100 times larger than the $J/\psi$ yield in Au+Au collisions at $\sqrt{s_{NN}}$=62.4 GeV per RHIC run year [@brodsky]. With such high statistics, one may precisely distinguish between different cold and hot nuclear matter effects on charmonium production [@andronic]. As is well known, the shadowing effect [@vogt; @eks], namely the difference between the parton distributions in a nucleus and in a free nucleon, depends strongly on the parton momentum fraction $x$. Since $x$ runs over a wide region, $0.001\lesssim x \lesssim 0.5$, in the fixed-target experiments, they provide a chance to see clearly the shadowing effect on the charmonium distributions in different rapidity bins. In this paper, we study the shadowing effect on the nuclear modification factors for the $J/\psi$ yield and transverse momentum in Pb+Pb collisions at the LHC beam energy $E_{\text{lab}}$=2.76 A TeV.
Evolution of Quark-gluon Plasma {#s2}
===============================
The medium created in heavy ion collisions at $\sqrt {s_{NN}}=72$ GeV is assumed to reach local equilibrium at a proper time $\tau_0$=0.6 fm/c [@shen]; its subsequent space-time evolution is governed by the ideal hydrodynamic equations, $$\begin{aligned}
\label{hydro}
&& \partial_\mu T^{\mu\nu}=0, \nonumber\\
&& \partial_\mu j^{\mu}=0,\end{aligned}$$ where $T_{\mu\nu}=(\epsilon+p)u_{\mu}u_{\nu}-g_{\mu\nu}p$ is the energy-momentum tensor, $j_{\mu}=nu_{\mu}$ the baryon current, and $u_{\mu}$, $\epsilon$, $p$ and $n$ are respectively the four-velocity of the fluid cell, the energy density, the pressure and the baryon density of the system. The solution of the hydrodynamic equations provides the local temperature $T(x)$, baryon chemical potential $\mu(x)$ and fluid velocity $u_\mu(x)$ of the medium, which will be used in the calculation of the charmonium suppression and regeneration rates [@tang]. Under the assumption of Hubble-like expansion and initial boost invariance along the colliding direction in high energy nuclear collisions, we can employ the well tested 2+1 dimensional version of the hydrodynamics to describe the evolution of the medium created at $\sqrt {s_{NN}}=72$ GeV. Introducing the proper time $\tau=\sqrt{t^2-z^2}$ and space-time rapidity $\eta=1/2\ln\left[(t+z)/(t-z)\right]$ instead of the time $t$ and the longitudinal coordinate $z$, the conservation equations can be simplified as [@zhu] $$\begin{aligned}
\label{hydro3}
&& \partial_{\tau}E+\nabla{\bf M} = -(E+p)/{\tau}, \nonumber\\
&& \partial_{\tau}M_x+\nabla(M_x{\bf v}) = -M_x/{\tau}-\partial_xp,\nonumber\\
&& \partial_{\tau}M_y+\nabla(M_y{\bf v}) = -M_y/{\tau}-\partial_yp,\nonumber\\
&& \partial_{\tau}R+\nabla(R{\bf v}) = -R/{\tau}\end{aligned}$$ with the definitions $E=(\epsilon+p)\gamma^2-p$, ${\bf M}=(\epsilon+p)\gamma^2 {\bf v}$ and $R=\gamma n$, where ${\bf v}$ and $\gamma$ are the three-velocity of the fluid cell and Lorentz factor in the transverse plane.
To close the hydrodynamic equations one needs the equation of state of the medium. Recent studies of particle elliptic flow and shear viscosity show that the matter created in heavy ion collisions at RHIC and LHC energies is very close to a perfect fluid [@song]. Considering that the momentum-integrated particle yield, especially for heavy quarkonia, is not sensitive to the equation of state, we follow Ref. [@sollfrank], where the deconfined phase at high temperature is an ideal gas of gluons, massless $u$ and $d$ quarks, and $s$ quarks with mass 150 MeV, and the hadron phase at low temperature is an ideal gas of all known hadrons and resonances with mass up to 2 GeV [@pdg]. There is a first order phase transition between the two phases, and in the mixed phase the Maxwell construction is used. The mean field repulsion parameter and the bag constant are chosen as $K$=450 MeV fm$^3$ and $B^{1/4}$=236 MeV to obtain the critical temperature $T_c=165$ MeV [@sollfrank] at vanishing baryon number density. Note that, when one calculates the rapidity or transverse momentum distribution of quarkonia, the choice of the equation of state may result in a sizeable difference.
The initialization of the hot medium follows the treatment in Ref. [@zhu]. We use the final charged multiplicity to determine the initial entropy density. For $\sqrt{s_{\text {NN}}}$=72 GeV, the charged multiplicity at central rapidity in the center-of-mass frame is estimated to be $dN_{\text {ch}}/d\eta=515$, based on the empirical formula [@kestin]: $$\frac{dN_{\text{ch}}}{d\eta}=312.5\log_{10}\sqrt{s_{\text{NN}}}-64.8.
\label{charged}$$ The initial baryon density is obtained by adjusting the entropy per baryon to be 250 [@kolb]. From the empirical relation $\sigma_{NN}= 29.797+0.141(\ln\sqrt {s_{NN}})^{2.624}$ [@hikasa] between the inelastic nucleon-nucleon cross section $\sigma_{NN}$ in units of mb and the colliding energy $\sqrt {s_{NN}}$ in units of GeV, we have $\sigma_{NN}=36$ mb at $\sqrt {s_{NN}}$=72 GeV. These initial conditions lead to a maximum medium temperature $T_0$=310 MeV at the initial time $\tau_0$=0.6 fm/c. The medium maintains local chemical and thermal equilibrium during the evolution. If we do not consider the charmonium interaction with the hadron gas, the charmonium distributions in the final state are fixed at the time $\tau_c$ corresponding to the critical temperature $T_c$ of the deconfinement phase transition.
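As a quick cross-check of the two empirical formulas above (our own numerical illustration, with $\sqrt{s_{\text{NN}}}$ in GeV and $\sigma_{NN}$ in mb):

```python
import math

def dNch_deta(sqrt_s):
    # Empirical charged multiplicity at central rapidity, Eq. (charged)
    return 312.5 * math.log10(sqrt_s) - 64.8

def sigma_NN(sqrt_s):
    # Empirical inelastic nucleon-nucleon cross section in mb
    return 29.797 + 0.141 * math.log(sqrt_s) ** 2.624

# Values quoted in the text for sqrt(s_NN) = 72 GeV
assert abs(dNch_deta(72.0) - 515) < 1.0
assert round(sigma_NN(72.0)) == 36
```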
Charmonium Transport in Quark-gluon Plasma {#s3}
==========================================
Since charmonium is heavy, its equilibration with the medium can hardly be reached; we therefore use a Boltzmann transport equation to describe its phase space distribution function $f_\Psi(x,{\bf p}|{\bf b})$ in heavy ion collisions at impact parameter ${\bf b}$, $$p^\mu \partial_\mu f_\Psi = - C_\Psi f_\Psi + D_\Psi,
\label{trans1}$$ where the loss and gain terms $C_\Psi(x,{\bf p}|{\bf b})$ and $D_\Psi(x,{\bf p}|{\bf b})$ come from the charmonium dissociation and regeneration in the created hot medium. We have neglected here the elastic scattering, since the charmonium mass is much larger than the typical medium temperature. Considering that the feed-down from the excited states $\psi'$ and $\chi_c$ to the ground state $J/\psi$ [@zoccoli] happens after the medium evolution, we should take transport equations for $\Psi=J/\psi,\ \psi'$ and $\chi_c$ when we calculate the $J/\psi$ distribution $f_{J/\psi}$ in the final state.
Introducing the momentum rapidity $y=1/2\ln\left[(E+p_z)/(E-p_z)\right]$ and transverse energy $E_t=\sqrt {E^2-p_z^2}$ to replace the longitudinal momentum $p_z$ and energy $E=\sqrt{m^2+{\bf
p}^2}$, the transport equation can be rewritten as $$\left[\cosh(y-\eta)\partial_\tau+{\sinh(y-\eta)\over \tau}\partial_
\eta+{\bf v}_t\cdot\nabla_t\right]f_\Psi=- \alpha_\Psi f_\Psi+\beta_\Psi
\label{trans2}$$ with the dissociation and regeneration rates $\alpha_\Psi(x,{\bf
p}|{\bf b}) = C_\Psi(x,{\bf p}|{\bf b})/E_t$ and $\beta_\Psi(x,{\bf p}|{\bf b}) = D_\Psi(x,{\bf p}|{\bf b})/E_t$, where the third term in the square bracket arises from the free streaming of $\Psi$ with transverse velocity ${\bf v}_t={\bf p}_t/E_t$ which leads to a strong leakage effect at SPS energy [@hufner].
Considering the gluon dissociation $\Psi + g \to c+\bar c$ in the quark-gluon plasma, the dissociation rate $\alpha$ can be expressed as $$\label{loss}
\alpha_\Psi=\frac{1}{2E_t}\int{d^3{\bf k}\over (2\pi)^3
2E_g}\sigma_{g\Psi}({\bf p},{\bf k},T)4F_{g\Psi}({\bf p},{\bf k})f_g({\bf k},T,u_\mu),$$ where $E_g$ is the gluon energy, $F_{g\Psi}=\sqrt{(p k)^2-m_\Psi^2m_g^2}=p k$ the flux factor, and $f_g$ the gluon thermal distribution as a function of the local temperature $T(x|{\bf b})$ and fluid velocity $u_\mu(x|{\bf b})$ determined by the hydrodynamics. The dissociation cross section in vacuum $\sigma_{g\Psi}({\bf p},{\bf k},0)$ can be derived through the operator product expansion (OPE) method with a perturbative Coulomb wave function [@bhanot; @arleo; @oh; @wang]. However, the method is no longer valid for loosely bound states at high temperature. To reasonably describe the temperature dependence of the cross section, we take the geometric relation between the averaged charmonium size and the cross section, $$\label{crosssection}
\sigma_{g\Psi}({\bf p},{\bf k},T)={\langle r^2\rangle_\Psi(T)\over \langle r^2\rangle_\Psi(0)}\sigma_{g\Psi}({\bf p},{\bf k},0).$$ The averaged radius squared $\langle r^2\rangle_\Psi(T)$ is calculated via the potential model [@satz] with the lattice-simulated heavy quark potential [@petreczky] at finite temperature. When $T$ approaches the charmonium dissociation temperature $T_d$, the averaged radius squared and in turn the cross section go to infinity, which means a complete charmonium melting induced by color screening [@matsui]. Using the internal energy $U$ as the heavy quark potential $V$, the dissociation temperature $T_d$ is calculated to be $2.1T_c$, $1.16T_c$ and $1.12T_c$ for $J/\psi$, $\chi_c$ and $\psi'$, respectively [@satz].
The regeneration rate $\beta$ is connected to the dissociation rate $\alpha$ via the detailed balance between the gluon dissociation process and its inverse process [@thews; @yan]. To obtain the regeneration rate, we also need the charm quark distribution function in the medium. Although the initially produced charm quarks carry high transverse momentum, they lose energy (momentum) when passing through the medium. Considering the experimentally observed large open charm quenching factor [@star1; @star2; @alice1] and elliptic flow [@phenix; @alice3], we take as a first approximation a kinetically thermalized momentum spectrum for the charm quark distribution $f_c(x,{\bf q}|{\bf b})$. Neglecting the creation and annihilation of charm-anticharm pairs inside the medium, the spatial density of charm quark number $\rho_c(x|{\bf b})=\int d^3{\bf q}/(2\pi)^3f_c(x,{\bf q}|{\bf b})$ satisfies the conservation law $$\partial_\mu\left(\rho_c u^\mu\right)=0$$ with the initial density determined by the nuclear geometry $\rho_c(x_0|{\bf b})=T_A({\bf x}_t)T_B({\bf x}_t-{\bf b})\cosh\eta/\tau_0 d\sigma^\text{NN}_{c\bar c}/ d\eta$, where $T_{A,B}({\bf x}_t)=\int_{-\infty}^{+\infty}\rho_{A,B}(\vec{r}) dz$ are the thickness functions, and $d\sigma^\text{NN}_{c\bar c}/d\eta$ is the charm quark rapidity distribution in p+p collisions.
For the regeneration rate $\beta$, we also consider the canonical effect, which has been shown to be important in explaining the suppression of strange mesons [@ko]. When only a few pairs, or even less than one pair, of charm quarks are produced in an event, one needs to consider the canonical effect to guarantee exact charm number conservation. Taking into account the fact that the charm and anti-charm quarks of a pair are produced at the same rapidity, we simply multiply the regeneration rate $\beta$ in a unit rapidity bin by a canonical enhancement factor [@liu] $$\label{canonical}
C_{c\bar c}=1+1/(dN_{c\bar c}/dy).$$
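As a simple numerical illustration of Eq. (\[canonical\]) (our own example values): the enhancement is sizeable only when fewer than about one charm pair is produced per unit rapidity, and it approaches unity for abundant charm production.

```python
def canonical_enhancement(dN_ccbar_dy):
    # Eq. (canonical): C = 1 + 1/(dN_ccbar/dy)
    return 1.0 + 1.0 / dN_ccbar_dy

# Rare charm production: a large enhancement of the regeneration rate
assert canonical_enhancement(0.1) == 11.0
# Abundant charm production: the factor is close to one
assert abs(canonical_enhancement(10.0) - 1.1) < 1e-12
```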
To take into account the relativistic effect on the dissociation cross section and to avoid the divergence in the regeneration cross section, we replace the charmonium binding energy by the gluon threshold energy in the calculations of $\alpha$ and $\beta$ [@polleri].
In the hadron phase of the fireball, with temperature $T<T_c$, there are many effective models that can be used to calculate the inelastic cross sections between charmonia and hadrons [@barnes]. For $J/\psi$ the dissociation cross section is a few mb, which is comparable with the gluon dissociation cross section. However, since the hadron phase appears in the later evolution of the fireball, the constituent densities in the system are much lower than in the early hot and dense period [@tang]. Taking, for instance, the regeneration processes $c+\bar c \to g+J/\psi$ in quark matter and $D+\bar D^*\to \pi +J/\psi$ in hadron matter, the density ratio between charm quarks at the initial temperature $T_0=310$ MeV and $D$ mesons at the critical temperature $T_c=165$ MeV is around $30$. Considering further the lifetime of the quark matter ($\sim 6$ fm/c) and the lifetime of the hadron matter ($\sim 2$ fm/c) calculated from the hydrodynamics in Section \[s2\], we neglect the charmonium production and suppression in the hadron gas, to simplify the numerical calculations. Note that the suppression and regeneration in the hadron gas may become important for excited charmonium states [@du].
The transport equation can be solved analytically with the explicit solution [@tang; @liu4] $$\begin{aligned}
\label{solution}
f_\Psi\left({\bf p}_t,y,{\bf
x}_t,\eta,\tau\right)&=&f_\Psi\left({\bf p}_t,y,{\bf
X}_t(\tau_0),H(\tau_0),\tau_0\right)\nonumber\\
&\times& e^{-\int^{\tau}_{\tau_0}{d\tau'\over \Delta(\tau')}
\alpha_\Psi\left({\bf p}_t,y,{\bf X}_t(\tau'),H(\tau'),\tau'\right)}\nonumber\\
&+&\int^{\tau}_{\tau_0}{d\tau'\over \Delta(\tau')} \beta_\Psi\left({\bf
p}_t,y,{\bf
X}_t(\tau'),H(\tau'),\tau'\right)\nonumber\\
&\times& e^{-\int^{\tau}_{\tau'}{d\tau''\over \Delta(\tau'')}\alpha_\Psi\left({\bf
p}_t,y,{\bf
X}_t(\tau''),H(\tau''),\tau''\right)}\end{aligned}$$ with $$\begin{aligned}
\label{xh}
&& {\bf X}_t(\tau')={\bf x}_t-{\bf
v}_T\left[\tau\cosh(y-\eta)
-\tau'\Delta(\tau')\right],\nonumber\\
&& H(\tau')=y-\arcsin\left(\tau/\tau' \sinh(y-\eta)\right),\nonumber\\
&&
\Delta(\tau')=\sqrt{1+(\tau/\tau')^2 \sinh^2(y-\eta)}.\end{aligned}$$ The first and second terms on the right-hand side of the solution (\[solution\]) represent the contributions from the initial production and the continuous regeneration, respectively; both suffer gluon dissociation in the medium. Since the regeneration happens in the deconfined phase, the regenerated quarkonia may be dissociated again by the surrounding gluons. The coordinate shifts ${\bf x}_t \to {\bf X}_t$ and $\eta \to H$ in the solution (\[solution\]) reflect the leakage effect in the transverse and longitudinal directions.
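A minimal numerical sketch of the survival factor in the solution above. The dissociation rate used here, $\alpha(\tau)=\alpha_0\tau_0/\tau$, is a purely illustrative toy (not the actual gluon-dissociation rate), chosen so the proper-time integral has a closed form against which the quadrature can be checked:

```python
import numpy as np

# Toy dissociation rate diluting with proper time: alpha(tau) = alpha0*tau0/tau.
# alpha0 and the time interval are illustrative numbers only.
alpha0, tau0, tauf = 0.5, 0.6, 6.0           # 1/(fm/c), fm/c, fm/c

tau = np.linspace(tau0, tauf, 2001)
f = alpha0 * tau0 / tau
integral = np.sum((f[:-1] + f[1:]) * np.diff(tau)) / 2.0   # trapezoid rule

survival_numeric = np.exp(-integral)
survival_exact = np.exp(-alpha0 * tau0 * np.log(tauf / tau0))   # closed form
print(survival_numeric, survival_exact)      # both ~0.5
```

In the full calculation the exponent also carries the kernel $1/\Delta(\tau')$ and the shifted coordinates, but the structure of the evaluation is the same.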
For fixed-target nuclear collisions at $E_\text{lab}$=2.76 A TeV, the collision time for the two Pb nuclei to pass through each other in the center-of-mass frame is $2R_{\text{Pb}}m_\text{N}/(\sqrt{s_\text{NN}}/2)\sim 0.35$ fm/c, which is comparable to the charmonium formation time but shorter than the QGP formation time $\tau_0=0.6$ fm/c. Therefore, all the cold nuclear matter effects can be absorbed into the initial charmonium distribution $f_\Psi$ at time $\tau_0$. We take into account nuclear absorption, nuclear shadowing and the Cronin effect. The initial distribution in the solution (\[solution\]) can be obtained from a superposition of p+p collisions, modified by these cold nuclear matter effects.
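The passing-time estimate can be reproduced in a few lines (the Pb radius below is an assumed round value):

```python
import math

# Kinematics check for fixed-target Pb+Pb at E_lab = 2.76 A TeV.
m_N = 0.938         # nucleon mass, GeV
E_lab = 2760.0      # beam energy per nucleon, GeV
R_Pb = 6.6          # Pb radius, fm (approximate)

# s_NN = 2 m_N E_lab + 2 m_N^2 for a fixed target
sqrt_s = math.sqrt(2.0 * m_N * E_lab + 2.0 * m_N**2)   # ~72 GeV

# Passing time of the two Lorentz-contracted nuclei in the c.m. frame
t_pass = 2.0 * R_Pb * m_N / (sqrt_s / 2.0)             # ~0.35 fm/c
print(sqrt_s, t_pass)
```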
The nuclear absorption is important in explaining the $J/\psi$ suppression in p+A and A+A collisions at low energies. It is due to the inelastic collision between the initially produced charmonia and the surrounding nucleons, and its effect on the charmonium surviving probability can be described by an effective absorption cross section $\sigma_\text{abs}$. The value of $\sigma_\text{abs}$ is usually measured in p+A collisions and is several mb at SPS energies. Since the nuclear absorption becomes weaker at higher colliding energies due to the shorter collision time [@capella; @lourenco], we take $\sigma_\text{abs}$=2 mb at $E_\text{lab}$=2.76 A TeV [@lourenco] and the nuclear absorption factor $$\label{abs}
S_\text{abs}=e^{-\sigma_\text{abs}\left(\int^{\infty}_{z_A}\rho(z,{\bf x_t}) dz + \int^{z_B}_{-\infty}\rho(z,{\bf x_t-b}) dz\right)}.$$
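A rough slab estimate (constant density and an assumed path length, instead of the Woods-Saxon path integrals above) gives the typical size of the absorption factor:

```python
import math

# Slab estimate of the absorption factor: S_abs ~ exp(-sigma_abs * rho0 * L)
# for a constant-density path of length L. rho0 and L are illustrative;
# the text integrates over realistic nuclear density profiles.
sigma_abs = 2.0 * 0.1     # 2 mb expressed in fm^2 (1 mb = 0.1 fm^2)
rho0 = 0.17               # normal nuclear density, fm^-3
L = 6.0                   # combined path length through the two nuclei, fm

S_abs = math.exp(-sigma_abs * rho0 * L)
print(S_abs)              # ~0.8, i.e. roughly a 20% absorption
```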
The Cronin effect broadens the momentum distribution of the initially produced charmonia in heavy ion collisions [@tang]. In p+A and A+A collisions, the incoming partons (both gluons and quarks) experience multiple scatterings with surrounding nucleons via soft gluon exchanges. The initial scatterings lead to an additional transverse momentum broadening of partons which is then inherited by produced hadrons [@esumi]. Since the Cronin effect is caused by soft interactions, rigorous calculations of the effect are not available, and it is often treated as a random motion. Inspired by a random-walk picture, we take a Gaussian smearing [@zhao; @liu2] for the modified transverse momentum distribution $$\label{cronin}
\overline
f^\text{NN}_\Psi({\bf x},{\bf p},z_A,z_B|{\bf b})={1\over \pi a_{gN} l} \int
d^2{\bf p}_t' e^{-{\bf p}_t^{'2}\over a_{gN} l}f^\text{NN}_\Psi(|{\bf
p}_t-{\bf p}_t'|,p_z)S_\text{abs},$$ where $$l({\bf x},z_A,z_B|{\bf b})=\frac{1}{\rho} \left(\int_{-\infty}^{z_A}\rho(z,{\bf x_t}) dz + \int_{z_B}^{+\infty}\rho(z,{\bf x_t-b}) dz\right)
\label{ll}$$ is the path length traversed in the two nuclei by the initial gluons before they fuse into a charmonium at ${\bf x}$, $z_A$ and $z_B$ being the longitudinal coordinates, $a_{gN}$ is the average transverse momentum squared gained by the charmonium per unit path length in nuclear matter, and $f^\text{NN}_\Psi({\bf p})$ is the charmonium momentum distribution in a free p+p collision. The Cronin parameter $a_{gN}$ is usually extracted from corresponding p+A collisions. In the absence of p+A data at $\sqrt{s_\text{NN}}$= 72 GeV, we take $a_{gN}$=0.085 (GeV/c)$^2$/fm from empirical estimates [@thews; @wang2; @vogt]. For comparison, at SPS ($\sqrt{s_\text{NN}} \sim 20$ GeV) and RHIC ($\sqrt{s_\text{NN}}=200$ GeV) one takes $a_{gN}=0.075$ [@zhu] and 0.1 [@liu3] (GeV/c)$^2$/fm, respectively.
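The random-walk picture behind the Gaussian smearing can be checked with a quick Monte Carlo (the path length $l$ here is an assumed illustrative value): adding 2D Gaussian kicks with $\langle k^2\rangle=a_{gN}\,l$ shifts $\langle p_t^2\rangle$ by exactly $a_{gN}\,l$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random-walk sketch of the Cronin effect: each produced charmonium receives
# a 2D transverse kick with <kick^2> = a_gN * l.  The path length l and the
# p+p spectrum are illustrative choices.
a_gN = 0.085                 # (GeV/c)^2 per fm
l = 4.0                      # assumed gluon path length, fm
pt2_pp = 2.7                 # <pt^2> in p+p collisions, (GeV/c)^2

n = 200_000
p = rng.normal(0.0, np.sqrt(pt2_pp / 2.0), size=(n, 2))       # p+p sample
kick = rng.normal(0.0, np.sqrt(a_gN * l / 2.0), size=(n, 2))  # Cronin kicks

pt2_AA = np.mean(np.sum((p + kick) ** 2, axis=1))
print(pt2_AA, pt2_pp + a_gN * l)   # broadening adds a_gN*l to <pt^2>
```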
Assuming that the emitted gluon in the gluon fusion process $g+g\to
\Psi+g$ is soft in comparison with the initial gluons and the produced charmonium and can be neglected in kinematics, the charmonium production becomes a $2\to 1$ process approximately, and the longitudinal momentum fractions of the two initial gluons are calculated from the momentum conservation, $$\label{x}
x_{1,2}={\sqrt{m_\Psi^2+p_t^2}\over \sqrt{s_\text{NN}}} e^{\pm y}.$$ The free distribution $f_\Psi^\text{NN}({\bf p})$ can be obtained by integrating the elementary partonic process, $$\label{fg}
{d\sigma_\Psi^\text{NN}\over dp_tdy}= \int dy_g x_1 x_2 f_g(x_1,\mu_F)
f_g(x_2,\mu_F) {d\sigma_{gg\to\Psi g}\over d\hat t},$$ where $f_g(x,\mu_F)$ is the gluon distribution in a free proton, $y_g$ the emitted gluon rapidity, $d\sigma_{gg\to\Psi g}/ d\hat t$ the charmonium momentum distribution produced from a gluon fusion process, and $\mu_F$ the factorization scale of the fusion process.
Now we consider the shadowing effect. The distribution function $\overline
f_i(x,\mu_F)$ for parton $i$ in a nucleus differs from a superposition of the distributions $f_i(x,\mu_F)$ in free nucleons. The nuclear shadowing can be described by the modification factor $R_i=\overline f_i/(Af_i)$. To account for the spatial dependence of the shadowing in a finite nucleus, one assumes that the inhomogeneous shadowing is proportional to the parton path length through the nucleus [@klein], which amounts to considering the coherent interaction of the incident parton with all the target partons along its path. Therefore, we replace the homogeneous modification factor $R_i(x,\mu_F)$ by an inhomogeneous one [@vogt2] $${\cal R}_i(x,\mu_F,{\bf x}_t)=1+A\left(R_i(x,\mu_F)-1\right)T_A({\bf x}_t)/T_{AB}(0)$$ with the definition $T_{AB}({\bf b})=\int d^2{\bf x}_t T_A({\bf x}_t) T_B({\bf x}_t-{\bf b})$. In the following we employ the EKS98 package [@eks] to evaluate the homogeneous ratio $R_i$, with the factorization scale taken as $\mu_F=\sqrt{m_\Psi^2+p_t^2}$.
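A sketch of the inhomogeneous factor with a hard-sphere thickness function (the homogeneous ratio $R_\text{hom}=1.2$ and the sharp-sphere profile are simplifying assumptions; the text uses EKS98 ratios and realistic nuclear densities):

```python
import numpy as np

# Illustrative evaluation of R_cal(x_t) = 1 + A*(R_hom - 1)*T_A(x_t)/T_AB(0)
# with a hard-sphere thickness; R_hom = 1.2 is an assumed anti-shadowing ratio.
A, R_Pb, rho0 = 208, 6.6, 0.17

def T_A(b):
    """Hard-sphere nuclear thickness function, fm^-2."""
    b = np.asarray(b, dtype=float)
    return 2.0 * rho0 * np.sqrt(np.clip(R_Pb**2 - b**2, 0.0, None))

# T_AB(0) = \int d^2 x_t T_A(x_t) T_B(x_t) for identical nuclei at b = 0
r = np.linspace(0.0, R_Pb, 4001)
dr = r[1] - r[0]
T_AB0 = float(np.sum(2.0 * np.pi * r * T_A(r) ** 2) * dr)

R_hom = 1.2
def R_cal(b):
    return 1.0 + A * (R_hom - 1.0) * T_A(b) / T_AB0

print(R_cal(0.0), R_cal(6.0))   # modification is strongest at the nuclear center
```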
Replacing the free distribution $f_g$ in (\[fg\]) by the modified distribution $\overline f_g=Af_g{\cal R}_g$ and then taking into account the Cronin effect (\[cronin\]), we finally get the initial charmonium distribution for the solution (\[solution\]), $$\begin{aligned}
\label{initial}
f_\Psi(x_0,{\bf p}|{\bf b})&=&{(2\pi)^3\over E_t\tau_0}\int dz_Adz_B\rho_A({\bf x}_t,z_A)\rho_B({\bf x}_t,z_B)\nonumber\\
&\times&{\cal R}_g(x_1,\mu_F,{\bf x}_t){\cal R}_g(x_2,\mu_F,{\bf x}_t-{\bf b})\nonumber\\
&\times& \overline f_\Psi^\text{NN}({\bf x},{\bf p},z_A,z_B|{\bf b})S_\text{abs}.\end{aligned}$$ The only remaining input is the distribution $f_\Psi^\text{NN}$ in a free p+p collision, which can be fixed by experimental data or model simulations.
Numerical Results {#s4}
=================
The beam energy $E_\text{lab}$= 2.76 A TeV in fixed-target experiments corresponds to a colliding energy $\sqrt{s_\text{NN}}$=72 GeV, and the rapidity in the center-of-mass frame is boosted in the laboratory frame by a rapidity shift $\Delta y=\tanh^{-1}\beta_\text{cms}= 4.3$. Let us first focus on the central rapidity region around $y_\text{cms}=$ 0 in the center-of-mass frame, which corresponds to $y_\text{lab}=$ 4.3 in the laboratory frame. The centrality and momentum dependent anti-shadowing for initially produced charmonia is encoded in the inhomogeneous modification factor ${\cal R}_g$ for gluons. The longitudinal momentum fractions are $x_{1,2}=\sqrt{m^2_{\Psi}+p^2_t}/\sqrt{s_\text{NN}}\sim 0.05$ for the two gluons, which lies in the strong anti-shadowing region [@dias] of parametrizations of nuclear parton distributions such as EKS98 [@eks], EPS08 [@eps08] and EPS09 [@eps09]. The anti-shadowing changes not only the gluon distribution but also the charm quark production cross section used in the regeneration. For the process $g+g\to c+\bar c$, the anti-shadowing for gluons leads to an enhancement factor $\sim ({\cal R}_g)^2$ for the cross section. Since in peripheral collisions the regeneration is weak and its contribution is not strongly affected by the anti-shadowing, we take a centrality-averaged anti-shadowing factor for the cross section to simplify the numerical calculation of the regeneration. Estimated from the EKS98 evolution [@eks], this gives a $20\%$ enhancement of the charm quark production cross section compared to free p+p collisions. From the FONLL calculation [@fonll], the upper limit for $d\sigma_{c\bar c}^\text{NN}/dy$ is 0.047 mb at $\sqrt{s_\text{NN}}$=62.4 GeV. Since the experimental data for the charm quark cross section in free p+p collisions are close to the upper limit of the perturbative calculation, we take $d\sigma_{c\bar c}^\text{NN}/dy=0.05$ mb at $\sqrt{s_\text{NN}}$=72 GeV.
After taking into account the anti-shadowing effect in A+A collisions, it becomes 0.06 mb. For p+p collisions, we assume a constant hidden-to-open-charm ratio $(d\sigma_{\Psi}/dy)/(d\sigma_{c\bar c}/dy)$ at any colliding energy. From the ratio extracted from the RHIC data [@adare], we obtain $d\sigma_{J/\psi}/dy$=0.35 $\mu$b at $\sqrt{s_\text{NN}}$=72 GeV. The transverse momentum distribution for $J/\psi$ in free p+p collisions can be simulated with PYTHIA [@pythia], and the mean transverse momentum square is $\langle p_t^2\rangle_\text{pp}=2.7$ (GeV/c)$^2$.
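The beam-energy conversion, rapidity shift and momentum fractions used above can be verified directly (the $p_t$ inserted for the momentum fraction is an assumed typical value):

```python
import math

# Fixed-target kinematics check for E_lab = 2.76 A TeV.
m_N, E_lab = 0.938, 2760.0                    # GeV

# s_NN = 2 m_N E_lab + 2 m_N^2 for a fixed target
sqrt_s = math.sqrt(2.0 * m_N * E_lab + 2.0 * m_N**2)      # ~72 GeV

# rapidity shift of the center-of-mass frame in the laboratory
p_lab = math.sqrt(E_lab**2 - m_N**2)
dy = math.atanh(p_lab / (E_lab + m_N))                    # ~4.3

# gluon momentum fractions at y_cms = 0 for a J/psi with a typical pt
m_psi, pt = 3.1, 1.6                          # GeV, GeV/c (pt assumed)
x = math.sqrt(m_psi**2 + pt**2) / sqrt_s      # ~0.05: anti-shadowing region
print(round(sqrt_s, 1), round(dy, 2), round(x, 3))
```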
![(Color online) The centrality dependence of the $J/\psi$ nuclear modification factor $R_{AA}$ at very forward rapidity $y_\text{lab}=4.3$ ($y_\text{cms}$=0) in Pb+Pb collisions at LHC beam energy $E_\text{lab}$=2.76 A TeV. The hatched band is the model result with the upper and lower borders corresponding to the calculations with and without anti-shadowing effect. The RHIC data [@rhic2] are for Au+Au collisions at $y_\text{cms}$=0. []{data-label="fig1"}](fig1){width="45.00000%"}
Fig.\[fig1\] shows the calculated centrality dependence of the $J/\psi$ nuclear modification factor $R_{AA}=N_\Psi^{AA}/\left(N_\text{coll}N_\Psi^{pp}\right)$ in Pb+Pb collisions at LHC beam energy $E_\text{lab}$=2.76 A TeV in the laboratory frame ($\sqrt{s_\text{NN}}$=72 GeV in the center-of-mass frame) at forward rapidity $y_\text{lab}=4.3$ (central rapidity $y_\text{cms}$=0), where $N_\Psi^{pp}$ and $N_\Psi^{AA}$ are the charmonium yields in p+p and A+A collisions, and $N_\text{coll}$ and $N_\text{part}$ are the numbers of binary collisions and participants. For comparison, we also show the RHIC data at $\sqrt {s_\text{NN}}=62.4$ GeV [@rhic2] at central rapidity. Since the degree of shadowing/anti-shadowing is still an open question and depends strongly on the model used, we show in Fig.\[fig1\] two calculations for the total $J/\psi$ $R_{AA}$ in Pb+Pb collisions at $\sqrt{s_\text{NN}}$=72 GeV, one with the anti-shadowing discussed above and the other without; the hatched band reflects this uncertainty. With increasing collision centrality, the initial contribution drops while the regeneration grows. The canonical effect is important in peripheral collisions, where the number of charm quark pairs is less than one and its inclusion sizeably enhances the charmonium yield. In the most central collisions, the regeneration contributes about $25\%$ of the total charmonium yield. The anti-shadowing at very forward rapidity in the laboratory frame (central rapidity in the center-of-mass frame) enhances the charm quark cross section, and in turn the initial charmonium yield, by a factor of 1.2. Since the regeneration scales with the square of the charm quark cross section, the corresponding enhancement factor for the regenerated charmonium number is $1.2^2=1.44$, which leads to a strong charmonium enhancement. Without the anti-shadowing effect on the regeneration and the initial production, the total $R_{AA}$ is significantly reduced.
![(Color online) The centrality dependence of the $J/\psi$ nuclear modification factor $r_{AA}$ at forward rapidity $y_\text{lab}$=4.3 $(y_\text{cms}$=0) in Pb+Pb collisions at LHC beam energy $E_\text{lab}$=2.76 A TeV. The upper and lower borders of the band correspond to the calculations with and without anti-shadowing effect. []{data-label="fig2"}](fig2){width="45.00000%"}
To see the charmonium production mechanism more clearly, we turn to the transverse momentum information. In Fig.\[fig2\] we show the $J/\psi$ nuclear modification factor [@zhou] $$r_{AA}={\langle p_t^2\rangle_{AA}\over \langle p_t^2\rangle_{pp}}$$ in Pb+Pb collisions at beam energy $E_{\text{lab}}$=2.76 A TeV, where $\langle p_t^2\rangle_{AA}$ and $\langle p_t^2\rangle_{pp}$ are the average $J/\psi$ transverse momentum squared in Pb+Pb and p+p collisions at very forward rapidity $y_\text{lab}$=4.3. If we neglect the contribution from the regeneration and consider only the initial production, the ratio $r_{AA}$ increases monotonically with centrality due to the Cronin and leakage effects [@zhou]. The inclusion of regeneration (upper border of the band) remarkably reduces the average transverse momentum, because the regenerated charmonia possess a soft momentum distribution induced by the charm quark energy loss. Since the degree of regeneration increases with centrality, the growing soft component leads to a decreasing $r_{AA}$ in the most central collisions. The canonical effect reduces $r_{AA}$ further, since it enhances the regeneration especially in peripheral collisions. We should note, however, that the assumption of charm quark thermalization implies full energy loss, which may not be reached in peripheral and semi-central collisions at beam energy $E_\text{lab}$=2.76 A TeV. When we switch off the anti-shadowing (lower border of the band), both the hard component controlled by the initial production and the soft component dominated by the regeneration are reduced. Since the enhancement factor resulting from the anti-shadowing is $1.2$ for the initial production but $1.2^2$ for the regeneration, the stronger anti-shadowing in the soft component leads to only a slight difference between the calculations with and without anti-shadowing, as shown in Fig.\[fig2\].
It is obvious that compared to the nuclear modification factor $R_{AA}$ for the yield, the modification factor $r_{AA}$ for the transverse momentum is less sensitive to the shadowing effect [@zhou].
![(Color online) The centrality dependence of the double ratios $R_{AA}^{y_\text{lab}=4.3}/R_{AA}^{y_\text{lab}=2.3}$ and $r_{AA}^{y_\text{lab}=4.3}/r_{AA}^{y_\text{lab}=2.3}$ for $J/\psi$ yield and transverse momentum in Pb+Pb collisions at LHC beam energy $E_\text{lab}$=2.76 A TeV. The upper and lower borders of the two bands correspond to the calculations with and without shadowing and anti-shadowing effects. []{data-label="fig3"}](fig3){width="45.00000%"}
From the simulations of parton distributions in cold nuclear matter [@eks; @eps08; @eps09], the nuclear shadowing region is located at very small $x$. In the following we consider the shadowing and contrast it with the anti-shadowing in the $J/\psi$ $R_{AA}$ and $r_{AA}$ in fixed-target Pb+Pb collisions. The maximum $J/\psi$ rapidity in the center-of-mass frame is $y_\text{cms}^\text{max}=\cosh^{-1}\left[\sqrt{s_\text{NN}}/\left(2m_{J/\psi}\right)\right]$=3.13 at $\sqrt{s_\text{NN}}$=72 GeV. Considering the expected number of measured events, we focus on the backward rapidity region around $y_\text{cms}=-2$, which corresponds to the less forward rapidity $y_\text{lab}=\Delta y+y_\text{cms}=4.3-2=2.3$ in the laboratory frame. From the kinematics, the momentum fractions of the two gluons involved in the fusion process are $x_1=(\sqrt{m_\Psi^2+p_t^2}/\sqrt{s_\text{NN}})e^2 = 0.35$ and $x_2=(\sqrt{m_\Psi^2+p_t^2}/\sqrt{s_\text{NN}})e^{-2} = 0.006$. One lies in the EMC region and the other in the shadowing region [@eks; @eps08; @eps09], leading to a reduction of $15\%$ in the charm quark production cross section from the EKS98 evolution [@eks] ($20\%$ from the EPS09 NLO evolution [@eps09]). Taking the same ratio of the charm quark cross sections between $y_\text{cms}=-2$ and $y_\text{cms}=0$ as calculated from FONLL [@fonll] and including the $15\%$ shadowing reduction, we obtain $d\sigma_{c\bar c}^\text{NN}/dy$=0.01 mb at $y_\text{cms}$=-2. For the medium evolution in this backward rapidity region, we initialize the entropy density to half of that at central rapidity [@shen; @na50], which leads to a maximum temperature of $T_0$=245 MeV. Fig.\[fig3\] shows the two double ratios $R_{AA}^{y_\text{lab}=4.3}/R_{AA}^{y_\text{lab}=2.3}$ and $r_{AA}^{y_\text{lab}=4.3}/r_{AA}^{y_\text{lab}=2.3}$ of $J/\psi$; the upper and lower borders of the two bands correspond to the calculations with and without the nuclear shadowing and anti-shadowing.
While the double ratio for the transverse momentum is not sensitive to the shadowing and anti-shadowing, as discussed above, the strong anti-shadowing at $y_\text{lab}=4.3$ and shadowing at $y_\text{lab}=2.3$ lead to a strong enhancement of the double ratio for the yield. Without the shadowing and anti-shadowing, the stronger charmonium suppression in the hotter medium at $y_\text{lab}=4.3$ ($T_0$=310 MeV), compared with the weaker suppression in the relatively colder medium at $y_\text{lab}=2.3$ ($T_0$=245 MeV), makes the double ratio less than unity. However, including the yield enhancement due to the anti-shadowing at $y_\text{lab}=4.3$ and the yield suppression due to the shadowing at $y_\text{lab}=2.3$ changes the behavior of the double ratio significantly: it becomes larger than unity and can reach 1.3 in the most central collisions. Note that a rapidity dependent shadowing effect was used to qualitatively interpret the stronger suppression at forward rapidity than at midrapidity in Au+Au collisions at RHIC [@frawley; @ferreiro].
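The kinematic numbers quoted in this discussion can be cross-checked quickly (here $p_t$ is neglected for simplicity, so $x_1$ comes out slightly below the quoted 0.35):

```python
import math

# Kinematic cross-checks for the backward-rapidity region.
m_psi, sqrt_s = 3.1, 72.0                    # GeV

y_max = math.acosh(sqrt_s / (2.0 * m_psi))   # maximal J/psi c.m. rapidity, ~3.1

y = 2.0                                       # |y_cms| = 2, pt neglected
x1 = m_psi / sqrt_s * math.exp(y)             # ~0.32: EMC region
x2 = m_psi / sqrt_s * math.exp(-y)            # ~0.006: shadowing region
print(round(y_max, 2), round(x1, 3), round(x2, 4))
```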
Summary {#s5}
=======
We investigated charmonium production in fixed-target Pb+Pb collisions at LHC beam energy $E_\text{lab}$=2.76 A TeV with a transport approach. We focused on the rapidity dependent shadowing effect on the nuclear modification factors for the charmonium yield and transverse momentum. While the average transverse momentum is not sensitive to the shadowing effect, the anti-shadowing leads to a strong yield enhancement at very forward rapidity $y_\text{lab}\simeq$ 4, and the shadowing results in a strong yield suppression at less forward rapidity $y_\text{lab}\simeq$ 2. The double ratio between the nuclear modification factors $R_{AA}$ in the two rapidity regions amplifies the shadowing effect; it is larger than unity and can reach 1.3 in the most central collisions.
From model studies of the gluon distribution in nuclei, see for instance Refs. [@eks; @dias; @eps08; @eps09], there are large uncertainties in the domain of large $x\ (>0.1)$, probably due to the poorly known EMC effect. From our calculation, the double ratio of the nuclear modification factor for the $J/\psi$ yield is very sensitive to the gluon shadowing in the different $x$ regions. A precise measurement of this ratio may thus provide a sensitive probe of the gluon distribution.\
\
[**Acknowledgement:**]{} The work is supported by the NSFC under grant No.11335005 and the MOST under grant Nos.2013CB922000 and 2014CB845400.
[99]{} S.J.Brodsky, F.Fleuret, C.Hadjidakis and J.P.Lansberg, Phys. Rept. [**522**]{}, 239(2013). J.P.Lansberg, S.J.Brodsky, F.Fleuret and C.Hadjidakis, Few Body Syst. [**53**]{}, 11(2012). A.Andronic [*et al.*]{}, arXiv:1506.03981. R.Vogt, Int. J. Mod. Phys. [**E12**]{}, 211(2003). K.J.Eskola, V.J.Kolhinen and C.A.Salgado, Eur. Phys. J. [**C9**]{}, 61(1999). C.Shen and U.Heinz, Phys. Rev. [**C85**]{}, 054902(2012). Z.Tang, N.Xu, K.Zhou and P.Zhuang, J. Phys. [**G41**]{},124006(2014). X.Zhu, P.Zhuang and N.Xu, Phys. Lett. [**B607**]{}, 107(2005). H.Song, S.Bass, U.Heinz, T.Hirano, and C.Shen, Phys. Rev. Lett. [**106**]{}, 192301(2011) and [**109**]{}, 139904(2012). J.Sollfrank [*et al.*]{}, Phys. Rev. [**C55**]{}, 392(1997). K.Hagiwara [*et al.*]{}, Particle Data Group, Phys. Rev. [**D66**]{}, 010001(2002). G.Kestin and U.Heinz, Eur. Phys. J. [**C61**]{}, 545(2009). P.Kolb and R.Rapp, Phys. Rev. [**C 67**]{},044903(2003). K.Hikasa [*et al.*]{}, Phys. Rev. [**D45**]{}, S1(1992). A.Zoccoli [*et al.*]{}, \[HERA-B Collaboration\], Eur. Phys. J. [**C43**]{}, 179(2005). J.Hufner and P.Zhuang, Phys. Lett. [**B559**]{}, 193(2003). G.Bhanot and M.E.Peskin, Nucl. Phys. [**B156**]{}, 365(1979); [*ibid*]{}, 391(1979). F.Arleo [*et al.*]{}, Phys. Rev. [**D65**]{}, 014005(2002). Y.S.Oh, H.C.Kim and S.H.Lee, Phys. Rev. [**C65**]{}, 067901(2002). X.N.Wang, Phys. Lett. [**B540**]{}, 62(2002). H.Satz, J. Phys. [**G32**]{}, R25(2006). P.Petreczky, J. Phys. [**G37**]{}, 094009(2010). T.Matsui and H.Satz, Phys. Lett. [**B178**]{}, 417(1986). R.L.Thews and M.L.Mangano, Phys. Rev. [**C73**]{}, 014904(2006). L.Yan, P.Zhuang and N.Xu, Phys. Rev. Lett. [**97**]{}, 232301(2006). B.I.Abelev [*et al.*]{} \[STAR Collaboration\], Phys. Rev. Lett. [**98**]{}, 192301(2007). L.Adamczyk [*et al.*]{} \[STAR Collaboration\], Phys. Rev. Lett. [**113**]{}, 142301(2014). B.Abelev [*et al.*]{} \[ALICE Collaboration\], JHEP 09(2012)112. A.Adare [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. Lett. 
[**98**]{}, 172301(2007). B.Abelev [*et al.*]{} \[ALICE Collaboration\], Phys. Rev. Lett. [**111**]{}, 102301(2013). C.Ko [*et al.*]{}, Phys. Rev. Lett. [**86**]{}, 5438(2001). Y.Liu, C.Ko and T.Song, Phys. Lett. [**B728**]{}, 437(2014). A.Polleri [*et al.*]{}, Phys. Rev. [**C70**]{}, 044906(2004). T.Barnes [*et al.*]{}, Phys. Rev. [**C68**]{}, 014903(2003) and references therein. X.Du and R.Rapp, arXiv:1504.00670. Y.Liu, Z.Qu, N.Xu and P.Zhuang, J. Phys. [**G37**]{}, 075110(2010). A.Capella [*et al.*]{}, Phys. Rev. [**C76**]{}, 064906(2007). C.Lourenco, R.Vogt, H.K.Woehri, JHEP [**0902**]{}, 014(2009). S.Esumi, U.Heinz, and N.Xu, Phys. Lett. [**B403**]{}, 145(1997). X.Zhao and R.Rapp, Phys. Lett. [**B664**]{}, 253(2008). Y.Liu [*et al.*]{}, Phys. Lett. [**B697**]{}, 32(2011). X.N.Wang, Phys. Rev. Lett. [**81**]{}, 2655(1998). Y.Liu [*et al.*]{}, J. Phys. [**G36**]{}, 064057(2009). S.R.Klein and R.Vogt, Phys. Rev. Lett. [**91**]{}, 142301(2003). R.Vogt, Phys. Rev. [**C71**]{}, 054902(2005). J.Dias de Deus, Phys. Lett. [**B335**]{}, 188(1994). K.Eskola, H.Paukkunen, C.Salgado, JHEP, 0807:102(2008). K.Eskola, H.Paukkunen, C.Salgado, JHEP, 0904:065(2009). M.Cacciari, M.Greco, and P.Nason, JHEP, 9805:007(1998). A.Adare [*et al.*]{}, Phys. Rev. Lett. [**98**]{}, 232301(2007). T.Sjostrand [*et al.*]{}, Comput. Phys. Commun. [**135**]{}, 238(2001). A.Adare [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. [**C86**]{}, 064901(2012). K.Zhou [*et al.*]{}, Phys. Rev. [**C89**]{}, 054911(2014). M.C.Abreu [*et al.*]{} \[NA50 Collaboration\], Phys. Lett. [**B530**]{}, 43(2002). A.Frawley, T.Ullrich, and R.Vogt, Phys. Rep. [**462**]{}, 125(2008). E.Ferreiro, F.Fleuret, J.Lansberg, A.Rakotozafindrabe, Phys. Lett. [**B680**]{}, 50(2009).
---
abstract: 'We investigate an inertial viscosity-type Tseng’s extragradient algorithm with a new step size to solve pseudomonotone variational inequality problems in real Hilbert spaces. A strong convergence theorem of the algorithm is obtained without the prior information of the Lipschitz constant of the operator and also without any requirement of additional projections. Finally, several computational tests are carried out to demonstrate the reliability and benefits of the algorithm and compare it with the existing ones. Moreover, our algorithm is also applied to solve the variational inequality problem that appears in optimal control problems. The algorithm presented in this paper improves some known results in the literature.'
address:
- 'Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China'
- 'Department of Mathematics, Zhejiang Normal University, Zhejiang, China'
author:
- Bing Tan
- Xiaolong Qin
title: 'Strong convergence of an inertial Tseng’s extragradient algorithm for pseudomonotone variational inequalities with applications to optimal control problems'
---
Pseudomonotone variational inequality ,Tseng’s extragradient method ,inertial method ,viscosity method ,optimal control problem 47H05 ,47H09 ,65K15 ,47J20
Introduction
============
The goal of this study is to investigate a fast iterative method for discovering a solution to the variational inequality problem (in short, VIP). In this paper, one always assumes that $ H $ is a real Hilbert space with $\langle\cdot, \cdot\rangle$ and the induced norm $\|\cdot\|$, and $ C $ is a closed and convex nonempty subset in $ H $. Let us first elaborate on the issues involved in this research as follows: $$\label{VIP}
\text{find } y^{*} \in C \text{ such that }\langle \mathcal{A} y^{*}, z-y^{*}\rangle \geq 0,\quad \forall z \in C\,,\tag{VIP}$$ where $ \mathcal{A}: H \rightarrow H $ is a nonlinear mapping. We denote the solution set of this problem by $ \mathrm{VI}(C,\mathcal{A}) $.
Variational inequalities are powerful tools and models in applied mathematics and play an essential role in the social sciences, optimization, economics, transportation, mathematical programming, engineering mechanics, and other fields (see, for instance, [@QA; @SYVS; @AIY]). In recent decades, various effective solution methods have been investigated and developed to solve problems of this type; see, e.g., [@Cho2; @SIna; @tanjnca] and the references therein. It should be pointed out that these approaches usually require that the mapping $ \mathcal{A} $ has certain monotonicity. In this paper, we assume that the mapping $ \mathcal{A} $ associated with the problem is pseudomonotone (see the definition below), a broader class than that of monotone mappings.
Let us review some nonlinear mappings in nonlinear analysis for further use. For any elements $ p, q \in H $, one recalls that a mapping $\mathcal{A}: {H} \rightarrow {H}$ is said to be:
1. *$\eta$-strongly monotone* if there is a positive number $ \eta $ such that $$\langle \mathcal{A} p-\mathcal{A} q, p-q\rangle \geq \eta\|p-q\|^{2},$$
2. *$\eta$-inverse strongly monotone* if there is a positive number $ \eta $ such that $$\langle \mathcal{A} p-\mathcal{A} q, p-q\rangle \geq \eta\|\mathcal{A} p-\mathcal{A} q\|^{2},$$
3. *monotone* if $$\langle \mathcal{A} p-\mathcal{A} q, p-q\rangle \geq 0,$$
4. *$\eta$-strongly pseudomonotone* if there is a positive number $ \eta $ such that $$\langle \mathcal{A} p, q-p\rangle \geq 0 \Longrightarrow\langle \mathcal{A} q, q-p\rangle \geq \eta\|p-q\|^{2},$$
5. *pseudomonotone* if $$\langle \mathcal{A} p, q-p\rangle \geq 0 \Longrightarrow\langle \mathcal{A} q, q-p\rangle \geq 0,$$
6. *$ L $-Lipschitz continuous* if there is $ L >0$ such that $$\|\mathcal{A} p-\mathcal{A} q\| \leq L\|p-q\|,$$
7. *sequentially weakly continuous* if for any sequence $ \{p_{n}\} $ that converges weakly to a point $ p \in {H} $, the sequence $ \left\{\mathcal{A}p_{n}\right\} $ converges weakly to $ \mathcal{A}p $.
It can be easily checked that the following relations hold: $(1)\Longrightarrow(3) \Longrightarrow(5)$ and $(1)\Longrightarrow(4) \Longrightarrow(5)$. Note that the converse implications are generally false. Recall that the metric projection ${P}_{{C}}: {H} \rightarrow {C}$ from ${H}$ onto $C$ assigns to each $x \in H$ its unique nearest point in $ C $, denoted by $P_{C}x$, that is, $P_{C}x:= \operatorname{argmin}\{\|x-y\| : y \in C\}$.
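As a concrete illustration of the metric projection (our own toy sets, not from the cited literature), the closed-form projections onto a ball and a box, together with the nonexpansiveness of $P_C$:

```python
import numpy as np

# Minimal sketches of metric projections onto two common closed convex sets,
# illustrating P_C(x) = argmin{||x - y|| : y in C}.
def proj_ball(x, r=1.0):
    """Projection onto the closed Euclidean ball of radius r at the origin."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def proj_box(x, lo, hi):
    """Projection onto the box [lo, hi]^n: componentwise clipping."""
    return np.clip(x, lo, hi)

x, y = np.array([3.0, 4.0]), np.array([0.2, -0.5])
px, py = proj_ball(x), proj_ball(y)
print(px)                                                  # [0.6, 0.8]

# P_C is nonexpansive: ||P_C x - P_C y|| <= ||x - y||
print(np.linalg.norm(px - py) <= np.linalg.norm(x - y))    # True
```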
The oldest and simplest projection approach to solve variational inequality problems is the projected-gradient method, which reads as follows: $$\label{PGM}
x_{n+1}=P_{C}\left(x_{n}-\gamma \mathcal{A} x_{n}\right), \quad \forall n \geq 1\,,
\tag{\text{PGM}}$$ where $ P_{C} $ represents the metric projection onto $ C $, the mapping $ \mathcal{A}$ is $ L $-Lipschitz continuous and $ \eta $-strongly monotone, and the step size $ \gamma \in(0, {2 \eta}/{L^{2}}) $. Then the iterative sequence $ \{x_{n}\} $ generated by this scheme converges to the solution of the problem provided that $ \mathrm{VI}(C,\mathcal{A}) $ is nonempty. It should be noted that the sequence $ \{x_{n}\} $ does not necessarily converge when the mapping $ \mathcal{A} $ is “only" monotone. Recently, Malitsky [@PRGM] introduced a projected reflected gradient method, which can be viewed as an improvement of the projected-gradient method. Indeed, the sequence generated by this method is as follows: $$\label{PRGM}
x_{n+1}=P_{C}\left(x_{n}-\gamma \mathcal{A}\left(2 x_{n}-x_{n-1}\right)\right), \quad \forall n \geq 1\,.
\tag{\text{PRGM}}$$ He proved that the sequence $ \{x_{n}\} $ created by this iterative scheme converges to some $ u \in \mathrm{VI}(C,\mathcal{A}) $ when the mapping $ \mathcal{A} $ is monotone. Further extensions of this method can be found in [@PRGM1; @PRGM2].
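A minimal sketch of the projected-gradient iteration on a toy strongly monotone problem (our own example: $\mathcal{A}(x)=x-b$ is 1-strongly monotone and 1-Lipschitz, $C=\mathbb{R}^n_+$, so $P_C$ is componentwise clipping and the solution is $\max(b,0)$):

```python
import numpy as np

def proj_nonneg(x):
    """Metric projection onto C = R^n_+."""
    return np.maximum(x, 0.0)

# A(x) = x - b: eta = L = 1, so any gamma in (0, 2) works.
b = np.array([1.5, -2.0, 0.7])
A = lambda x: x - b

x = np.zeros(3)
gamma = 0.7
for _ in range(100):
    x = proj_nonneg(x - gamma * A(x))   # PGM step

print(x)   # -> [1.5, 0.0, 0.7], i.e. max(b, 0)
```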
In many kinds of research on solving variational inequalities controlled by pseudomonotone and Lipschitz continuous operators, the most commonly used algorithm is the extragradient method (see [@EGM]) and its variants. Indeed, Korpelevich proposed the extragradient method (EGM) in [@EGM] to find the solution of the saddle point problem in finite-dimensional spaces. The details of EGM are described as follows: $$\label{EGM}
\left\{\begin{aligned}
&y_{n}=P_{C}\left(x_{n}-\gamma \mathcal{A} x_{n}\right), \\
&x_{n+1}=P_{C}\left(x_{n}-\gamma \mathcal{A} y_{n}\right), \quad \forall n \geq 1\,,
\end{aligned}\right.
\tag{\text{EGM}}$$ where the mapping $\mathcal{A}$ is $ L $-Lipschitz continuous and monotone and the fixed step size $\gamma \in(0, 1/L)$. Provided that $ \mathrm{VI}(C, \mathcal{A}) \ne \emptyset$, the iterative sequence $ \{x_{n}\} $ generated by this scheme converges to an element of $ \mathrm{VI}(C, \mathcal{A}) $. In the past few decades, EGM has been considered and extended by many authors for solving variational inequalities in infinite-dimensional spaces, see, e.g., [@SILD; @SLD; @tanarxiv] and the references therein. Recently, Vuong [@EGMpVIP] extended EGM to solve pseudomonotone variational inequalities in Hilbert spaces and proved that the iterative sequence constructed by the algorithm converges weakly to a solution. On the other hand, it is not easy to compute the projection onto a general closed convex set $ C $, especially when $ C $ has a complex structure. Note that in the extragradient method two projections onto the closed convex set $ C $ must be computed in each iteration, which may severely affect the computational performance of the algorithm.
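The need for the extrapolation step can be seen on a standard toy example: a rotation operator, which is monotone but not strongly monotone (here $C=\mathbb{R}^2$, so the projection is trivial, $L=1$, and $\gamma=0.5<1/L$). The projected-gradient iterates spiral outward, while the extragradient iterates contract to the unique solution $x^*=0$:

```python
import numpy as np

# A 90-degree rotation: monotone, 1-Lipschitz, not strongly monotone.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda x: J @ x
gamma = 0.5

x_pgm = x_egm = np.array([1.0, 0.0])
for _ in range(50):
    x_pgm = x_pgm - gamma * A(x_pgm)        # gradient step (C = R^2, P_C = id)
    y = x_egm - gamma * A(x_egm)            # extrapolation step of EGM
    x_egm = x_egm - gamma * A(y)            # correction step of EGM

print(np.linalg.norm(x_pgm), np.linalg.norm(x_egm))   # diverges vs. -> 0
```

Per iteration the gradient step multiplies the norm by $\sqrt{1+\gamma^2}>1$, while the extragradient step multiplies it by $\sqrt{(1-\gamma^2)^2+\gamma^2}<1$ for $\gamma\in(0,1)$.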
Next, we introduce two types of methods to enhance the numerical efficiency of EGM. The first approach is the Tseng’s extragradient method (referred to as TEGM, also known as the forward-backward-forward method) proposed by Tseng [@tseng]. The advantage of this method is that the projection on the feasible set only needs to be calculated once in each iteration. More precisely, TEGM is expressed as follows: $$\label{TEGM}
\left\{\begin{aligned}
&y_{n}=P_{C}\left(x_{n}-\gamma \mathcal{A} x_{n}\right)\,, \\
&x_{n+1}=y_{n}-\gamma\left(\mathcal{A} y_{n}-\mathcal{A} x_{n}\right), \quad \forall n \geq 1\,,
\end{aligned}\right.
\tag{\text{TEGM}}$$ where the mapping $\mathcal{A}$ is $ L $-Lipschitz continuous and monotone and the fixed step size $\gamma \in(0, 1/L)$. Then the iterative sequence $ \{x_{n}\} $ generated by this scheme converges to a solution provided that $ \mathrm{VI}(C, \mathcal{A}) $ is nonempty. Very recently, Bot, Csetnek and Vuong [@BotpVIP] proposed a Tseng’s forward-backward-forward algorithm for solving pseudomonotone variational inequalities in Hilbert spaces and performed an asymptotic analysis of the generated trajectories. The second method is the subgradient extragradient method (SEGM) proposed by Censor, Gibali and Reich [@SEGM], which can be regarded as a modification of EGM. Indeed, they replaced the second projection onto $ C $ in EGM by a projection onto a half-space. SEGM reads as follows: $$\label{SEGM}
\left\{\begin{aligned}
&y_{n}=P_{C}\left(x_{n}-\gamma \mathcal{A} x_{n}\right)\,, \\
&T_{n}=\left\{x \in H \mid \langle x_{n}-\gamma \mathcal{A} x_{n}-y_{n}, x-y_{n}\rangle \leq 0\right\}\,, \\
&x_{n+1}=P_{T_{n}}\left(x_{n}-\gamma \mathcal{A} y_{n}\right), \quad \forall n \geq 1\,,
\end{aligned}\right.
\tag{\text{SEGM}}$$ where the mapping $\mathcal{A}$ is $ L $-Lipschitz continuous and monotone and the fixed step size $\gamma \in(0, 1/L)$. SEGM converges not only for monotone variational inequalities (see [@CGR1]) but also for pseudomonotone ones (see [@CGR2; @TSI]).
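A sketch of SEGM with the half-space projection written out explicitly, run on our own toy strongly monotone example ($\mathcal{A}(x)=x-b$ on $C=\mathbb{R}^2_+$, whose solution is $\max(b,0)$); the projection onto a half-space has the simple closed form used below:

```python
import numpy as np

def proj_nonneg(x):
    return np.maximum(x, 0.0)

def proj_halfspace(x, a, c):
    """Projection onto the half-space {z : <a, z> <= c}."""
    viol = a @ x - c
    return x if viol <= 0.0 else x - (viol / (a @ a)) * a

b = np.array([0.8, -1.2])
A = lambda x: x - b                           # 1-Lipschitz, strongly monotone
gamma = 0.5                                   # < 1/L with L = 1

x = np.array([5.0, 5.0])
for _ in range(200):
    y = proj_nonneg(x - gamma * A(x))
    a = x - gamma * A(x) - y                  # normal vector defining T_n
    if a @ a < 1e-16:                         # T_n is the whole space
        x = x - gamma * A(y)
    else:                                     # T_n = {z : <a, z> <= <a, y>}
        x = proj_halfspace(x - gamma * A(y), a, a @ y)

print(x)   # -> approximately [0.8, 0.0] = max(b, 0)
```

Only one projection onto $C$ is needed per iteration; the second projection is onto the half-space $T_n$, which is available in closed form.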
It is worth mentioning that EGM, TEGM and SEGM are only weakly convergent in infinite-dimensional Hilbert spaces. Some practical problems arising in image processing, quantum mechanics, medical imaging and machine learning need to be modeled and analyzed in infinite-dimensional spaces. Therefore, strong convergence results are preferable to weak convergence results in this setting. Recently, Thong and Vuong [@MaTEGM] introduced a modified Mann-type Tseng’s extragradient method to solve variational inequalities involving a pseudomonotone mapping in Hilbert spaces. Their method uses an Armijo-like line search to remove the reliance on the Lipschitz constant of the mapping $ \mathcal{A} $. The proposed algorithm is stated as follows: $$\label{MaTEGM}
\left\{\begin{aligned}
&y_{n}=P_{C}\left(x_{n}-\gamma_{n} \mathcal{A} x_{n}\right)\,,\\
&z_{n}=y_{n}-\gamma_{n}\left(\mathcal{A} y_{n}-\mathcal{A} x_{n}\right)\,,\\
&x_{n+1}=\left(1-\varphi_{n}-\tau_{n}\right) x_{n}+\tau_{n} z_{n}, \quad \forall n \geq 1\,,
\end{aligned}\right.
\tag{\text{MaTEGM}}$$ where the mapping $ \mathcal{A} $ is pseudomonotone, sequentially weakly continuous on $ C $ and uniformly continuous on bounded subsets of $ H $, and $\left\{\varphi_{n}\right\}$, $\left\{\tau_{n}\right\}$ are two real positive sequences in $ (0,1) $ such that $\left\{\tau_{n}\right\} \subset \left(a, 1-\varphi_{n}\right)$ for some $a>0$ and $\lim _{n \rightarrow \infty} \varphi_{n}=0$, $ \sum_{n=1}^{\infty} \varphi_{n}=\infty$. Here $\gamma_{n}:=\alpha \ell^{q_{n}}$, where $q_{n}$ is the smallest non-negative integer $ q $ satisfying $\alpha \ell^{q}\left\|\mathcal{A} x_{n}-\mathcal{A} y_{n}\right\| \leq \phi\left\|x_{n}-y_{n}\right\|$ ($ \alpha>0$, $\ell \in(0,1)$, $\phi \in(0,1) $). They showed that this iteration scheme converges strongly to the element $ u =\arg \min \{\|z\|: z \in \mathrm{VI}(C,\mathcal{A})\} $ provided that $ \mathrm{VI}(C,\mathcal{A}) \ne \emptyset $.
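The Armijo-like step size search can be sketched as follows; note that the trial point $y$ must be recomputed for each trial step, since $y_{n}=P_{C}(x_{n}-\gamma_{n} \mathcal{A} x_{n})$ itself depends on $\gamma_{n}$. The mapping and the feasible set below are illustrative assumptions of ours.

```python
import numpy as np

def armijo_step(A, proj_C, x, alpha=1.0, ell=0.5, phi=0.9, max_q=60):
    # Find the smallest nonnegative integer q such that
    #   alpha * ell**q * ||A x - A y|| <= phi * ||x - y||,
    # where y = P_C(x - gamma * A x) depends on the trial gamma.
    Ax = A(x)
    for q in range(max_q):
        gamma = alpha * ell ** q
        y = proj_C(x - gamma * Ax)
        if gamma * np.linalg.norm(Ax - A(y)) <= phi * np.linalg.norm(x - y):
            return gamma, y
    return gamma, y                     # fallback: smallest step size tried

A = lambda x: 2.0 * x                   # monotone, Lipschitz with L = 2
proj_C = lambda x: np.clip(x, -1.0, 1.0)
gamma, y = armijo_step(A, proj_C, np.array([0.5, -0.8]))
# here the condition reduces to 2*gamma <= phi, so gamma = 0.25 is accepted
```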
To accelerate the convergence of such algorithms, Polyak [@inertial] considered in 1964 the second-order dynamical system $\ddot{x}(t)+\gamma \dot{x}(t)+\nabla f(x(t))=0$, where $\gamma>0$, $\nabla f$ represents the gradient of $ f $, and $ \dot{x}(t) $ and $ \ddot{x}(t) $ denote the first and second derivatives of $ x $ at $ t $, respectively. This dynamical system is called the Heavy Ball with Friction (HBF) system.
Next, we consider the discretization of this dynamic system (HBF), that is, $$\frac{x_{n+1}-2 x_{n}+x_{n-1}}{h^{2}}+\gamma \frac{x_{n}-x_{n-1}}{h}+\nabla f\left(x_{n}\right)=0, \quad \forall n \geq 0\,.$$ Through a direct calculation, we can get the following form: $$x_{n+1}=x_{n}+\tau\left(x_{n}-x_{n-1}\right)-\varphi \nabla f\left(x_{n}\right), \quad \forall n \geq 0\,,$$ where $\tau=1-\gamma h$ and $\varphi=h^{2}$. This can be considered as the following two-step iteration scheme: $$\left\{\begin{aligned}
&y_{n}=x_{n}+\tau\left(x_{n}-x_{n-1}\right)\,, \\
&x_{n+1}=y_{n}-\varphi \nabla f\left(x_{n}\right), \quad \forall n \geq 0\,.
\end{aligned}\right.$$ This iteration is now called the inertial extrapolation algorithm, and the term $\tau \left(x_{n}-x_{n-1}\right)$ is referred to as the inertial (extrapolation) term. In recent years, inertial techniques have attracted extensive attention in the optimization community as an acceleration device, and many fast numerical algorithms have been built upon them. These algorithms have shown advantages both in theory and in computational experiments and have been successfully applied to many problems; see, for instance, [@FISTA; @GHjfpta; @zhoucoam] and the references therein.
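As a quick illustration, the two-step scheme above can be run on a simple quadratic $f(x)=\frac{1}{2} x^{\top} Q x-b^{\top} x$; the matrix, the vector and the parameter values below are illustrative choices of ours.

```python
import numpy as np

# Inertial extrapolation (heavy-ball discretization) for the quadratic
# f(x) = 0.5 x^T Q x - b^T x, whose gradient is grad f(x) = Q x - b.
Q = np.diag([1.0, 10.0])
b = np.array([1.0, 5.0])
grad = lambda x: Q @ x - b

def inertial_gd(tau=0.5, varphi=0.09, iters=300):
    x_prev = x = np.zeros(2)
    for _ in range(iters):
        y = x + tau * (x - x_prev)            # extrapolation point y_n
        x_prev, x = x, y - varphi * grad(x)   # gradient evaluated at x_n
    return x

x_star = np.linalg.solve(Q, b)                # exact minimizer [1.0, 0.5]
x_hb = inertial_gd()
```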
Very recently, inspired by the inertial method, SEGM and the viscosity method, Thong, Hieu and Rassias [@ViSEGM] presented a viscosity-type inertial subgradient extragradient algorithm to solve pseudomonotone variational inequalities in Hilbert spaces. The algorithm is of the form: $$\label{ViSEGM}
\left\{\begin{aligned}
&s_{n}=x_{n}+\delta_{n}\left(x_{n}-x_{n-1}\right)\,, \\
&y_{n}=P_{C}\left(s_{n}-\gamma_{n} \mathcal{A} s_{n}\right) \,,\\
&T_{n}=\left\{x \in H \mid \langle s_{n}-\gamma_{n} \mathcal{A} s_{n}-y_{n}, x-y_{n}\rangle \leq 0\right\}\,, \\
&z_{n}=P_{T_{n}}\left(s_{n}-\gamma_{n} \mathcal{A} y_{n}\right)\,,\\
&x_{n+1}=\varphi_{n} f\left(z_{n}\right)+\left(1-\varphi_{n}\right) z_{n}, \quad \forall n\geq 1\,.\\
&\gamma_{n+1}=\left\{\begin{array}{ll}
\min \left\{\frac{\phi\left\|s_{n}-y_{n}\right\|}{\left\|\mathcal{A} s_{n}-\mathcal{A} y_{n}\right\|}, \gamma_{n}\right\}, & \text { if } \mathcal{A} s_{n}-\mathcal{A} y_{n} \neq 0; \\
\gamma_{n}, & \text { otherwise},
\end{array}\right.
\end{aligned}\right.
\tag{\text{ViSEGM}}$$ where the mapping $ \mathcal{A} $ is pseudomonotone, $ L $-Lipschitz continuous and sequentially weakly continuous on $ C $, and the inertial parameters $ \delta_{n} $ are updated in the following way: $$\delta_{n}=\left\{\begin{array}{ll}
\min \bigg\{\dfrac{\epsilon_{n}}{\left\|x_{n}-x_{n-1}\right\|}, \delta\bigg\}, & \text { if } x_{n} \neq x_{n-1}; \\
\delta, & \text { otherwise}.
\end{array}\right.$$ Note that this algorithm uses a simple step size rule, computed from previously known information in each iteration. Therefore, it works well without prior knowledge of the Lipschitz constant of the mapping $ \mathcal{A} $. They proved the strong convergence of the scheme under mild assumptions on the cost mapping and the parameters.
Motivated by the above works, we introduce a new inertial Tseng’s extragradient algorithm with a new step size for solving pseudomonotone variational inequalities in Hilbert spaces. The advantages of our algorithm are: (1) only one projection onto the feasible set needs to be calculated in each iteration; (2) no prior knowledge of the Lipschitz constant of the cost mapping is required; (3) the inertial term yields a faster convergence speed. Under mild assumptions, we establish a strong convergence theorem for the suggested algorithm. Finally, some computational tests in finite- and infinite-dimensional spaces are presented to verify our theoretical results, and our algorithm is also applied to solving optimal control problems. Our algorithm improves some existing results [@MaTEGM; @ViSEGM; @THna; @YLNA; @FQOPT2020].
The rest of this paper is organized as follows. Some essential definitions and technical lemmas are collected in the next section. In Section \[sec3\], we propose our algorithm and analyze its convergence. Computational tests and applications that verify our theoretical results are presented in Section \[sec4\]. Finally, the paper ends with a brief summary.
Preliminaries {#sec2}
=============
Let $ C $ be a closed and convex nonempty subset of a real Hilbert space $ H $. The weak convergence and strong convergence of $\left\{x_{n}\right\}_{n=1}^{\infty}$ to $x$ are represented by $x_{n} \rightharpoonup x$ and $x_{n} \rightarrow x$, respectively. For each $x, y \in H$ and $\delta \in \mathbb{R}$, we have the following facts:
1. $\|x+y\|^{2} \leq\|x\|^{2}+2\langle y, x+y\rangle$;
2. $\|\delta x+(1-\delta) y\|^{2}=\delta\|x\|^{2}+(1-\delta)\|y\|^{2}-\delta(1-\delta)\|x-y\|^{2}$.
It is known that $P_{C} x$ has the following basic properties:
- $ \langle x-P_{C} x, y-P_{C} x\rangle \leq 0, \, \forall y \in C $;
- $\left\|P_{C} x-P_{C} y\right\|^{2} \leq\langle P_{C} x-P_{C} y, x-y\rangle, \,\forall y \in H$;
- $\left\|x-P_{C}(x)\right\|^{2} \leq\|x-y\|^{2}-\left\|y-P_{C}(x)\right\|^{2}, \, \forall y \in C$.
We give some explicit formulas to calculate projections on special feasible sets.
1. The projection of $ x $ onto a half-space $H_{u, v}=\{x:\langle u, x\rangle \leq v\}$ is given by $$P_{H_{u, v}}(x)=x-\max\{{[\langle u, x\rangle-v]}/{\|u\|^{2}},0\} u\,.$$
2. The projection of $ x $ onto a box $\operatorname{Box}[a, b]=\{x: a \leq x \leq b\}$ is given by $$P_{\mathrm{Box}[a, b]}(x)_{i}=\min \left\{ b_{i},\max \left\{x_{i}, a_{i}\right\}\right\}\,.$$
3. The projection of $ x $ onto a ball $B[p, q]=\{x:\|x-p\| \leq q\}$ is given by $$P_{B[p, q]}(x)=p+\frac{q}{\max \{\|x-p\|, q\}}(x-p)\,.$$
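These closed forms translate directly into code; the following sketch evaluates the three projections at sample points (the test vectors are ours).

```python
import numpy as np

def proj_halfspace(x, u, v):
    # Projection onto H_{u,v} = {z : <u, z> <= v}.
    return x - max((u @ x - v) / (u @ u), 0.0) * u

def proj_box(x, a, b):
    # Projection onto Box[a, b], computed componentwise.
    return np.minimum(b, np.maximum(x, a))

def proj_ball(x, p, q):
    # Projection onto B[p, q] = {z : ||z - p|| <= q}.
    return p + q / max(np.linalg.norm(x - p), q) * (x - p)

x = np.array([3.0, 4.0])
hx = proj_halfspace(x, np.array([1.0, 0.0]), 1.0)              # [1.0, 4.0]
bx = proj_box(x, np.array([0.0, 0.0]), np.array([2.0, 2.0]))   # [2.0, 2.0]
sx = proj_ball(x, np.zeros(2), 1.0)                            # [0.6, 0.8]
```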
The following lemmas play an important role in our proof.
\[lem21\] Assume that $ C $ is a closed and convex subset of a real Hilbert space $H$. Let the operator ${\mathcal{A}}: {C} \rightarrow {H}$ be continuous and pseudomonotone. Then, $x^{*}$ is a solution of the variational inequality if and only if $ \langle \mathcal{A} x, x-x^{*}\rangle \geq 0,\, \forall x \in C $.
\[lem22\] Let $\left\{p_{n}\right\}$ be a positive sequence, $\left\{q_{n}\right\}$ be a sequence of real numbers, and $\left\{\sigma_{n}\right\}$ be a sequence in $ (0,1) $ such that $\sum_{n=1}^{\infty} \sigma_{n}=\infty$. Assume that $$p_{n+1} \leq\left(1-\sigma_{n}\right) p_{n}+\sigma_{n} q_{n}, \quad \forall n \geq 1\,.$$ If $\limsup _{k \rightarrow \infty} q_{n_{k}} \leq 0$ for every subsequence $\left\{p_{n_{k}}\right\}$ of $\left\{p_{n}\right\}$ satisfying $\liminf _{k \rightarrow \infty}\left(p_{n_{k}+1}-p_{n_{k}}\right) \geq 0$, then $\lim _{n \rightarrow \infty} p_{n}=0$.
Main results {#sec3}
============
In this section, we present a self-adaptive inertial viscosity-type Tseng’s extragradient algorithm, which is based on the inertial method, the viscosity method and Tseng’s extragradient method. The major benefit of this algorithm is that the step size is automatically updated at each iteration without any line search procedure. Moreover, our iterative scheme only needs to calculate the projection once in each iteration. Before stating our main result, we impose the following five assumptions.
1. The feasible set $ C $ is closed, convex and nonempty. \[con1\]
2. The solution set of the variational inequality is nonempty, that is, $\mathrm{VI}(C,\mathcal{A}) \neq \emptyset$.\[con2\]
3. The mapping $\mathcal{A}: H \rightarrow H$ is pseudomonotone and $ L $-Lipschitz continuous on $H$, and sequentially weakly continuous on $C$. \[con3\]
4. The mapping $f: H \rightarrow H$ is $\rho$-contractive with $\rho \in[0,1)$.\[con4\]
5. The positive sequence $ \{\epsilon_{n}\} $ satisfies $\lim_{n \rightarrow \infty} \frac{\epsilon_{n}}{\varphi_{n}}=0$, where $ \{\varphi_{n}\}\subset (0,1) $ such that $\lim _{n \rightarrow \infty} \varphi_{n}=0$ and $\sum_{n=1}^{\infty} \varphi_{n}=\infty$. \[con5\]
Now, we can state the details of the iterative method. Our algorithm is described as follows.
**Iterative Steps**: Calculate the next iteration point $ x_{n+1} $ as follows: $$\left\{\begin{aligned}
&s_{n}=x_{n}+\delta_{n}\left(x_{n}-x_{n-1}\right) \,,\\
&y_{n}=P_{C}\left(s_{n}-\gamma_{n} \mathcal{A} s_{n}\right) \,,\\
&z_{n}=y_{n}-\gamma_{n}\left(\mathcal{A} y_{n}-\mathcal{A} s_{n}\right)\,,\\
&x_{n+1}=\varphi_{n} f\left(z_{n}\right) + \left(1-\varphi_{n}\right) z_{n}\,.
\end{aligned}\right.$$ where $ \{\delta_{n}\} $ and $ \{\gamma_{n}\} $ are updated by the following two rules, respectively. $$\label{alpha}
\delta_{n}=\left\{\begin{array}{ll}
\min \bigg\{\dfrac{\epsilon_{n}}{\|x_{n}-x_{n-1}\|}, \delta\bigg\}, & \text { if } x_{n} \neq x_{n-1}; \\
\delta, & \text { otherwise}.
\end{array}\right.$$ $$\label{lambda}
\gamma_{n+1}=\left\{\begin{array}{ll}
\min \left\{\dfrac{\phi\|s_{n}-y_{n}\|}{\|\mathcal{A} s_{n}-\mathcal{A} y_{n}\|}, \gamma_{n}\right\}, & \text { if } \mathcal{A} s_{n}-\mathcal{A} y_{n} \neq 0; \\
\gamma_{n}, & \text { otherwise}.
\end{array}\right.$$
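The iterative steps above can be sketched in code as follows; the concrete mapping, contraction and feasible set are illustrative choices of ours (a monotone, hence pseudomonotone, example), while $\epsilon_{n}$ and $\varphi_{n}$ follow the parameter choices used in Section \[sec4\].

```python
import numpy as np

def inertial_viscosity_tseng(A, proj_C, f, x0, gamma=1.0, phi=0.8,
                             delta=0.3, iters=1000):
    # Sketch of the self-adaptive inertial viscosity-type Tseng's
    # extragradient iteration, with eps_n = 1/(n+1)^2, varphi_n = 1/(n+1).
    x_prev = x = x0
    for n in range(1, iters + 1):
        eps_n, varphi_n = 1.0 / (n + 1) ** 2, 1.0 / (n + 1)
        d = np.linalg.norm(x - x_prev)
        delta_n = min(eps_n / d, delta) if d > 0 else delta
        s = x + delta_n * (x - x_prev)            # inertial step
        y = proj_C(s - gamma * A(s))              # single projection onto C
        z = y - gamma * (A(y) - A(s))             # Tseng correction
        x_prev, x = x, varphi_n * f(z) + (1 - varphi_n) * z
        num = np.linalg.norm(A(s) - A(y))
        if num > 0:                               # adaptive step-size rule
            gamma = min(phi * np.linalg.norm(s - y) / num, gamma)
    return x

b = np.array([3.0, -1.0])
A = lambda x: x - b                               # monotone example mapping
f = lambda x: 0.9 * x                             # contraction with rho = 0.9
proj_C = lambda x: np.clip(x, 0.0, 2.0)
sol = inertial_viscosity_tseng(A, proj_C, f, np.zeros(2))
```

For this mapping the solution set is the singleton $\{P_{C}(b)\}$, so the iterates should approach $(2,0)$ regardless of the contraction $f$.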
\[rem31\] It follows from the definition of $ \delta_{n} $ and Assumption \[con5\] that $$\lim _{n \rightarrow \infty} \frac{\delta_{n}}{\varphi_{n}}\left\|x_{n}-x_{n-1}\right\|=0\,.$$ Indeed, we obtain $\delta_{n}\left\|x_{n}-x_{n-1}\right\| \leq \epsilon_{n}, \forall n$, which together with $\lim _{n \rightarrow \infty} \frac{\epsilon_{n}}{\varphi_{n}}=0$ yields $$\lim _{n \rightarrow \infty} \frac{\delta_{n}}{\varphi_{n}}\left\|x_{n}-x_{n-1}\right\| \leq \lim _{n \rightarrow \infty} \frac{\epsilon_{n}}{\varphi_{n}}=0\,.$$
\[lem31\] The sequence $\left\{\gamma_{n}\right\}$ generated by the step size rule is nonincreasing and $$\lim _{n \rightarrow \infty} \gamma_{n}=\gamma \geq \min \Big\{\gamma_{1}, \frac{\phi}{L}\Big\}\,.$$
By the definition of the step size rule, we have $\gamma_{n+1} \leq \gamma_{n}, \forall n \in \mathbb{N} $. Hence, $\left\{\gamma_{n}\right\}$ is nonincreasing. Moreover, we get that $\left\|\mathcal{A} s_{n}-\mathcal{A} y_{n}\right\| \leq L\left\|s_{n}-y_{n}\right\|$ since $\mathcal{A}$ is $L$-Lipschitz continuous. Thus, $$\phi \frac{\left\|s_{n}-y_{n}\right\|}{\left\|\mathcal{A} s_{n}-\mathcal{A} y_{n}\right\|} \geq \frac{\phi}{L}, \,\,\text { if }\,\, \mathcal{A} s_{n} \neq \mathcal{A} y_{n}\,,$$ which implies that $ \gamma_{n} \geq \min \{\gamma_{1}, \frac{\phi}{L}\} $. Therefore, $\lim _{n \rightarrow \infty} \gamma_{n}=\gamma \geq \min \big\{\gamma_{1}, \frac{\phi}{L}\big\}$, since the sequence $ \{\gamma_{n}\} $ is bounded below and nonincreasing.
The following lemmas play a significant role in the convergence proof of our algorithm.
\[lem32\] Suppose that Assumptions \[con1\]–\[con3\] hold. Let $\{s_{n}\}$ and $ \{y_{n} \}$ be two sequences formulated by Algorithm \[alg1\]. If there exists a subsequence $\{s_{n_{k}}\}$ convergent weakly to $z \in H$ and $\lim _{k \rightarrow \infty}\|s_{n_{k}}-y_{n_{k}}\|=0$, then $z \in \mathrm{VI}(C, \mathcal{A})$.
From the property of projection and $ y_{n}=P_{C}\left(s_{n}-\gamma_{n} \mathcal{A} s_{n}\right) $, we have $$\langle s_{n_{k}}-\gamma_{n_{k}} \mathcal{A} s_{n_{k}}-y_{n_{k}}, x-y_{n_{k}}\rangle \leq 0, \quad \forall x \in C\,,$$ which can be written as follows $$\frac{1}{\gamma_{n_{k}}}\langle s_{n_{k}}-y_{n_{k}}, x-y_{n_{k}}\rangle \leq\langle \mathcal{A} s_{n_{k}}, x-y_{n_{k}}\rangle, \quad \forall x \in C\,.$$ Through a direct calculation, we get $$\label{aw}
\frac{1}{\gamma_{n_{k}}}\langle s_{n_{k}}-y_{n_{k}}, x-y_{n_{k}}\rangle+\langle \mathcal{A} s_{n_{k}}, y_{n_{k}}-s_{n_{k}}\rangle \leq\langle \mathcal{A} s_{n_{k}}, x-s_{n_{k}}\rangle,\quad \forall x \in C\,.$$ We have that $\{s_{n_{k}}\}$ is bounded since $\{s_{n_{k}}\}$ converges weakly to $z \in H$. Then, from the Lipschitz continuity of $\mathcal{A}$ and $\|s_{n_{k}}-y_{n_{k}}\| \rightarrow 0$, we obtain that $\{\mathcal{A} s_{n_{k}}\}$ and $\{y_{n_{k}}\}$ are also bounded. Since $\gamma_{n_{k}} \geq \min \{\gamma_{1}, \frac{\phi}{L}\}$, one concludes from the inequality above that $$\label{po}
\liminf _{k \rightarrow \infty}\langle \mathcal{A} s_{n_{k}}, x-s_{n_{k}}\rangle \geq 0, \quad \forall x \in C\,.$$ Moreover, one has $$\label{pi}
\begin{aligned}
\langle \mathcal{A} y_{n_{k}}, x-y_{n_{k}}\rangle=&\langle \mathcal{A} y_{n_{k}}-\mathcal{A} s_{n_{k}}, x-s_{n_{k}}\rangle +\langle \mathcal{A} s_{n_{k}}, x-s_{n_{k}}\rangle+\langle \mathcal{A} y_{n_{k}}, s_{n_{k}}-y_{n_{k}}\rangle\,.
\end{aligned}$$ Since $\lim _{k \rightarrow \infty}\|s_{n_{k}}-y_{n_{k}}\|=0$ and $\mathcal{A}$ is Lipschitz continuous, we get $ \lim _{k \rightarrow \infty}\|\mathcal{A} s_{n_{k}}-\mathcal{A} y_{n_{k}}\|=0 $. This together with the two relations above yields that $ \liminf _{k \rightarrow \infty}\langle \mathcal{A} y_{n_{k}}, x-y_{n_{k}}\rangle \geq 0 $.
Next, we select a decreasing sequence of positive numbers $\{\zeta_{k}\}$ such that $ \zeta_{k}\to 0 $ as $ k \to \infty $. For each $k$, we denote by $N_{k}$ the smallest positive integer such that $$\label{pp}
\langle \mathcal{A} y_{n_{j}}, x-y_{n_{j}}\rangle+\zeta_{k} \geq 0,\quad \forall j \geq N_{k}\,.$$ It can be easily seen that the sequence $\{N_{k}\}$ is increasing because $\{\zeta_{k}\}$ is decreasing. Moreover, for any $k$, from $\{y_{N_{k}}\} \subset C$, we may assume $\mathcal{A} y_{N_{k}} \neq 0$ (otherwise, $y_{N_{k}}$ is a solution) and set $ u_{N_{k}}={\mathcal{A} y_{N_{k}}}/{\|\mathcal{A} y_{N_{k}}\|^{2}} $. Then, we get $\langle \mathcal{A} y_{N_{k}}, u_{N_{k}}\rangle=1, \forall k$. Now, we can deduce from the inequality above that $ \langle \mathcal{A} y_{N_{k}}, x+\zeta_{k} u_{N_{k}}-y_{N_{k}}\rangle \geq 0,\forall k $. Since $\mathcal{A}$ is pseudomonotone on $H$, we can show that $$\langle \mathcal{A}\left(x+\zeta_{k} u_{N_{k}}\right), x+\zeta_{k} u_{N_{k}}-y_{N_{k}}\rangle \geq 0\,,$$ which further yields that $$\label{pu}
\langle \mathcal{A} x, x-y_{N_{k}}\rangle \geq\langle \mathcal{A} x-\mathcal{A}\left(x+\zeta_{k} u_{N_{k}}\right), x+\zeta_{k} u_{N_{k}}-y_{N_{k}}\rangle-\zeta_{k}\langle \mathcal{A} x, u_{N_{k}}\rangle\,.$$ Now, we prove that $\lim _{k \rightarrow \infty} \zeta_{k} u_{N_{k}}=0 $. We get that $y_{N_{k}} \rightharpoonup z$ since $s_{n_{k}} \rightharpoonup z$ and $\lim _{k \rightarrow \infty} \| s_{n_{k}}-y_{n_{k}} \|=0$. From $\{y_{n}\} \subset C$, we have $z \in C $. Since $\mathcal{A}$ is sequentially weakly continuous on $C$, the sequence $\{\mathcal{A} y_{n_{k}}\}$ converges weakly to $\mathcal{A} z $. We may assume that $\mathcal{A} z \ne 0$ (otherwise, $z$ is a solution). Since the norm mapping is sequentially weakly lower semicontinuous, we obtain $ 0<\|\mathcal{A} z\| \leq \liminf _{k \rightarrow \infty}\|\mathcal{A} y_{n_{k}}\| $. Using $\{y_{N_{k}}\} \subset\{y_{n_{k}}\}$ and $\zeta_{k} \rightarrow 0$ as $k \rightarrow \infty$, we have $$0 \leq \limsup _{k \rightarrow \infty}\|\zeta_{k} u_{N_{k}}\|=\limsup _{k \rightarrow \infty}\Big(\frac{\zeta_{k}}{\|\mathcal{A} y_{n_{k}}\|}\Big) \leq \frac{\limsup _{k \rightarrow \infty} \zeta_{k}}{\liminf _{k \rightarrow \infty}\|\mathcal{A} y_{n_{k}}\|}=0\,.$$ That is, $\lim _{k \rightarrow \infty} \zeta_{k} u_{N_{k}}=0$. Thus, using the Lipschitz continuity of $\mathcal{A}$, the boundedness of the sequences $\{y_{N_{k}}\}$ and $\{u_{N_{k}}\}$, and $\lim _{k \rightarrow \infty} \zeta_{k} u_{N_{k}}=0 $, we conclude from the inequality above that $ \liminf _{k \rightarrow \infty}\langle \mathcal{A} x, x-y_{N_{k}}\rangle \geq 0 $. Therefore, $$\langle \mathcal{A} x, x-z\rangle=\lim _{k \rightarrow \infty}\langle \mathcal{A} x, x-y_{N_{k}}\rangle=\liminf _{k \rightarrow \infty}\langle \mathcal{A} x, x-y_{N_{k}}\rangle \geq 0, \quad \forall x\in C\,.$$ Consequently, we observe that $z \in \mathrm{VI}({C}, {\mathcal{A}})$ by Lemma \[lem21\]. This completes the proof.
If $\mathcal{A}$ is monotone, then the sequential weak continuity assumption on $\mathcal{A}$ can be dropped; see [@DSC].
\[lem33\] Suppose that Assumptions \[con1\]–\[con3\] hold. Let sequences $\{z_{n}\}$ and $ \{y_{n}\} $ be formulated by Algorithm \[alg1\]. Then, we have $$\|z_{n}-u\|^{2} \leq\|s_{n}-u\|^{2}-\Big(1-\phi^{2} \frac{\gamma_{n}^{2}}{\gamma_{n+1}^{2}}\Big)\|s_{n}-y_{n}\|^{2},\quad \forall u \in \mathrm{VI}(C, \mathcal{A})\,,$$ and $$\|z_{n}-y_{n}\| \leq \phi \frac{\gamma_{n}}{\gamma_{n+1}}\|s_{n}-y_{n}\|\,.$$
First, using the definition of $\left\{\gamma_{n}\right\}$, one obtains $$\label{q}
\left\|\mathcal{A} s_{n}-\mathcal{A} y_{n}\right\| \leq \frac{\phi}{\gamma_{n+1}}\left\|s_{n}-y_{n}\right\|, \quad \forall n\,.$$ Indeed, if $\mathcal{A} s_{n}=\mathcal{A} y_{n}$, then the inequality clearly holds. Otherwise, it follows from the definition of $\gamma_{n+1}$ that $$\gamma_{n+1}=\min \left\{\frac{\phi\left\|s_{n}-y_{n}\right\|}{\left\|\mathcal{A} s_{n}-\mathcal{A} y_{n}\right\|}, \gamma_{n}\right\} \leq \frac{\phi\left\|s_{n}-y_{n}\right\|}{\left\|\mathcal{A} s_{n}-\mathcal{A} y_{n}\right\|}\,.$$ Consequently, we have $$\left\|\mathcal{A} s_{n}-\mathcal{A} y_{n}\right\| \leq \frac{\phi}{\gamma_{n+1}}\left\|s_{n}-y_{n}\right\|\,.$$ Therefore, the inequality holds in both cases. From the definition of $ z_{n} $, one sees that $$\label{a}
\begin{aligned}
\|z_{n}-u\|^{2}=&\|y_{n}-\gamma_{n}\left(\mathcal{A} y_{n}-\mathcal{A} s_{n}\right)-u\|^{2} \\
=&\|y_{n}-u\|^{2}+\gamma_{n}^{2}\|\mathcal{A} y_{n}-\mathcal{A} s_{n}\|^{2}-2 \gamma_{n}\langle y_{n}-u, \mathcal{A} y_{n}-\mathcal{A} s_{n}\rangle \\
=&\|s_{n}-u\|^{2}+\|y_{n}-s_{n}\|^{2}+2\langle y_{n}-s_{n}, s_{n}-u\rangle \\
&+\gamma_{n}^{2}\|\mathcal{A} y_{n}-\mathcal{A} s_{n}\|^{2}-2 \gamma_{n}\langle y_{n}-u, \mathcal{A} y_{n}-\mathcal{A} s_{n}\rangle \\
=&\|s_{n}-u\|^{2}+\|y_{n}-s_{n}\|^{2}-2\langle y_{n}-s_{n}, y_{n}-s_{n}\rangle+2\langle y_{n}-s_{n}, y_{n}-u\rangle \\
&+\gamma_{n}^{2}\|\mathcal{A} y_{n}-\mathcal{A} s_{n}\|^{2}-2 \gamma_{n}\langle y_{n}-u, \mathcal{A} y_{n}-\mathcal{A} s_{n}\rangle \\
=&\|s_{n}-u\|^{2}-\|y_{n}-s_{n}\|^{2}+2\langle y_{n}-s_{n}, y_{n}-u\rangle \\
&+\gamma_{n}^{2}\|\mathcal{A} y_{n}-\mathcal{A} s_{n}\|^{2}-2 \gamma_{n}\langle y_{n}-u, \mathcal{A} y_{n}-\mathcal{A} s_{n}\rangle\,.
\end{aligned}$$ Since $y_{n}=P_{C}\left(s_{n}-\gamma_{n} \mathcal{A} s_{n}\right)$, using the property of projection, we obtain $$\langle y_{n}-s_{n}+\gamma_{n} \mathcal{A} s_{n}, y_{n}-u\rangle \leq 0\,,$$ or equivalently $$\label{z}
\langle y_{n}-s_{n}, y_{n}-u\rangle \leq-\gamma_{n}\langle \mathcal{A} s_{n}, y_{n}-u\rangle\,.$$ Combining the last three relations, we have $$\label{w}
\begin{aligned}
\|z_{n}-u\|^{2} \leq&\|s_{n}-u\|^{2}-\|y_{n}-s_{n}\|^{2}-2 \gamma_{n}\langle \mathcal{A} s_{n}, y_{n}-u\rangle+\phi^{2} \frac{\gamma_{n}^{2}}{\gamma_{n+1}^{2}}\|s_{n}-y_{n}\|^{2} \\
&-2 \gamma_{n}\langle y_{n}-u, \mathcal{A} y_{n}-\mathcal{A} s_{n}\rangle \\
\leq&\|s_{n}-u\|^{2}-\Big(1-\phi^{2} \frac{\gamma_{n}^{2}}{\gamma_{n+1}^{2}}\Big)\|s_{n}-y_{n}\|^{2}-2 \gamma_{n}\langle y_{n}-u, \mathcal{A} y_{n}\rangle\,.
\end{aligned}$$ From $u \in \mathrm{VI}(C, \mathcal{A})$, one has $\langle \mathcal{A} u, y_{n}-u\rangle \geq 0$. Using the pseudomonotonicity of $\mathcal{A}$, we get $$\label{s}
\langle \mathcal{A} y_{n}, y_{n}-u\rangle \geq 0\,.$$ Combining the last two inequalities, we can show that $$\|z_{n}-u\|^{2} \leq\|s_{n}-u\|^{2}-\Big(1-\phi^{2} \frac{\gamma_{n}^{2}}{\gamma_{n+1}^{2}}\Big)\|s_{n}-y_{n}\|^{2}\,.$$ According to the definition of $ z_{n} $ and the bound on $\left\|\mathcal{A} s_{n}-\mathcal{A} y_{n}\right\|$, we obtain $$\|z_{n}-y_{n}\| \leq \phi \frac{\gamma_{n}}{\gamma_{n+1}}\|s_{n}-y_{n}\|\,.$$ This completes the proof of Lemma \[lem33\].
\[thm41\] Suppose that Assumptions \[con1\]–\[con5\] hold. Then the iterative sequence $\{x_{n}\}$ formulated by Algorithm \[alg1\] converges to $u \in \mathrm{VI}(C, \mathcal{A})$ in norm, where $u = P_{\mathrm{VI}(C,\mathcal{A})} \circ f(u)$.
**Claim 1.** The sequence $\{x_{n}\}$ is bounded. According to Lemma \[lem31\], we get that $\lim _{n \rightarrow \infty}\big(1-\phi^{2} \frac{\gamma_{n}^{2}}{\gamma_{n+1}^{2}}\big)=1-\phi^{2}>0$. Therefore, there exists $n_{0} \in \mathbb{N}$ such that $
1-\phi^{2} \frac{\gamma_{n}^{2}}{\gamma_{n+1}^{2}}>0, \forall n \geq n_{0}\,.
$ From Lemma \[lem33\], one has $$\label{py}
\|z_{n}-u\| \leq\|s_{n}-u\|, \quad \forall n \geq n_{0}\,.$$ By the definition of $s_{n}$, one sees that $$\label{pl}
\begin{aligned}
\|s_{n}-u\| &=\|x_{n}+\delta_{n}\left(x_{n}-x_{n-1}\right)-u\| \\
& \leq\|x_{n}-u\|+\delta_{n}\|x_{n}-x_{n-1}\| \\
&=\|x_{n}-u\|+\varphi_{n} \cdot \frac{\delta_{n}}{\varphi_{n}}\|x_{n}-x_{n-1}\|\,.
\end{aligned}$$ From Remark \[rem31\], one gets $\frac{\delta_{n}}{\varphi_{n}}\|x_{n}-x_{n-1}\| \rightarrow 0$. Thus, there is a constant $Q_{1}>0$ that satisfies $$\label{ppl}
\frac{\delta_{n}}{\varphi_{n}}\|x_{n}-x_{n-1}\| \leq Q_{1},\quad \forall n \geq 1\,.$$ Using the last three relations, we obtain $$\label{pppl}
\|z_{n}-u\| \leq\|s_{n}-u\| \leq\|x_{n}-u\|+\varphi_{n} Q_{1},\quad \forall n \geq n_{0}\,.$$ Using the definition of $x_{n+1}$ and the bound above, we have $$\begin{aligned}
\|x_{n+1}-u\| &=\|\varphi_{n} f\left(z_{n}\right)+\left(1-\varphi_{n}\right) z_{n}-u\| \\
& \leq \varphi_{n}\|f\left(z_{n}\right)-f(u)\|+\varphi_{n}\|f(u)-u\|+\left(1-\varphi_{n}\right)\|z_{n}-u\| \\
& \leq \varphi_{n} \rho\|z_{n}-u\|+\varphi_{n}\|f(u)-u\|+\left(1-\varphi_{n}\right)\|z_{n}-u\| \\
&=\left(1-(1-\rho) \varphi_{n}\right)\|z_{n}-u\|+\varphi_{n}\|f(u)-u\|\\
& \leq\left(1-(1-\rho) \varphi_{n}\right)\|x_{n}-u\|+\varphi_{n} Q_{1}+\varphi_{n}\|f(u)-u\| \\
&=\left(1-(1-\rho) \varphi_{n}\right)\|x_{n}-u\|+(1-\rho) \varphi_{n} \frac{Q_{1}+\|f(u)-u\|}{1-\rho} \\
& \leq \max \Big\{\|x_{n}-u\|, \frac{Q_{1}+\|f(u)-u\|}{1-\rho}\Big\} \\
& \leq \cdots \leq \max \Big\{\|x_{n_{0}}-u\|, \frac{Q_{1}+\|f(u)-u\|}{1-\rho}\Big\},\,\, \forall n \geq n_{0}\,.
\end{aligned}$$ That is, $\{x_{n}\}$ is bounded. It follows that $ \{s_{n}\} $, $\{z_{n}\}$ and $\{f\left(z_{n}\right)\}$ are also bounded.
**Claim 2.** $$\Big(1-\phi^{2} \frac{\gamma_{n}^{2}}{\gamma_{n+1}^{2}}\Big)\|s_{n}-y_{n}\|^{2}\leq\|x_{n}-u\|^{2}-\|x_{n+1}-u\|^{2}+\varphi_{n} Q_{4}$$ for some $Q_{4}>0$. Indeed, it follows from the bound on $\|s_{n}-u\|$ that $$\label{lk}
\begin{aligned}
\|s_{n}-u\|^{2} & \leq(\|x_{n}-u\|+\varphi_{n} Q_{1})^{2} \\
&=\|x_{n}-u\|^{2}+\varphi_{n}(2 Q_{1}\|x_{n}-u\|+\varphi_{n} Q_{1}^{2}) \\
& \leq\|x_{n}-u\|^{2}+\varphi_{n} Q_{2}
\end{aligned}$$ for some $Q_{2}>0$. Combining Lemma \[lem33\] and the estimate above, we see that $$\label{plm}
\begin{aligned}
\|x_{n+1}-u\|^{2} & \leq \varphi_{n}\|f\left(z_{n}\right)-u\|^{2}+\left(1-\varphi_{n}\right)\|z_{n}-u\|^{2} \\
& \leq \varphi_{n}(\|f\left(z_{n}\right)-f(u)\|+\|f(u)-u\|)^{2}+\left(1-\varphi_{n}\right)\|z_{n}-u\|^{2} \\
&\leq \varphi_{n}(\|z_{n}-u\|+\|f(u)-u\|)^{2}+\left(1-\varphi_{n}\right)\|z_{n}-u\|^{2} \\
&=\varphi_{n}\|z_{n}-u\|^{2}+\left(1-\varphi_{n}\right)\|z_{n}-u\|^{2}\\
&\quad+\varphi_{n}(\|f(u)-u\|^{2}+2\|z_{n}-u\| \cdot\|f(u)-u\|) \\
&\leq\|z_{n}-u\|^{2}+\varphi_{n} Q_{3}\\
&\leq \|s_{n}-u\|^{2}-\Big(1-\phi^{2} \frac{\gamma_{n}^{2}}{\gamma_{n+1}^{2}}\Big)\|s_{n}-y_{n}\|^{2}+\varphi_{n} Q_{3}\\
&\leq \|x_{n}-u\|^{2}-\Big(1-\phi^{2} \frac{\gamma_{n}^{2}}{\gamma_{n+1}^{2}}\Big)\|s_{n}-y_{n}\|^{2}+\varphi_{n} Q_{4}
\end{aligned}$$ for some $Q_{3}>0$, where $Q_{4}:=Q_{2}+Q_{3}$. Therefore, we obtain $$\Big(1-\phi^{2} \frac{\gamma_{n}^{2}}{\gamma_{n+1}^{2}}\Big)\|s_{n}-y_{n}\|^{2}\leq\|x_{n}-u\|^{2}-\|x_{n+1}-u\|^{2}+\varphi_{n} Q_{4}\,.$$
**Claim 3.** $$\begin{aligned}
\|x_{n+1}-u\|^{2} \leq& \left(1-(1-\rho) \varphi_{n}\right)\|x_{n}-u\|^{2}+(1-\rho) \varphi_{n}\cdot\Big[\frac{3 Q}{1-\rho} \cdot \frac{\delta_{n}}{\varphi_{n}}\|x_{n}-x_{n-1}\|\Big. \\
&+\Big.\frac{2}{1-\rho}\langle f(u)-u, x_{n+1}-u\rangle\Big],\quad \forall n \geq n_{0}\,.
\end{aligned}$$ for some $Q>0$. Using the definition of $ s_{n} $, we can show that $$\label{pm}
\begin{aligned}
\|s_{n}-u\|^{2}
&=\|x_{n}+\delta_{n}\left(x_{n}-x_{n-1}\right)-u\|^{2} \\
&\leq\|x_{n}-u\|^{2}+2 \delta_{n}\|x_{n}-u\|\|x_{n}-x_{n-1}\|+\delta_{n}^{2}\|x_{n}-x_{n-1}\|^{2}\\
&\leq\|x_{n}-u\|^{2}+3Q\delta_{n}\|x_{n}-x_{n-1}\|\,,
\end{aligned}$$ where $Q:=\sup _{n \in \mathbb{N}}\{\|x_{n}-u\|, \delta\|x_{n}-x_{n-1}\|\}>0$. Using the estimate above, we get $$\begin{aligned}
&\quad \|x_{n+1}-u\|^{2} =\|\varphi_{n} f\left(z_{n}\right)+\left(1-\varphi_{n}\right) z_{n}-u\|^{2} \\
&=\|\varphi_{n}(f\left(z_{n}\right)-f(u))+\left(1-\varphi_{n}\right)(z_{n}-u)+\varphi_{n}(f(u)-u)\|^{2} \\
& \leq\|\varphi_{n}(f\left(z_{n}\right)-f(u))+\left(1-\varphi_{n}\right)(z_{n}-u)\|^{2}+2 \varphi_{n}\langle f(u)-u, x_{n+1}-u\rangle \\
& \leq \varphi_{n}\|f\left(z_{n}\right)-f(u)\|^{2}+\left(1-\varphi_{n}\right)\|z_{n}-u\|^{2}+2 \varphi_{n}\langle f(u)-u, x_{n+1}-u\rangle \\
& \leq \varphi_{n} \rho^{2}\|z_{n}-u\|^{2}+\left(1-\varphi_{n}\right)\|z_{n}-u\|^{2}+2 \varphi_{n}\langle f(u)-u, x_{n+1}-u\rangle \\
&\leq\left(1-(1-\rho) \varphi_{n}\right)\|z_{n}-u\|^{2}+2 \varphi_{n}\langle f(u)-u, x_{n+1}-u\rangle \\
&\leq \left(1-(1-\rho) \varphi_{n}\right)\|x_{n}-u\|^{2}+(1-\rho) \varphi_{n}\cdot\Big[\frac{3 Q}{1-\rho} \cdot \frac{\delta_{n}}{\varphi_{n}}\|x_{n}-x_{n-1}\|\Big. \\
&\quad+ \Big.\frac{2}{1-\rho}\langle f(u)-u, x_{n+1}-u\rangle\Big],\quad \forall n \geq n_{0}\,.
\end{aligned}$$
**Claim 4.** $\{\|x_{n}-u\|^{2}\}$ converges to zero. By Lemma \[lem22\] and Remark \[rem31\], it remains to show that $\limsup _{k \rightarrow \infty}\langle f(u)-u, x_{n_{k}+1}-u\rangle \leq 0$ for every subsequence $\{\|x_{n_{k}}-u\|\}$ of $\{\|x_{n}-u\|\}$ satisfying $ \liminf _{k \rightarrow \infty}\big(\|x_{n_{k}+1}-u\|-\|x_{n_{k}}-u\|\big) \geq 0 $.
For this purpose, we assume that $\{\|x_{n_{k}}-u\|\}$ is a subsequence of $\{\|x_{n}-u\|\}$ such that $$\liminf _{k \rightarrow \infty}\left(\|x_{n_{k}+1}-u\|-\|x_{n_{k}}-u\|\right) \geq 0\,.$$ Then, $$\begin{aligned}
&\quad\liminf _{k \rightarrow \infty} \big(\|x_{n_{k}+1}-u\|^{2}-\|x_{n_{k}}-u\|^{2}\big) \\
&=\liminf _{k \rightarrow \infty}\big[(\|x_{n_{k}+1}-u\|-\|x_{n_{k}}-u\|)(\|x_{n_{k}+1}-u\|+\|x_{n_{k}}-u\|)\big] \geq 0\,.
\end{aligned}$$ It follows from Claim 2 and Assumption \[con5\] that $$\begin{aligned}
&\quad \limsup _{k \rightarrow \infty}\big(1-\phi^{2} \frac{\gamma_{n_{k}}^{2}}{\gamma_{n_{k}+1}^{2}}\big)\|s_{n_{k}}-y_{n_{k}}\|^{2} \\
& \leq \limsup _{k \rightarrow \infty}\big[\|x_{n_{k}}-u\|^{2}-\|x_{n_{k}+1}-u\|^{2}\big]+\limsup _{k \rightarrow \infty} \varphi_{n_{k}} Q_{4} \\
&=-\liminf _{k \rightarrow \infty}\big[\|x_{n_{k}+1}-u\|^{2}-\|x_{n_{k}}-u\|^{2}\big] \\
& \leq 0\,,
\end{aligned}$$ which yields that $ \lim _{k \rightarrow \infty}\|s_{n_{k}}-y_{n_{k}}\|=0$. From Lemma \[lem33\], we obtain $\lim _{k \rightarrow \infty}\|z_{n_{k}}-y_{n_{k}}\|=0 $. Hence, $\lim _{k \rightarrow \infty}\|z_{n_{k}}-s_{n_{k}}\|=0$.
Moreover, using Remark \[rem31\] and Assumption \[con5\], we have $$\|x_{n_{k}}-s_{n_{k}}\|=\delta_{n_{k}}\|x_{n_{k}}-x_{n_{k}-1}\|=\varphi_{n_{k}} \cdot \frac{\delta_{n_{k}}}{\varphi_{n_{k}}}\|x_{n_{k}}-x_{n_{k}-1}\| \rightarrow 0\,,$$ and $$\|x_{n_{k}+1}-z_{n_{k}}\|=\varphi_{n_{k}}\|z_{n_{k}}-f\left(z_{n_{k}}\right)\| \rightarrow 0\,.$$ Therefore, we conclude that $$\label{pj}
\|x_{n_{k}+1}-x_{n_{k}}\| \leq\|x_{n_{k}+1}-z_{n_{k}}\|+\|z_{n_{k}}-s_{n_{k}}\|+\|s_{n_{k}}-x_{n_{k}}\| \rightarrow 0\,.$$ Since $\{x_{n_{k}}\}$ is bounded, one asserts that there is a subsequence $\{x_{n_{k_{j}}}\}$ of $\{x_{n_{k}}\}$ that satisfies $ x_{n_{k_{j}}}\rightharpoonup q$. Furthermore, $$\label{pk}
\limsup _{k \rightarrow \infty}\langle f(u)-u, x_{n_{k}}-u\rangle=\lim _{j \rightarrow \infty}\langle f(u)-u, x_{n_{k_{j}}}-u\rangle=\langle f(u)-u, q-u\rangle\,.$$ We get $s_{n_{k}} \rightharpoonup q$ since $ \|x_{n_{k}}-s_{n_{k}}\|\rightarrow 0 $. This together with $ \lim _{k \rightarrow \infty}\|s_{n_{k}}-y_{n_{k}}\|=0$ and Lemma \[lem32\] gives $q \in \mathrm{VI}(C, \mathcal{A}) $. By the definition of $ u = P_{\mathrm{VI}(C,\mathcal{A})} \circ f(u) $ and the characterization of the projection, we infer that $$\label{pn}
\limsup _{k \rightarrow \infty}\langle f(u)-u, x_{n_{k}}-u\rangle=\langle f(u)-u, q-u\rangle \leq 0\,.$$ Combining and , we see that $$\label{pb}
\begin{aligned}
\limsup _{k \rightarrow \infty}\langle f(u)-u, x_{n_{k}+1}-u\rangle & \leq \limsup _{k \rightarrow \infty}\langle f(u)-u, x_{n_{k}}-u\rangle \leq 0\,.
\end{aligned}$$ Thus, from Remark \[rem31\], Claim 3 and Lemma \[lem22\], we conclude that $ x_{n}\rightarrow u $. The proof of Theorem \[thm41\] is now complete.
If the inertial parameter $ \delta_{n} $ is set to zero in Algorithm \[alg1\], we obtain the following result.
Assume that the mapping $\mathcal{A}: H \rightarrow H$ is $ L $-Lipschitz continuous and pseudomonotone on $H$, and sequentially weakly continuous on $C$. Let the mapping $f: H \rightarrow H$ be $\rho$-contractive with $\rho \in[0,1)$. Given $\gamma_{0}>0$, let $ \{\varphi_{n}\}\subset (0,1) $ satisfy $\lim _{n \rightarrow \infty} \varphi_{n}=0$ and $\sum_{n=1}^{\infty} \varphi_{n}=\infty$. Let $x_{0}$ be the initial point and $ \{x_{n}\} $ be the sequence generated by $$\label{coro1}
\left\{\begin{aligned}
&y_{n}=P_{C}\left(x_{n}-\gamma_{n} \mathcal{A} x_{n}\right) \,,\\
&z_{n}=y_{n}-\gamma_{n}\left(\mathcal{A} y_{n}-\mathcal{A} x_{n}\right)\,,\\
&x_{n+1}=\varphi_{n} f\left(z_{n}\right) + \left(1-\varphi_{n}\right) z_{n}\,,
\end{aligned}\right.$$ where the step size $ \{\gamma_{n}\} $ is updated as in Algorithm \[alg1\] (with $ s_{n} $ replaced by $ x_{n} $). Then the iterative sequence $\{x_{n}\}$ generated by this scheme converges to $u \in \mathrm{VI}(C, \mathcal{A})$ in norm, where $u = P_{\mathrm{VI}(C,\mathcal{A})} \circ f(u)$.
It should be pointed out that the above scheme improves and generalizes [@THna Algorithm 3] and [@YLNA Algorithm 1]. Moreover, our algorithm solves pseudomonotone variational inequalities, while those of [@THna] and [@YLNA] solve monotone ones. Since the class of pseudomonotone mappings contains the class of monotone mappings, our algorithm is more widely applicable.
Numerical examples {#sec4}
==================
In this section, we give some computational tests and applications to show the numerical behavior of our algorithm, and to compare it with some strongly convergent algorithms (Algorithms and ). It should be emphasized that all of the algorithms work without prior knowledge of the Lipschitz constant of the mapping. We use the FOM Solver [@FOM] to compute the projections onto $ C $ and $ T_{n} $. All programs are implemented in MATLAB 2018a on a personal computer. The parameters are chosen as follows:
- $ \phi=0.8 $, $ \gamma_{1}=1 $, $ \delta=0.3 $, $ \epsilon_{n}=1/(n+1)^2 $, $ \varphi_{n}=1/(n+1) $, $ f(x)=0.9x $ for the proposed Algorithm \[alg1\] and the Algorithm ;
- $ \alpha=\ell=0.5 $, $ \phi=0.4 $, $ \varphi_{n}=1/(n+1) $, $ \tau_{n}=0.5(1-\varphi_{n}) $ for the Algorithm .
In our numerical examples, when the number of iterations is the same, we use the runtime in seconds to measure the computational performance of all algorithms. If the solution $x^{*}$ of the problem is known, we use $E(x)=\left\|x-x^{*}\right\|$ to measure the behavior of the algorithms. Otherwise, in view of the characterization of solutions to , we use the sequences $ D_{n}=\|x_{n}-x_{n-1}\| $ and $ E_{n}=\|s_{n}-P_{C}(s_{n}-\gamma_{n}\mathcal{A} s_{n})\| $ to study the performance of the algorithms. Note that if $E_{n}\rightarrow 0$, then $x_{n}$ can be regarded as an approximate solution of .
\[ex1\] Let ${\mathcal{A}}: R^{m} \rightarrow R^{m}\, (m=5,10,15,20)$ be the operator given by $${{\mathcal{A}}}(x)=\frac{1}{\|x\|^{2}+1} \operatorname{argmin}_{y \in R^{m}} \Big\{\frac{\|y\|^{4}}{4}+\frac{1}{2}\|x-y\|^{2}\Big\}\,.$$ We emphasize that the operator ${\mathcal{A}}$ is not monotone; however, it is Lipschitz continuous and pseudomonotone (see [@HCXK]). In this example, we choose the feasible set to be the box constraint $ C=[-5,5]^{m} $. The initial values $ x_{0} = x_{1} $ are randomly generated by *rand(m,1)* in MATLAB, and a maximum of $ 50 $ iterations serves as a common stopping criterion. For the four different dimensions of the operator $ \mathcal{A} $, the numerical results are presented in Figs. \[ex1\_data5\]–\[ex1\_data20\].
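The argmin in the definition of $\mathcal{A}$ is the proximal point of $\|y\|^{4}/4$ at $x$; its optimality condition $y(1+\|y\|^{2})=x$ reduces to the scalar cubic $t^{3}+t=\|x\|$ for $t=\|y\|$. The following Python sketch (ours, not from [@HCXK]) evaluates $\mathcal{A}$ this way:

```python
import numpy as np

def A_example1(x):
    """Evaluate the Example 1 operator.

    The inner argmin y satisfies y(1 + ||y||^2) = x, so y = (t/||x||) x,
    where t = ||y|| is the unique real root of t^3 + t = ||x||
    (the cubic is strictly increasing, so the real root is unique).
    """
    r = np.linalg.norm(x)
    if r == 0.0:
        return np.zeros_like(x)
    roots = np.roots([1.0, 0.0, 1.0, -r])          # t^3 + t - r = 0
    t = float(np.real(next(c for c in roots if abs(np.imag(c)) < 1e-8)))
    y = (t / r) * x                                 # the argmin
    return y / (r ** 2 + 1.0)
```

A quick consistency check is to recover the argmin from the returned value and verify the cubic equation it must satisfy.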
\[ex2\] In the second example, we consider the linear operator $ \mathcal{A}: R^{m}\rightarrow R^{m} $ ($ m=5,10,15,20 $) given by $\mathcal{A}(x)=Gx+g$, where $g\in R^{m}$ and $G=BB^{\mathsf{T}}+M+E$ with $B\in R^{m\times m}$, $M\in R^{m\times m}$ skew-symmetric, and $E\in R^{m\times m}$ a diagonal matrix with non-negative diagonal entries (hence $ G $ is positive definite). We choose the feasible set as $C=\left\{x \in {R}^{m}:-2 \leq x_{i} \leq 5, \, i=1, \ldots, m\right\}$. The mapping $\mathcal{A}$ is then strongly pseudomonotone and Lipschitz continuous. In this numerical example, the entries of $B$ and $M$ are randomly generated in $[-2,2]$, the diagonal entries of $E$ are generated randomly in $[0,2]$, and $ g = \mathbf{0} $. It is easy to see that the solution of the problem is $ x^{*}=\mathbf{0} $. A maximum of $ 1000 $ iterations serves as a common stopping criterion, and the initial values $ x_{0} = x_{1} $ are randomly generated by *rand(m,1)* in MATLAB. The numerical results with elapsed time are described in Fig. \[ex2\_res\].
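A sketch of how such a random test problem can be generated (the matrix construction follows the description above; the seed and dimension are our choices). Since $x^{\mathsf{T}}Mx=0$ for skew-symmetric $M$, the quadratic form of $G$ reduces to $\|B^{\mathsf{T}}x\|^{2}+x^{\mathsf{T}}Ex\geq 0$, which is what makes $\mathcal{A}$ monotone (and strongly pseudomonotone when this form is positive):

```python
import numpy as np

rng = np.random.default_rng(1)          # fixed seed for reproducibility
m = 5
B = rng.uniform(-2.0, 2.0, (m, m))
S = rng.uniform(-2.0, 2.0, (m, m))
M = S - S.T                             # skew-symmetric
E = np.diag(rng.uniform(0.0, 2.0, m))   # non-negative diagonal
G = B @ B.T + M + E
A = lambda x: G @ x                     # g = 0, so A(x) = Gx

# x^T M x = 0 for skew-symmetric M, hence
# x^T G x = ||B^T x||^2 + x^T E x >= 0 for every x
x = rng.standard_normal(m)
assert x @ G @ x >= -1e-12
```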
\[ex3\] Finally, we focus on a case in the Hilbert space $H=L^{2}[0,1]$ with inner product $$\langle x, y\rangle=\int_{0}^{1} x(t) y(t) \mathrm{d} t\,,$$ and norm $$\|x\|=\Big(\int_{0}^{1} x(t)^{2} \mathrm{d} t\Big)^{1 / 2}\,.$$ Let $b$ and $B$ be two positive numbers such that $B/(m+1)<b/m<b<B$ for some $m>1$. We select the feasible set as $C=\{x \in H:\|x\| \leq b\}$. The operator $\mathcal{A}: H \rightarrow H$ is of the form $$\mathcal{A}(x)=(B-\|x\|) x, \quad \forall x \in H\,.$$ The operator $\mathcal{A}$ is not monotone. Indeed, consider the pair $(x^{\ddagger}, y^{\ddagger})$ with $y^{\ddagger}=m x^{\ddagger}$, where $x^{\ddagger} \in C$ is chosen to satisfy $B /(m+1)<\|x^{\ddagger}\|<b / m$; then $y^{\ddagger}=m x^{\ddagger} \in C $. A direct computation gives $$\langle \mathcal{A}(x^{\ddagger})-\mathcal{A}(y^{\ddagger}), x^{\ddagger}-y^{\ddagger}\rangle=(1-m)^{2}\|x^{\ddagger}\|^{2}(B-(1+m)\|x^{\ddagger}\|)<0\,.$$ Hence, the operator $\mathcal{A}$ is not monotone on $C$. Next, we show that $\mathcal{A}$ is pseudomonotone. Suppose that $\langle \mathcal{A}(x), y-x\rangle \geq 0$ for $x, y \in C$, that is, $\langle(B-\|x\|) x, y-x\rangle \geq 0$. Since $\|x\| \leq b <B $, this gives $\langle x, y-x\rangle \geq 0$. Therefore, $$\begin{aligned}
\langle \mathcal{A}(y), y-x\rangle &=\langle(B-\|y\|) y, y-x\rangle \\
& \geq(B-\|y\|)(\langle y, y-x\rangle-\langle x, y-x\rangle) \\
&=(B-\|y\|)\|y-x\|^{2} \geq 0\,.
\end{aligned}$$ For the experiment, we take $B=1.5$, $b=1$, $m=1.1$. The solution of the problem is $x^{*}(t)=0 $, and a maximum of $ 50 $ iterations serves as the stopping criterion. Fig. \[ex3\_res\] shows the behavior of the function $E_{n}=\left\|x_{n}(t)-x^{*}(t)\right\|$ for all algorithms with four initial points $ x_{0}(t)=x_{1}(t) $ (Case I: $x_{1}(t)=t^2$, Case II: $x_{1}(t)=\cos(t)$, Case III: $x_{1}(t)=\sin(2t)$ and Case IV: $x_{1}(t)=2^{t}$).
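The non-monotonicity argument above can also be checked numerically by discretizing $L^{2}[0,1]$. The following sketch (our discretization, using trapezoidal quadrature and the experiment's values $B=1.5$, $b=1$, $m=1.1$) picks a constant function $x^{\ddagger}$ with $B/(m+1)<\|x^{\ddagger}\|<b/m$ and verifies that the monotonicity inequality fails:

```python
import numpy as np

B_, b_, m_ = 1.5, 1.0, 1.1
t = np.linspace(0.0, 1.0, 1001)
w = np.full_like(t, 1.0 / 1000)
w[0] = w[-1] = 0.5 / 1000                     # trapezoidal weights on [0,1]
norm = lambda x: np.sqrt(np.sum(w * x ** 2))  # discretized L^2 norm
Aop = lambda x: (B_ - norm(x)) * x

# choose x with B/(m+1) < ||x|| < b/m: the constant function 0.8
# (here B/(m+1) ~ 0.714 and b/m ~ 0.909)
x = np.full_like(t, 0.8)
y = m_ * x                                    # ||y|| = 0.88 <= b, so y is in C
val = np.sum(w * (Aop(x) - Aop(y)) * (x - y))
assert val < 0.0                              # monotonicity fails
```

The computed value agrees with the closed form $(1-m)^{2}\|x\|^{2}(B-(1+m)\|x\|)$ derived above, since $x$ and $y$ are collinear.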
1. From Figs. \[ex1\_data5\]–\[ex3\_res\], we can see that our proposed algorithm converges quickly and has better computational performance than the existing algorithms. Moreover, these results are independent of the choice of initial values and of the problem dimension. Therefore, our algorithm is robust.
2. It should be emphasized that Algorithm needs more running time to achieve the same error accuracy because it uses Armijo-type rules to update the step size automatically, and this update criterion requires evaluating the operator $ \mathcal{A} $ several times in each iteration. In contrast, our proposed Algorithm \[alg1\] updates the step size by a simple calculation using previously known information in each iteration, which makes it converge faster.
3. Note that the operator $ \mathcal{A} $ is pseudomonotone or strongly pseudomonotone in our numerical experiments. In this setting, the algorithms of [@THna; @YLNA; @FQOPT2020] for solving the monotone are not applicable. Therefore, our proposed algorithm is better suited to practical applications.
Next, we use our proposed Algorithm \[alg1\] to solve the variational inequality that appears in optimal control problems. Recently, many scholars have proposed different methods to solve it; we refer the reader to [@PV; @VS2019; @HSM2020] for the algorithms and a detailed description of the problem.
\[ex41\] $$\begin{aligned}
\text{minimize} \;\;\; &x_{2}(3 \pi)\\
\text{subject to} \;\;\; & \dot{x}_{1}(t)=x_{2}(t)\,, \\
\;\;\;& \dot{x}_{2}(t) =-x_{1}(t)+u(t), \;\;\forall t \in[0,3 \pi]\,, \\
\;\;\;& x(0) =0\,, \\
\;\;\;& u(t) \in[-1,1]\,.
\end{aligned}$$ The exact optimal control of Example \[ex41\] is known: $$u^{*}(t)=\left\{\begin{aligned}
1, \quad & \text { if }\; t \in[0, \pi / 2) \cup(3 \pi / 2,5 \pi / 2)\,; \\
-1, \quad & \text { if }\; t \in(\pi / 2,3 \pi / 2) \cup(5 \pi / 2,3 \pi]\,.
\end{aligned}\right.$$ Our parameters are set as follows: $$N=100, \phi=0.1, \gamma_{1}=0.4, \delta=0.3, \epsilon_{n}=\frac{10^{-4}}{(n+1)^2}, \varphi_{n}=\frac{10^{-4}}{n+1}, f(x)=0.1x\,.$$ The initial controls $ u_{0}(t)=u_{1}(t) $ are randomly generated in $ [-1,1] $, and the stopping criterion is $\left\|u_{n+1}-u_{n}\right\| \leq 10^{-4} $ or a maximum of $ 1000 $ iterations. After $ 122 $ iterations, Algorithm \[alg1\] took $ 0.059839 $ seconds to reach the required error accuracy. Fig. \[ex3\_fig\] shows the approximate optimal control and the corresponding trajectories of Algorithm \[alg1\].
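The exact control above can be verified independently. By variation of constants, $x_{2}(3\pi)=\int_{0}^{3\pi}\cos(3\pi-s)\,u(s)\,\mathrm{d}s=-\int_{0}^{3\pi}\cos(s)\,u(s)\,\mathrm{d}s$, which is minimized pointwise by $u(s)=\operatorname{sign}(\cos s)$ — exactly the bang-bang control displayed above — with optimal value $-\int_{0}^{3\pi}|\cos s|\,\mathrm{d}s=-6$. A short Python sketch (ours, not part of the paper's code) integrates the system under $u^{*}$ with RK4 and recovers this value:

```python
import numpy as np

def u_star(t):
    # exact bang-bang optimal control of the example
    return 1.0 if t < np.pi / 2 or 3 * np.pi / 2 < t < 5 * np.pi / 2 else -1.0

def simulate(u, T=3 * np.pi, n=30000):
    """RK4 integration of x1' = x2, x2' = -x1 + u(t), from x(0) = 0."""
    h = T / n
    x = np.zeros(2)
    rhs = lambda t, x: np.array([x[1], -x[0] + u(t)])
    for k in range(n):
        t = k * h
        k1 = rhs(t, x)
        k2 = rhs(t + h / 2, x + h / 2 * k1)
        k3 = rhs(t + h / 2, x + h / 2 * k2)
        k4 = rhs(t + h, x + h * k3)
        x = x + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

print(simulate(u_star)[1])   # close to the optimal value -6
```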
We now consider an example in which the terminal function is not linear.
\[ex42\] $$\begin{aligned}
\text{minimize} \;\;\; & -x_{1}(2)+\left(x_{2}(2)\right)^{2}\,, \\
\text{subject to} \;\;\; & \dot{x}_{1}(t)=x_{2}(t)\,, \\
\;\;\; & \dot{x}_{2}(t)=u(t), \;\;\forall t \in[0,2]\,, \\
\;\;\; & x_{1}(0)=0, \;\;x_{2}(0)=0\,, \\
\;\;\; & u(t) \in[-1,1]\,.
\end{aligned}$$ The exact optimal control of Example \[ex42\] is $$u^{*}(t)=\left\{\begin{aligned}
1 \quad& \text { if }\; t \in[0,1.2)\,; \\
-1 \quad& \text { if }\; t \in(1.2,2]\,.
\end{aligned}\right.$$ In this example, the parameters of our algorithm are set as in Example \[ex41\]. After the maximum allowed $1000$ iterations, Algorithm \[alg1\] took $0.39932$ seconds but did not reach the required error accuracy; reaching the allowable error range may require more iterations. The approximate optimal control and the corresponding trajectories of Algorithm \[alg1\] are plotted in Fig. \[ex4\_fig\].
As can be seen from Examples \[ex41\] and \[ex42\], the algorithm proposed in this paper works well on optimal control problems. It should be pointed out that it performs better when the terminal function is linear rather than nonlinear (cf. Figs. \[ex3\_fig\] and \[ex4\_fig\]).
The conclusion {#sec5}
==============
In this paper, based on the inertial method, Tseng's extragradient method and the viscosity method, we introduced a new extragradient algorithm to solve pseudomonotone variational inequalities in a Hilbert space. The main benefit of the suggested method is that only one projection needs to be calculated in each iteration. The convergence of the algorithm was proved without prior knowledge of the Lipschitz constant of the mapping. Moreover, our algorithm includes an inertial term, which greatly improves its convergence speed. Our numerical experiments showed that the proposed algorithm improves on some existing algorithms in the literature. As an application, the variational inequality problem arising in optimal control was also studied.
[99]{} Cuong, T.H., Yao, J.C., Yen, N.D.: Qualitative properties of the minimum sum-of-squares clustering problem. Optimization, (2020), DOI: 10.1080/02331934.2020.1778685.
Khan, A.A., Sama, M.: Optimal control of multivalued quasi variational inequalities. Nonlinear Anal. **75**, 1419–1428 (2012)
Sahu, D.R., Yao, J.C., Verma, M., Shukla, K.K.: Convergence rate analysis of proximal gradient methods with applications to composite minimization problems. Optimization (2020). DOI: 10.1080/02331934.2019.1702040
Cho, S.Y., Li, W., Kang, S.M.: Convergence analysis of an iterative algorithm for monotone operators. J. Inequal. Appl. **2013**, 199 (2013).
Shehu, Y., Iyiola, O.S.: Strong convergence result for monotone variational inequalities. Numer. Algorithms **76**, 259–282 (2017)
Tan, B., Xu, S., Li, S.: Inertial shrinking projection algorithms for solving hierarchical variational inequality problems. J. Nonlinear Convex Anal. **21**, 871–884 (2020)
Malitsky, Y.: Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim. **25**, 502–520 (2015)
Malitsky, Y.: Proximal extrapolated gradient methods for variational inequalities. Optim. Methods Softw. **33**, 140–164 (2018)
Malitsky, Y.: Golden ratio algorithms for variational inequalities. Math. Program. (2019). DOI:10.1007/s10107-019-01416-w
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekonomika i Mat. Metody. **12**, 747–756 (1976)
Shehu, Y., Iyiola, O.S., Li, X.H., Dong, Q.-L.: Convergence analysis of projection method for variational inequalities. Comput. Appl. Math. [**38**]{}, 161 (2019)
Shehu, Y., Li, X.H., Dong, Q.-L.: An efficient projection-type method for monotone variational inequalities in Hilbert spaces. Numer. Algorithms **84**, 365–388 (2020)
Tan, B., Fan, J., Li, S.: Self adaptive inertial extragradient algorithms for solving variational inequality problems. arXiv preprint. arXiv: 2006.04287 (2020)
Vuong, P.T.: On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. **176**, 399–409 (2018)
Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. **38**, 431–446 (2000)
Bot, R.I., Csetnek, E.R., Vuong, P.T.: The forward-backward-forward method from continuous and discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces. European J. Oper. Res. **287**, 49–60 (2020)
Censor, Y., Gibali, A., Reich, S.: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. **26**, 827–845 (2011)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. **148**, 318–335 (2011)
Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization **61**, 1119–1132 (2012)
Thong, D.V., Shehu, Y., Iyiola, O.S.: Weak and strong convergence theorems for solving pseudo-monotone variational inequalities with non-Lipschitz mappings. Numer. Algorithms (2019). DOI:10.1007/s11075-019-00780-0
Thong, D.V., Vuong, P.T.: Modified Tseng’s extragradient methods for solving pseudo-monotone variational inequalities. Optimization **68**, 2207–2226 (2019)
Polyak, B.T.: Some methods of speeding up the convergence of iteration methods, USSR. Comput. Math. Math. Phys. **4**, 1–17 (1964)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. **2**, 183–202 (2009)
Gibali, A., Hieu, D.V.: A new inertial double-projection method for solving variational inequalities. J. Fixed Point Theory Appl. **21**, 97 (2019)
Zhou, Z., Tan, B., Li, S.: A new accelerated self-adaptive stepsize algorithm with excellent stability for split common fixed point problems. Comput. Appl. Math. **39**, Article ID 220 (2020)
Thong, D.V., Hieu, D.V., Rassias T.M.: Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems. Optim. Lett. **14**, 115–144 (2020)
Thong, D.V., Hieu D.V.: Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms **78**, 1045–1060 (2018)
Yang, J., Liu, H.: Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer. Algorithms **80**, 741–752 (2019)
Fan, J., Qin, X.: Weak and strong convergence of inertial Tseng’s extragradient algorithms for solving variational inequality problems. Optimization (2020). DOI:10.1080/02331934.2020.1789129
Cottle, R.W., Yao, J.C.: Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. **75**, 281–295 (1992)
Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. **75**, 742–750 (2012)
Denisov, S.V., Semenov, V.V., Chabak, L.M.: Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern Syst Anal. **51**, 757–765 (2015)
Beck, A., Guttmann-Beck, N.: FOM—a MATLAB toolbox of first-order methods for solving convex optimization problems. Optim. Methods Softw. **34**, 172–193 (2019)
Hieu, D.V., Cho, Y.J., Xiao, Y.-b., Kumam, P.: Relaxed extragradient algorithm for solving pseudomonotone variational inequalities in Hilbert spaces. Optimization (2019). DOI:10.1080/02331934.2019.1683554
Preininger, J., Vuong, P.T.: On the convergence of the gradient projection method for convex optimal control problems with bang-bang solutions. Comput. Optim. Appl. **70**, 221–238 (2018)
Vuong, P.T., Shehu, Y.: Convergence of an extragradient-type method for variational inequality with applications to optimal control problems. Numer. Algorithms **81**, 269-291 (2019)
Hieu, D.V., Strodiot, J.J., Muu, L.D.: Strongly convergent algorithms by using new adaptive regularization parameter for equilibrium problems. J. Comput. Appl. Math. Article ID 112844 (2020)
Pietrus, A., Scarinci, T., Veliov, V.M.: High order discrete approximations to Mayer’s problems for linear systems. SIAM J. Control Optim. **56**, 102–119 (2018)
Bressan, A., Piccoli, B.: Introduction to the Mathematical Theory of Control. AIMS Series on Applied Mathematics (2007)
---
address:
- |
P. Kern\
Mathematisches Institut\
Heinrich Heine University\
40225 Düsseldorf\
Germany\
- |
M. M. Meerschaert\
Department of Statistics and Probability\
Michigan State University\
East Lansing, Michigan 48824\
USA\
- |
H.-P. Scheffler\
Fachbereich Mathematik\
University of Siegen\
57068 Siegen\
Germany\
author:
-
-
-
title: 'Correction: Limit theorems for coupled continuous time random walks'
---
*Ann. Probab.* **32** (2004) 730–756
The converse portion of Theorem 2.2 requires an additional condition: the probability measure $\omega$ must be such that (2.10) assigns finite measure to sets bounded away from the origin. The argument on page 735 must consider $B_1$ and $B_2$ such that [at least one]{} is bounded away from zero, not just the case where both are bounded away from zero. The condition on $\omega$ ensures that the integral in the second-to-last line of page 735 is finite, which is obviously necessary.
The limit process in Theorem 3.4 should read $A(E(t)-)$. If $A(t)$ and $D(t)$ are dependent, this is a different process than $A(E(t))$. To clarify the argument, note that $$\label{mistake}\qquad
\lim_{h\downarrow0}\frac1hP\{A(s)\in M, s<E(t)\leq s+h\}= P\{A(s-)\in
M | E(t)=s\} p_t(s),$$ where $p_t$ is the density of $E(t)$, since $s<E(t)$ in the conditioning event. For an alternative proof, see Theorem 3.6 in Straka and Henry [@StrakaHenry]. Theorem 4.1 in [@coupleCTRW] gives the density of $A(E(t)-)$. Examples 5.2–5.6 in [@coupleCTRW] provide governing equations for the CTRW limit process $M(t)=A(E(t)-)$ in some special cases with simultaneous jumps. In particular, Example 5.5 considers the case where $Y_i=J_i$, so that $A(t)$ is a stable subordinator and $E(t)=\inf\{x>0: A(x)>t\}$ is its inverse or first-passage time process. The beta density for $A(E(t)-)$ given in that example agrees with the result in Bertoin [@bertoin], page 82. Note that here we have $A(E(t)-)<t$ and $A(E(t))>t$ almost surely for any $t>0$, by [@bertoin], Chapter III, Theorem 4.
---
abstract: 'This paper presents the results of a search for brown dwarfs in the Upper Scorpius Association using data from the UKIRT Infrared Deep Sky Survey (UKIDSS) Galactic Cluster Survey. Candidate young brown dwarfs were first chosen by their position in colour magnitude diagrams with further selection based on proper motions to ensure Upper Scorpius membership. Proper motions were derived by comparing UKIDSS and 2MASS data. Using that method we identify 19 new brown dwarfs in the southern part of the association. In addition there are up to 8 likely members with slightly higher dispersion velocity. The ratio of brown dwarfs to stars was found to be consistent with other areas in Upper Scorpius. It was also found to be similar to other results from young clusters with OB associations, and lower than those without, suggesting the brown dwarf formation rate may be a function of environment.'
author:
- |
P. Dawson$^{1}$[^1] and A. Scholz$^{1}$ and T.P. Ray$^{1}$\
$^{1}$School of Cosmic Physics, Dublin Institute for Advanced Studies, 31 Fitzwilliam Place, Dublin 2, Ireland
date: 'Accepted 2011 xxxxxxx xx. Received 2011 xxxxxxx xx; in original form 2011 April 25'
title: New brown dwarfs in the south part of the Upper Scorpius Association
---
\[firstpage\]
techniques: photometric – techniques: brown dwarfs – open clusters and associations: individual: Upper Scorpius – infrared: stars.
Introduction
============
The shape of the initial mass function (IMF) is well established and constrained for stars greater than 0.5$M_{\sun}$. In comparison, determining the low mass part of the IMF, particularly in the substellar regime below $\sim $0.08$M_{\sun}$, has proven more challenging. In this mass range, the mass function may be affected by turbulent fragmentation, dynamical interactions, fragmentation of massive disks, photo-erosion of cores or other processes (see reviews by @whi07 [@bon07]). Hence, in some theoretical scenarios, there could be wide variations in the form of the IMF below about 0.3$M_{\sun}$ depending on the environment. To test these ideas, it is essential to carry out surveys for brown dwarfs in diverse environments.
Brown dwarfs are difficult to observe because they cool down rapidly with age. After 1 Myr a very low-mass star near the hydrogen burning boundary will have a luminosity only slightly greater than that of the highest mass brown dwarf, but after 1 Gyr the brown dwarf will have a luminosity an order of magnitude below the star [@opp99]. Because of their low temperatures and luminosity, searching for them is best done in the near infra-red in nearby young open clusters and star forming regions. The availability of wide-field near-infrared surveys such as 2MASS and UKIDSS thus greatly facilitates searches for substellar objects.
Most nearby star forming regions have been searched for brown dwarfs over the past decade (see review by @luh07). In some clusters, the surveys have revealed a population of objects with masses below the Deuterium burning limit of 0.015$M_{\sun}$ [@zap00; @lar00]. The ratio between the number of low-mass stars (0.08-1.0$\,M_{\sun}$) and the number of brown dwarfs (0.03-0.08$\,M_{\sun}$) – an empirical constraint on the IMF – has been determined to be between 3.3 and 8.5 [@and08] and in one region 1.5$\pm$0.3 [@sch09]. This might be a first indication for environmental effects on the IMF.
As of today, brown dwarf surveys in star forming regions suffer from two problems: a) the surveys are incomplete at the low mass end, primarily due to strong and variable extinction in the molecular clouds; b) most nearby star forming regions are rather similar in their physical characteristics, for example, most of them do not harbour massive stars.
Here we report on a new brown dwarf survey in a part of the Upper Scorpius (hereafter UpSco) star forming region. UpSco is a favourable area for such a project, because it suffers from negligible extinction. At a distance of $145\pm2$pc [@dez99] it is the nearest OB association, and it represents our best chance of constraining the impact of massive stars on the formation of very low mass objects. With an age of about 5 Myr [@pre02] UpSco is the youngest part of the Scorpius Centaurus Association, i.e. it has one of the better combinations of proximity and youth for a successful brown dwarf search. UpSco is spread over approximately 250deg$^2$ of the sky but wide-field surveys now cover a significant portion of this area.
This paper analyses an infra-red 12deg$^2$ survey of part of UpSco, which lies roughly within R.A. 15h 40m to 16h 20m and Dec. -30 to -27 and is generally free of extinction with $A_{\rm V} < 2.0$ [@ard00]. Despite its youth (stars later than F type have yet to reach the main sequence), star formation appears to have finished in the association within 1 Myr of commencing [@pre08; @dez99], so all members of the association are coeval. In Section 2 details of the survey and the data obtained are presented. Section 3 describes the selection of young brown dwarf candidates from photometric and proper motion analysis. The results of this selection are used to analyse the IMF of Upper Scorpius in Section 4. Finally, in Section 5, our conclusions are drawn.
Survey And Data Sets
====================
The United Kingdom Infra-Red Telescope (UKIRT) is currently conducting the UKIRT Infrared Deep Sky Survey (UKIDSS), the results of which are being made available in a series of releases. This work used the 8th Data Release (DR8Plus).
![Coverage in Z, Y, J, H and K filters of 28deg$^2$ in Upper Scorpius from the UKIDSS GCS DR8Plus. Open circles mark the location of the 19 brown dwarf candidates found in this work. Open diamonds mark the position of spectroscopically confirmed brown dwarfs found in other studies [@mar04; @sle06; @lodieu08; @lodieu11]. The search method used in this survey of the southernmost area covered by UKIDSS was also applied to 10.5deg$^2$ covered by two areas to the north in order to test its reliability. The 26 brown dwarfs shown above were recovered.](mnras_uscobdfig1.eps "fig:"){width="45.00000%"}
UKIDSS is made up of several components but the one of interest here is the Galactic Cluster Survey (GCS). Described in detail in @lawrence07 the GCS is a survey of ten large open star clusters and star forming regions, including UpSco. One of the primary goals of the GCS is to conduct a census of very low mass brown dwarfs in order to investigate the form of the sub-stellar IMF.
The GCS takes infra-red images via five passband filters - Z, Y, J, H and K with effective wavelengths of $0.88\mu$m, $1.03\mu$m, $1.25\mu$m, $1.63\mu$m, and $2.20\mu$m respectively, and magnitude limits of Z=20.4, Y=20.3, J= 19.5, H=18.6 and K=18.6. The instrument used to take the images is the Wide Field Camera (WFCAM). Data collected by the WFCAM is subject to an automated process that detects and parameterises objects and performs photometric and astrometric calibrations. The resulting reduced image frames and catalogues are then placed in the WFCAM Science Archive (WSA). The WSA can be interrogated using Structured Query Language (SQL).
A set of five papers provide the reference technical documentation for UKIDSS. @cas07 presents technical details of the WFCAM, @hod09 describes the WFCAM photometric system, @ham08 describes the WSA and offers instruction on how to extract information from it using SQL. As previously mentioned @lawrence07 presents the details of the different UKIDSS surveys, including the GCS. The fifth paper [@irw09] will describe the details of the data reduction pipeline which is run by the Cambridge University Astronomical Survey Unit (CASU), but sufficient information for an overview of the data reduction pipeline can be gleaned from the other four papers and by referring to @dye06 and @war07.
As shown in figure 1, the area in UpSco investigated here and surveyed for DR8Plus covers 12deg$^2$. The data for objects in the target area were obtained via an SQL query (see Appendix A for a typical query) to the UKIDSS GCS database. All queries were structured to include only point source objects in order to avoid contamination by extended sources (e.g. relatively nearby galaxies). Objects in the WSA are given what is known as a discrete image classification, with point sources having values between -2 and -1. The lines in the query that require “passband” class values, e.g. zclass, of between -2 and -1 are designed to filter out extended sources. Note that requiring this value to be between -2 and -1 in every passband may exclude some sources with very low signal to noise ratios. As every object with photometric characteristics consistent with a brown dwarf had its proper motion assessed, in order to check whether it is likely a member of UpSco, each query submitted also correlated all objects found in the UKIRT GCS databases with those found in the 2MASS databases [@skrutskie06]. The 2MASS data are used as the first epoch for the purposes of proper motion calculation.
Selection Of Brown Dwarf Candidate Members Of Upper Scorpius Association
========================================================================
Photometry
----------
The query shown in Appendix A was submitted to the WSA. The query returned 282,938 objects and the colour magnitude diagrams shown in figure 2 were plotted. Known brown dwarfs from other studies [@mar04; @sle06; @lodieu08; @lodieu11] are shown as open diamonds and the 19 brown dwarf candidates found in this study are shown as open circles. Theoretical isochrones for 5 Myr old sub-stellar objects are also shown over-plotted on the diagrams. These isochrones are based on the DUSTY models derived by @cbah00 and obtained from both I. Baraffe (private communication) and N. Lodieu (private communication). The isochrones were computed by I. Baraffe using the UKIDSS filter profile. Reddening caused by extinction shifts the position of objects to the right and down on colour magnitude diagrams. Therefore all brown dwarf candidates should be either on or to the right hand side of the isochrones. The query limited selection to objects with magnitudes in Z greater than 14.0. This choice of a limiting magnitude was motivated in part by an examination of colour magnitude diagrams, including those in figure 2, which showed that at brighter magnitudes the isochrones for the young brown dwarf/very low mass star sequence of objects were no longer sufficiently distinct from other objects on the diagrams. Also, the DUSTY models of @cbah00 indicated that this choice would put an upper limit on the mass selected of 0.09$M_{\sun}$, massive enough to make sure of including 5 Myr old brown dwarfs at 145pc distant. Note that WFCAM Z is on a Vega system, so it is not directly comparable with the SDSS z magnitudes on the AB system.
It is evident from figure 2 that some colour magnitude diagrams show a much clearer separation between brown dwarfs and main sequence stars than others. The (Z-J,Z) colour magnitude diagram shows the separation clearly and was chosen as the basis for the photometric cut. Thus, to further refine the search, a new query was submitted to the WSA, eliminating all objects to the left of the line from (Z-J,Z) = (1.0, 14.0) through (1.4, 16.6) to (3.0, 21.55) (dashed line in figure 2). This query left 51 objects, which were examined again in the (Z-J,Z) colour magnitude diagram. 17 of the objects to the left of the line (Z-J,Z) = (1.1, 14.0) through (1.1, 14.3), (1.2, 14.9), (1.3, 15.2) to (1.6, 17.0) were rejected for being too far from the isochrone on the blue side, leaving 34 photometric candidates. Most of the candidates are slightly redder than predicted by the isochrones, which could be due to extinction or problems with the isochrones.
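The first selection cut amounts to a piecewise-linear boundary in the (Z-J, Z) plane. A small illustrative sketch (ours, using the node values quoted above; at a given Z magnitude, redder objects have larger Z-J and lie to the right of the line):

```python
def cut_zj(z):
    """Z-J colour of the selection line at magnitude Z, interpolated
    between the nodes (1.0, 14.0), (1.4, 16.6) and (3.0, 21.55)."""
    if z <= 16.6:
        return 1.0 + (z - 14.0) * (1.4 - 1.0) / (16.6 - 14.0)
    return 1.4 + (z - 16.6) * (3.0 - 1.4) / (21.55 - 16.6)

def keep_candidate(zj, z):
    """Retain objects fainter than Z = 14.0 lying on or redward of the cut."""
    return z >= 14.0 and zj >= cut_zj(z)
```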
Proper Motion
-------------
The 34 photometric candidates were then examined to find their proper motions. Proper motions were calculated using the query shown in Appendix A. The difference in position of the objects in the GCS and 2MASS catalogues is obtained in milliarcseconds. This is then divided by the difference between the two epochs, converted from Julian dates to years. The lines in the query that list their results “as pmRA” and “as pmDEC” perform this task. The resulting vector point diagram is shown in figure 3. The known proper motions of UpSco in right ascension and declination are about -11mas/yr and -25mas/yr respectively [@deb97; @pre98]. Of the 34 candidates, 1 was too faint to be recorded in 2MASS, leaving 33 candidates with calculated proper motions. The remaining 33 candidates included 6 with proper motions outside the range of figure 3 (Table 2). These 6 objects might be red or brown dwarfs located much closer to the Sun than UpSco. @deb99 notes that the velocity dispersion in UpSco is very small at 1.3km/s, corresponding to about 2mas/yr.
The greatest contribution to the spread in the proper motions therefore comes from errors in the UKIDSS and 2MASS measurements. To assess the errors, the original selection of 282,938 objects had their proper motions examined. The proper motions were found to have a normal distribution about the origin with a standard deviation of 10.2mas/yr in both right ascension and declination. Combining this error with the @deb99 figure noted above shows that a 2$\sigma$ selection circle for UpSco members would have a radius of 20.8mas/yr. This error has only a slight dependence on magnitude for objects with a magnitude in Z between 14.0 and 17.0. For objects fainter than this the standard deviation is $\approx$20mas/yr. There are 3 objects among the final 27 candidates with magnitudes in Z greater than 17.0.
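The proper-motion computation and the 2$\sigma$ membership test can be sketched as follows (illustrative Python, not the SQL of Appendix A; positions in degrees, epochs in modified Julian days, and the cos$\,\delta$ factor on right ascension is our assumption about the convention used):

```python
import numpy as np

MAS_PER_DEG = 3.6e6   # 3600 arcsec/deg * 1000 mas/arcsec

def proper_motion(ra1, dec1, mjd1, ra2, dec2, mjd2):
    """Proper motion in mas/yr from two epochs of positions (degrees)."""
    dt_yr = (mjd2 - mjd1) / 365.25
    pm_ra = (ra2 - ra1) * np.cos(np.radians(dec1)) * MAS_PER_DEG / dt_yr
    pm_dec = (dec2 - dec1) * MAS_PER_DEG / dt_yr
    return pm_ra, pm_dec

def is_upsco_member(pm_ra, pm_dec, centre=(-11.0, -25.0), radius=20.8):
    """2-sigma circular selection around the known UpSco motion."""
    return np.hypot(pm_ra - centre[0], pm_dec - centre[1]) <= radius
```

Note that a stationary background object at (0, 0) falls outside the circle, since $\sqrt{11^2+25^2}\approx 27.3 > 20.8$ mas/yr.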
All 27 candidates shown in figure 3 are centred predominantly around the (-11,-25) position. The 3 candidates with magnitudes in Z greater than 17.0 noted above are marked in red. There is no clustering of objects around the (0,0) position, indicating that the sample is not contaminated by more distant objects, e.g. AGB stars, which have similar surface temperatures and colours to brown dwarfs but much greater intrinsic luminosities. The 19 candidates within the 2$\sigma$ selection circle were then classified as members of UpSco. The objects so selected (Table 1) have the photometric and proper motion characteristics of 5 Myr old brown dwarf members of UpSco. Given that there are 19 objects within the 2$\sigma$ selection circle, statistically it is to be expected that perhaps 1 of the 8 objects outside it is also a brown dwarf member of UpSco. However, all 8 of the other objects shown in figure 3 are clustered immediately outside the 2$\sigma$ selection circle rather than scattered around the vector point diagram as would be expected for random contaminants. As these 8 objects span the same range of magnitudes as the 19 within the selection circle (Table 1), they are not subject to systematically larger proper motion errors caused by being fainter. Thus it is likely that these 8 objects are also members of UpSco with a slightly higher velocity dispersion. Finally, we note that none of the brown dwarf candidates listed here has been identified in previous surveys.
Given the low contamination in our sample with proper motions, the one object which is not detected in 2MASS (Table 1) also has a high likelihood of being a brown dwarf in UpSco (28/34, i.e. 82%).
Estimate of Contamination/Completeness
--------------------------------------
In order to be certain that the candidates found in this study are in fact brown dwarfs, spectra should be obtained. However, figure 3 indicates that there is negligible contamination from background stars in the sample.
As a further test of the method outlined above, it was also used to investigate the 10.5deg$^2$ area covered by UKIDSS in the two areas shown in the north of figure 1. This area is also part of UpSco [@dez99], and therefore any brown dwarf there will share the photometric and proper motion characteristics of those from the area to the south. After the analysis of the 10.5deg$^2$ area was complete, 49 objects in it were identified as possible brown dwarfs. All 49 had previously been identified by @lodieu07 [@lodieu08] as brown dwarf candidates. Spectra have been taken of 26 of the 49 objects [@mar04; @sle06; @lodieu08; @lodieu11], and all 26 have been confirmed as brown dwarfs. This result underlines the reliability of the method as a means of discriminating brown dwarfs from other objects in UpSco.
In order to estimate completeness levels in all passbands, the data for the original 282,938 objects were analysed. Objects were grouped in bins of 0.1 magnitude and examined to see where the numbers detected in each bin began to fall. The resulting estimates of 100% completeness were: Z=18.0, Y=17.4, J=17.2, H=16.2, K=16.1. In the Z and J passbands these are the expected magnitudes of UpSco members in the 0.01 - 0.02$M_{\sun}$ mass range. The histograms in all five passbands showed that completeness fell gradually at these magnitudes and was still at an 80% level a full magnitude deeper. Note that the lower mass limit achieved in this survey is set not by the completeness of UKIDSS but by that of 2MASS (J$\approx$16), owing to the need for proper motion measurements. Of the 34 photometric candidates, however, only one object was fainter than the sensitivity limit of 2MASS and did not have its proper motion calculated.
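The completeness estimate above is a histogram turnover test: source counts per 0.1-mag bin rise until the survey begins missing objects. A simplified sketch of this idea (real counts are noisy, so the paper's by-eye judgement is more robust than this first-drop rule; the function name is illustrative):

```python
from collections import Counter

def completeness_limit(mags, bin_width=0.1):
    """Magnitude at which per-bin source counts first turn over.

    Bins magnitudes into bin_width-wide bins and returns the centre of
    the first bin whose count falls below that of the preceding bin."""
    bins = Counter(round(m / bin_width) for m in mags)
    ordered = sorted(bins)
    for prev, cur in zip(ordered, ordered[1:]):
        if bins[cur] < bins[prev]:
            return cur * bin_width
    return ordered[-1] * bin_width
```

On a synthetic magnitude list whose counts rise steadily to Z = 18.0 and then collapse, the estimator returns the first declining bin, analogous to the Z = 18.0 completeness limit quoted in the text.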
------------------------- ------------- ------------- -------- -------- -------- -------- -------- ------------------------- ----------------
Name R.A. Dec. Z Mag. Y Mag. J Mag. H Mag. K Mag. $\mu_{\alpha}cos\delta$ $\mu_{\delta}$
mas/yr mas/yr
2MASSJ15582376-2721435 15:58:23.76 -27:21:43.7 14.35 13.72 13.07 12.60 12.22 -17.20 -19.05
2MASSJ16090168-2740521 16:09:01.68 -27:40:52.3 14.33 13.60 12.86 12.35 11.89 -8.75 -17.46
2MASSJ16035573-2738248 16:03:55.73 -27:38:25.1 15.19 14.48 13.80 13.28 12.88 -12.64 -28.09
2MASSJ15585793-2758083 15:58:57.93 -27:58:08.5 15.13 14.45 13.81 13.31 12.93 -9.46 -20.79
2MASSJ15531698-2756369 15:53:16.98 -27:56:37.2 15.53 14.68 13.96 13.45 13.04 -14.61 -33.48
2MASSJ15551960-2751207 15:55:19.59 -27:51:21.0 15.59 14.77 14.02 13.51 13.11 -19.02 -39.21
2MASSJ15501958-2805237 15:50:19.58 -28:05:23.9 16.04 15.27 14.56 14.05 13.66 -5.02 -22.51
2MASSJ15583403-2803243 15:58:34.03 -28:03:24.5 15.21 14.46 13.72 13.17 12.73 -3.61 -20.20
2MASSJ16005265-2812087 16:00:52.66 -28:12:09.0 15.03 14.27 13.57 13.04 12.66 -0.38 -21.28
2MASSJ15492909-2815384 15:49:29.08 -28:15:38.6 14.29 13.62 12.96 12.47 12.06 -14.25 -22.20
2MASSJ15493660-2815141 15:49:36.59 -28:15:14.3 14.66 14.02 13.39 12.91 12.52 -11.87 -24.36
2MASSJ16192399-2818374 16:19:23.99 -28:18:37.5 15.29 14.52 13.78 13.31 12.90 -5.33 -8.51
2MASSJ15490803-2839550 15:49:08.02 -28:39:55.2 14.82 14.23 13.60 13.09 12.72 -19.86 -22.31
2MASSJ15485777-2837332 15:48:57.76 -28:37:33.4 17.99 16.79 15.80 15.20 14.55 -17.24 -17.39
2MASSJ16195827-2832276 16:19:58.26 -28:32:27.8 18.74 17.29 16.18 15.39 14.74 -27.34 -22.75
2MASSJ15544486-2843078 15:54:44.85 -28:43:07.9 15.51 14.79 14.12 13.61 13.22 -15.80 -12.59
2MASSJ15591513-2840411 15:59:15.12 -28:40:41.3 14.13 13.56 12.96 12.49 12.15 -11.40 -15.19
2MASSJ16062870-2856580 16:06:28.70 -28:56:58.2 14.90 14.21 13.52 13.01 12.63 -7.91 -16.46
2MASSJ16101316-2856308 16:10:13.15 -28:56:31.0 15.67 14.81 14.06 13.54 13.11 -10.36 -18.99
2MASSJ16051544-2802520 16:05:15.44 -28:02:52.0 17.92 16.66 15.69 15.05 14.47 -7.11 -3.30
2MASSJ15552513-2801085 15:55:25.11 -28:01:08.8 14.12 13.51 12.88 12.47 12.01 -38.10 -26.24
2MASSJ15502934-2835535 15:50:29.32 -28:35:53.9 16.05 15.32 14.59 14.03 13.63 -36.07 -33.63
2MASSJ16190983-2831390 16:19:09.82 -28:31:39.5 16.63 15.67 14.70 14.17 13.67 -13.41 -47.04
2MASSJ16035601-2743335 16:03:56.00 -27:43:33.6 14.41 13.91 13.26 12.64 12.29 -19.94 -3.33
2MASSJ16145936-2826214 16:14:59.37 -28:26:21.8 14.68 14.12 13.50 12.83 12.48 +10.75 -48.05
2MASSJ15551768-2856579 15:55:17.70 -28:56:58.1 14.32 13.80 13.19 12.66 12.33 +14.02 -13.91
2MASSJ15504920-2900030 15:50:49.19 -29:00:03.1 14.35 13.82 13.21 12.64 12.34 -32.06 -10.49
UGCSJ154723.32-272907.3 15:47:23.33 -27:29:07.3 19.27 17.91 16.76 15.97 15.40 —– —–
------------------------- ------------- ------------- -------- -------- -------- -------- -------- ------------------------- ----------------
The substellar population in Upper Scorpius
===========================================
We re-examined the 19 high probability members listed in Table 1 in the (Z-J,Z) colour magnitude diagram and assigned masses based on their Z band magnitude. They cover a mass range from 0.01 to 0.09$M_{\sun}$, 7 are below 0.03$M_{\sun}$ and 2 of those are below 0.02$M_{\sun}$. The 100% completeness limit in the Z and J passbands corresponds to a mass of less than 0.02$M_{\sun}$. At 0.01$M_{\sun}$ the photometric survey is still at least 80% complete.
@and08 performed a combined analysis of the low-mass IMF in seven star forming regions, not including UpSco. The method used a ratio of stars with masses 0.08 - 1.0$M_{\sun}$ to brown dwarfs with masses 0.03 - 0.08$M_{\sun}$ (30 - 80$M_{\rm J}$) from each region to allow for direct comparison.
In order to follow that method, a new query was submitted to the WSA to extract UpSco member stars in the mass range 0.09 - 1.0$M_{\sun}$. This new query returned 11,041 objects which were then examined in a (Z-J,Z) colour magnitude diagram. Unlike the brown dwarfs, the region the UpSco stars occupy in the colour magnitude diagram is less distinct from the main sequence. The isochrones used to guide this selection of UpSco members were the DUSTY isochrone used previously and the NextGen isochrone also derived from the models of @cbah00. Initially, all objects to the left of a line from (0.75,10.32) to (1.17,14.15) were eliminated from consideration.
The remaining objects then had their proper motions examined in the vector point diagram shown in figure 4. The UpSco cluster motion is clearly identifiable but the selection is not as free from contamination as the brown dwarf selection was. To estimate the contamination, the numbers of objects contained within selection circles of the same size centred on the points (11,25), (25,-11) and (-25,11) were counted (see figure 4). A total of 22 objects were found in the three circles, so the number of contaminants in the original selection circle was estimated at 7. This left a sample of 38 members of UpSco within the mass range 0.09 - 1.0$M_{\sun}$, extracted from an initial selection of 11,041 objects. Among the brown dwarfs in Table 1, 11 are judged to be in the range 0.03 - 0.08$M_{\sun}$. Note that the brown dwarf numbers are taken from the 19 objects within the 2$\sigma$ selection circle only, to ensure that both star and brown dwarf numbers in the ratios are arrived at using the same criteria. The ratio of low-mass stars to brown dwarfs in the selected mass range was thus found to be 38/11 = 3.5$_{-1.3}^{+2.0}$ (errors are Poissonian).
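The control-circle decontamination above can be sketched directly: count objects in the cluster circle, then subtract the mean count in same-sized circles placed at reflected positions in the vector point diagram. A minimal illustration with the centres quoted in the text (helper names are illustrative):

```python
import math

def count_in_circle(pms, centre, radius=20.8):
    """Number of (pm_ra, pm_dec) points within `radius` mas/yr of `centre`."""
    return sum(1 for (pra, pdec) in pms
               if math.hypot(pra - centre[0], pdec - centre[1]) <= radius)

def decontaminated_members(pms, cluster_pm=(-11.0, -25.0), radius=20.8):
    """Cluster-circle count minus the mean count in three control circles
    centred at (11,25), (25,-11) and (-25,11), as done in the text."""
    controls = [(11.0, 25.0), (25.0, -11.0), (-25.0, 11.0)]
    n_cluster = count_in_circle(pms, cluster_pm, radius)
    n_background = sum(count_in_circle(pms, c, radius) for c in controls)
    return n_cluster - round(n_background / len(controls))
```

On a synthetic sample with 10 points at the cluster motion and 2 at each control centre, the estimator returns 10 - 2 = 8 members.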
The same analysis was then applied to the 10.5deg$^2$ to the north where the 49 brown dwarf candidates had been identified. 26 objects were assigned masses between 0.03 and 0.08$M_{\sun}$. Stars in the mass range 0.08 - 1.0$M_{\sun}$ deemed UpSco members on the basis of their photometry and proper motion, as shown in figure 4, numbered 102. The ratio of stars to brown dwarfs in those areas was therefore 102/26 = 3.9$_{-0.9}^{+1.4}$, consistent with the south. Overall, the combined figures give a ratio of 140/37 = 3.8$_{-0.8}^{+1.1}$ (again, errors are Poissonian).
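The quoted ratios carry Poissonian errors on small counts, which are asymmetric. A simple first-order (symmetric) propagation reproduces the magnitude of the uncertainty, though not the asymmetry of the exact Poisson intervals quoted in the text; this sketch assumes nothing beyond counting statistics.

```python
import math

def star_bd_ratio(n_stars, n_bds):
    """Star-to-brown-dwarf ratio with symmetric first-order Poisson errors.

    Each count N carries sqrt(N) uncertainty; the relative errors add
    in quadrature. Exact Poisson intervals on small counts (as quoted
    in the text) are asymmetric around this value."""
    r = n_stars / n_bds
    err = r * math.sqrt(1.0 / n_stars + 1.0 / n_bds)
    return r, err

# The three samples from the text: south, north, and combined.
for n_s, n_b in [(38, 11), (102, 26), (140, 37)]:
    r, err = star_bd_ratio(n_s, n_b)
    print(f"{n_s}/{n_b} = {r:.1f} +/- {err:.1f}")
```

For the southern sample this gives 3.5 with a symmetric error of about 1.2, bracketed by the asymmetric $_{-1.3}^{+2.0}$ interval quoted above.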
These numbers can be compared with the ratios published by @and08 [their Table 1]. From their sample of 7 clusters, we exclude the Pleiades because it is significantly older than all other regions and Mon R2 which has a small population of members causing large statistical uncertainties. Only two of the remaining clusters belong to OB associations (ONC, NGC2024), and they feature the lowest star/brown dwarf ratios in the sample (3.3 and 3.8). These values are similar to the ones derived for the OB association UpSco (3.5 and 3.9, see above). In contrast the clusters without OB associations (Chamaeleon, Taurus, IC348) in the sample of @and08 have higher star to brown dwarf ratios of 4.0, 6.0 and 8.3, respectively.
Thus, based on the current data it seems possible that the presence of OB stars is related to a low star to brown dwarf ratio, i.e. a higher abundance of brown dwarfs. This could be a sign that the radiation field of OB stars favours the formation of brown dwarfs. It has been suggested that substellar and planetary-mass objects can be formed via photoerosion of cores by the ionizing radiation from an OB star [@waz04]. At face value this would provide an additional formation channel for brown dwarfs in OB associations, lowering the star to brown dwarf ratio. Given the substantial uncertainties in these ratios, this conclusion is certainly preliminary and needs to be substantiated by future surveys.
------------------------ ------------- ------------- -------- -------- -------- -------- -------- ------------------------- ----------------
Name R.A. Dec. Z Mag. Y Mag. J Mag. H Mag. K Mag. $\mu_{\alpha}cos\delta$ $\mu_{\delta}$
mas/yr mas/yr
2MASSJ15583064-2802357 15:58:30.63 -28:02:36.3 16.53 15.75 14.99 14.47 14.03 -28.27 -66.30
2MASSJ16035915-2806086 16:03:58.76 -28:06:09.6 16.78 15.82 15.02 14.46 14.00 -2.19 -115.84
2MASSJ15530915-2828366 15:53:09.13 -28:28:37.3 14.81 14.18 13.51 13.00 12.63 -46.20 -75.62
2MASSJ15492508-2843527 15:49:24.85 -28:43:51.6 14.15 13.49 12.84 12.39 12.02 -384.23 +138.80
2MASSJ16151681-2907007 16:15:16.28 -29:07:01.3 14.10 13.40 12.86 12.23 11.96 +15.06 -60.37
2MASSJ15481934-2748512 15:48:19.30 -27:48:51.3 19.77 17.56 16.73 16.15 15.83 -74.00 -8.34
------------------------ ------------- ------------- -------- -------- -------- -------- -------- ------------------------- ----------------
Conclusions
===========
We have carried out a survey for brown dwarfs in the 5Myr old UpSco star forming region based on photometry and proper motions from a combination of the UKIDSS Galactic Cluster Survey and 2MASS. 19 new substellar objects with estimated masses between 0.01 and 0.09$M_{\sun}$ are identified. These objects are located in the southern part of the association, which has not been covered by previous brown dwarf surveys. 8 other objects with slightly larger proper motions have also been identified; these may be UpSco members with a slightly higher velocity dispersion than the stellar members. Although spectroscopic confirmation has not yet been obtained, the level of contamination appears negligible.
The ratio of stars to brown dwarfs in the South of UpSco was found to be 3.5$_{-1.3}^{+2.0}$, in the same range as elsewhere in UpSco. Comparing with literature findings, young clusters with OB associations tend to have lower ratios than clusters without OB stars, which might indicate that brown dwarf formation is a function of environment.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors would like to thank Nicolas Lodieu of the Instituto de Astrofisica de Canarias and Isabelle Baraffe of the Centre de Recherche Astrophysique de Lyon for supplying model data. This work was supported by the Science Foundation Ireland within the Research Frontiers Programme under grant no. 10/RFP/AST2780. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. We would also like to thank the UKIDSS Team for the excellent database they have made available to the community.
[99]{}
Andersen M., Meyer M. R., Greissl J., Aversa A., 2008, ApJ, 683, L183.
Ardila D., Martín E., Basri G., 2000, AJ, 120, 479.
Barrado y Navascues D., Bouvier J., Stauffer J.R., Lodieu N., McCaughrean M.J., 2002, A&A, 395, 813B.
Bejar V.J.S., Martin E.L., Zapatero Osorio M.R., Rebolo R., Barrado y Navascues D., Bailer-Jones C.A.L., Mundt R., Baraffe I., Chabrier G., Allard F., 2001, ApJ, 556, 830.
Bonnell I.A., Larson R.B., Zinnecker H., 2007, in Protostars and Planets V, ed. B. Reipurth, D. Jewitt, K. Keil (Tucson: Univ. Arizona Press), 149.
Casali M., Adamson A., Alves de Oliveira C., Almaini O., Burch K., Chuter T., Elliot J., Folger M., Foucaud S., Hambly N., Hastie M., Henry D., Hirst P., Irwin M., Ives D., Lawrence A., Laidlaw K., Lee D., Lewis J., Lunney D., McLay S., Montgomery D., Pickup A., Read M., Rees N., Robson I., Sekiguchi K., Vick A., Warren S., Woodward B., 2007, A&A, 467, 777.
Chabrier G., Baraffe I., Allard F., Hauschildt P., 2000, ApJ, 542, 464.
de Bruijne J.H.J., Hoogerwerf R., Brown A.G.A., Aguilar L.A., de Zeeuw P.T., 1997, in ESA SP-402: Hipparcos - Venice ’97 Improved Methods for Identifying Moving Groups. pp 575-578
de Bruijne J.H.J., 1999, MNRAS, 310, 585.
de Zeeuw P.T., Hoogerwerf R., de Bruijne J.H.J., Brown A.G.A., Blaauw A., 1999, AJ, 117, 354.
Dye S., Warren S.J., Hambly N.C., Cross N.J.G., Hodgkin S.T., Irwin M.J., Lawrence A., Adamson A.J., Almaini O., Edge A.C., Hirst P., Jameson R.F., Lucas P.W., van Breukelen C., Bryant J., Casali M., Collins R.S., Dalton G.B., Davies J.I., Davis C.J., Emerson J.P., Evans D.W., Foucaud S., Gonzales-Solares E.A., Hewett P.C., Kendall T.R., Kerr T.H., Leggett S.K., Lodieu N., Loveday J., Lewis J.R., Mann R.G., McMahon R.G., Mortlock D.J., Nakajima Y., Pinfield D.J., Rawlings M.G., Read M.A., Riello M., Sekiguchi K., Smith A.J., Sutorius E.T.W., Varricatt W., Walton N.A., Weatherley S.J., 2006, MNRAS, 372, 1227.
Hambly N.C., Collins R.S., Cross N.J.G., Mann R.G., Read M.A., Sutorius E.T.W., Bond I., Bryant J., Emerson J.P., Lawrence A., Rimoldini L., Stewart J.M., Williams P.M., Adamson A., Hirst P., Dye S., Warren S.J., 2008, MNRAS, 384, 637.
Hennebelle P., Chabrier G., 2008, ApJ, 684, 395.
Hodgkin S.T., Irwin M.J., Hewett P.C., Warren S.J., 2009, MNRAS, 394, 675.
Irwin M.J. et al., in preparation.
Kroupa, P., 2001, MNRAS, 322, 231.
Kroupa, P., 2002, Science, 295, 82.
Lawrence A., Warren S.J., Almaini O., Edge A.C., Hambly N.C., Jameson R.F., Lucas P., Casali M., Adamson A., Dye S., Emerson J.P., Foucaud S., Hewett P., Hirst P., Hodgkin S.T., Irwin M.J., Lodieu N., McMahon R.G., Simpson C., Smail I., Mortlock D., Folger M., 2007, MNRAS, 379, 1599.
Lodieu N., Hambly N.C., Jameson R.F., Hodgkin S.T., Carraro G., Kendall T.R., 2007, MNRAS, 374, 372.
Lodieu N., Hambly N.C., Jameson R.F., Hodgkin S.T., 2008, MNRAS, 383, 1385.
Lodieu N., Dobbie P.D., Hambly N.C., 2011, A&A, 527A, 24L.
Lucas P. W., Roche P. F., 2000, MNRAS, 314, 858L.
Luhman K. L., Joergens V., Lada C., Muzerolle J., Pascucci I., White R., 2007, in Protostars and Planets V, ed. B. Reipurth, D. Jewitt, K. Keil (Tucson: Univ. Arizona Press), 443.
Martin E.L., Delfosse X., Guieu S., 2004, AJ, 127, 449.
Oliveira J.M., Jeffries R.D., van Loon J. Th., 2009, MNRAS, 392, 1034.
Oppenheimer B.R., Kulkarni S.R., Stauffer J. R., 1999, in Mannings V., Boss A., Russell S., eds, Protostars and Planets IV, Tucson: Univ. Arizona Press.
Preibisch T., Guenther E., Zinnecker H., Sterzik M., Frink S., Roser S., 1998, A&A, 333, 619.
Preibisch T., Brown A.G.A., Bridges T., Guenther E., Zinnecker H., 2002, AJ, 124, 404.
Preibisch T., Mamajek E., 2008, in Handbook of Star Forming Regions, Vol, II, ed B. Reipurth (San Francisco, CA: ASP), 235.
Scholz A., Geers V., Jayawardhana R., Fissel L., Lee E., Lafreniere D., Tamura M., 2009, ApJ, 702, 805S.
Skrutskie M.F., Cutri R.M., Stiening R., Weinberg M.D., Schneider S., Carpenter J.M., Beichman C., Capps R., Chester T., Elias J., Huchra J., Liebert J., Lonsdale C., Monet D.G., Price S., Seitzer P., Jarrett T., Kirkpatrick J.D., Gizis J., Howard E., Evans T., Fowler J., Fullmer L., Hurt R., Light R., Kopan E.L., Marsh K.A., McCallon H.L., Tam R., Van Dyck S., Wheelock S., 2006, AJ, 131, 1163.
Slesnick C.L., Carpenter J.M., Hillenbrand L.A., 2006, AJ, 131, 3016.
Tej A., Sahu K.C., Chandrasekhar T., Ashok N.M., 2002, ApJ, 578, 523.
Thies I., Kroupa P., 2008, MNRAS, 390, 1200.
Warren S.J., Hambly N.C., Dye S., Almaini O., Cross N.J.G., Edge A.C., Foucaud S., Hewett P.C., Hodgkin S.T., Irwin M.J., Jameson R.F., Lawrence A., Lucas P.W., Adamson A.J., Bandyopadhyay R.M., Bryant J., Collins R.S., Davis C.J., Dunlop J.S., Emerson J.P., Evans D.W., Gonzales-Solares E.A., Hirst P., Jarvis M.J., Kendall T.R., Kerr T.H., Leggett S.K., Lewis J.R., Mann R.G., McLure R.J., McMahon R.G., Mortlock D.J., Rawlings M.G., Read M.A., Riello M., Simpson C., Smith D.J.B., Sutorius E.T.W., Targett T.A., Varicatt W.P., 2007, MNRAS, 375, 213.
Weidner C., Kroupa P., 2006, MNRAS, 365, 1333.
Whitworth A., Zinnecker H., 2004, A&A, 427, 299.
Whitworth A., Bate M.R., Nordlund A., Reipurth B., Zinnecker H., 2007, in Protostars and Planets V, ed. B. Reipurth, D. Jewitt, K. Keil (Tucson: Univ. Arizona Press), 459.
Zapatero Osorio M. R., Bejar V.J.S., Martin E.L., Rebolo R., Barrado y Navascues D., Bailer-Jones, C.A.L., Mundt R., 2000, Science, 290, 103.
Sample SQL Query
================
Shown below is the SQL query submitted to the WSA to find the first set of sources in the Upper Scorpius association. The query returned 282,938 rows of data.
Select\
  g.ra, g.dec, zmypnt, ymjpnt, jmhpnt, hmk\_1pnt,\
  zapermag3, yapermag3, japermag3, hapermag3, k\_1apermag3,\
  3.6e6\*cos(radians(g.dec))\*(g.ra-T2.ra)/((mj.mjdobs - T2.jdate+2400000.5)/365.25) as pmRA,\
  3.6e6\*(g.dec-T2.dec)/((mj.mjdobs - T2.jdate+2400000.5)/365.25) as pmDEC\
From gcsmergelog as I, multiframe as mj,\
  (Select t.ra as ra, t.dec as dec, x.slaveobjid as slaveobjid, x.masterobjid as masterobjid,\
    t.j\_m, t.h\_m, t.k\_m, t.jdate\
  From gcssourcextwomass\_psc as x, twomass..twomass\_psc as t\
  Where x.slaveobjid=t.pts\_key\
  And distancemins In (Select Min(distancemins) From gcssourcextwomass\_psc\
    Where masterobjid=x.masterobjid)) As T2\
  Right Outer Join gcssource As g On (g.sourceid=T2.masterobjid)\
Where (g.ra Between 235.0 And 245.0) And (g.dec Between -30.0 And -27.0)\
  And zapermag3 > 14.0 And yapermag3 > 11.5 And japermag3 > 12.0\
  And hapermag3 > 10.0 And k\_1apermag3 > 9.5\
  And zxi Between -1.0 And +1.0 And yxi Between -1.0 And +1.0\
  And jxi Between -1.0 And +1.0 And hxi Between -1.0 And +1.0\
  And k\_1xi Between -1.0 And +1.0\
  And zeta Between -1.0 And +1.0 And yeta Between -1.0 And +1.0\
  And jeta Between -1.0 And +1.0 And heta Between -1.0 And +1.0\
  And k\_1eta Between -1.0 And +1.0\
  And zclass Between -2 And -1 And yclass Between -2 And -1\
  And jclass Between -2 And -1 And hclass Between -2 And -1\
  And k\_1class Between -2 And -1\
  And (priorsec = 0 Or priorsec = g.framesetid)\
  And g.framesetid=I.framesetid And I.jmfid=mj.multiframeid
[^1]: E-mail: dawsonp@tcd.ie (PD); aleks@cp.dias.ie (AS); tr@cp.dias.ie (TR)
**[$\epsilon$]{}-CONVERTIBILITY OF ENTANGLED STATES AND EXTENSION**
**OF SCHMIDT RANK IN INFINITE-DIMENSIONAL SYSTEMS**
Masaki Owari
*Collaborative Institute for Nano Quantum Information Electronics, The University of Tokyo[^1]*
*Tokyo 113-0033, Japan*
Samuel L. Braunstein
*Computer Science, University of York*
*York YO10 5DD, United Kingdom*
Kae Nemoto
*National Institute of Informatics*
*Tokyo 101-8430, Japan*
Mio Murao
*Department of Physics, The University of Tokyo*
*Tokyo 113-0033, Japan*
*PRESTO, JST*
*Kawaguchi, Saitama 332-0012, Japan*
Introduction
============
Entanglement is one of the central topics in quantum information, having both physical and information-scientific aspects. In particular, entanglement embodies the quantum non-local correlations that have long been of interest in physics [@epr], and it also acts as a resource for quantum information processing [@resource; @quantum; @communication]. Thus, the characterization of the entanglement of physical systems is important from both the physical and the information-scientific viewpoint.
Mathematically, physical systems can be divided into two classes: finite-dimensional systems, which can be treated in the framework of conventional linear algebra, and infinite-dimensional systems, which must be treated in the framework of functional analysis [@neumann; @functional; @analysis]. It is therefore worthwhile to know whether this mathematical difference leads to an essential difference in the properties of entanglement in the two kinds of system. Indeed, answering this question may clarify the essential difference between the physics of finite-dimensional systems and the physics of infinite-dimensional systems from the viewpoint of non-local correlations. Moreover, from the information-theoretic viewpoint, if such a difference exists, there may be information processing tasks achievable in infinite-dimensional systems that cannot be achieved in finite-dimensional systems.
In this paper, we mainly focus on seeking a difference between the properties of entanglement of finite-dimensional systems and those of infinite-dimensional systems. Since much work on the characterization of bipartite entangled states has been done in finite-dimensional systems [@bennett; @majorization; @vidal; @monotone], we concentrate our efforts on the characterization of bipartite entangled states in infinite-dimensional systems, and try to find a difference in the properties of their entanglement.
So far, research on the characterization of entanglement in infinite dimensions has taken the form of separability criteria [@separability], Gaussian LOCC convertibility [@gaussian; @gaussian; @distill], and entanglement measures [@entanglement; @measure]. The separability criteria give a way of judging whether or not a given state is entangled. Gaussian LOCC convertibility gives the detailed structure of the strength of entanglement of Gaussian states. Entanglement measures give an approximate strength of entanglement in the asymptotic limit of infinitely many copies.
The above research is mainly concerned with Gaussian states and Gaussian operations, and properties unique to infinite-dimensional entanglement do not appear clearly in this regime. Therefore, in order to find a property unique to infinite-dimensional entanglement, it is important to investigate the strength of entanglement more precisely for a broader class of states and operations.
The strength of entanglement is defined by means of the convertibility between entangled states under local operations, *e.g.*, local operations and classical communication (LOCC), stochastic-LOCC (SLOCC), or the positive partial transpose (PPT) operation [@LOCC; @uniqueness]. Among such local operations we mainly focus in this paper on SLOCC, and investigate the SLOCC convertibility of entangled states in infinite-dimensional systems without any assumption for states or operations to find a unique property of entanglement in infinite-dimensional systems.
When we consider SLOCC convertibility for infinite-dimensional systems, there are at least two difficulties, namely, the problem of continuity and the problem of a potentially infinite cost for classical communication. In order to avoid such difficulties, we propose a new definition of state convertibility that we call $\epsilon$-convertibility. We define $\epsilon$-convertibility as the convertibility of states in an approximated setting by means of the trace norm. Then, within the framework of $\epsilon$-convertibility, we investigate SLOCC convertibility in infinite-dimensional systems and show a fundamental difference of SLOCC convertibility between infinite and finite-dimensional systems.
The paper is organized as follows: in section \[infinite Nielsen\], we define the $\epsilon$-convertibility of LOCC and SLOCC, and show how to avoid the problems of discontinuity and infinitely-costly classical communication. Then, within the framework of $\epsilon$-convertibility, we give the infinite-dimensional extensions and proofs of Nielsen’s and Vidal’s theorems, which give the necessary and sufficient conditions of LOCC and SLOCC convertibility, respectively. In section \[Extension\], we first define monotones (monotonic functions) of SLOCC convertibility, which can be considered an extension of the Schmidt rank to infinite-dimensional systems; then, by means of these monotones, we investigate SLOCC convertibility for infinite-dimensional systems. We show that the cardinal number of the quotient set of states by SLOCC convertibility is greater than or equal to the cardinal number of the continuum, and also show that however many (finite) copies of exponentially-damped states (states with exponentially damped Schmidt coefficients) there are, they cannot be converted into even a single copy of a polynomially-damped state (a state with polynomially-damped Schmidt coefficients). Such properties do not exist in finite-dimensional systems and are genuinely unique to infinite-dimensional systems.
$\epsilon$-convertibility {#infinite Nielsen}
=========================
In this paper we consider the bipartite infinite-dimensional system ${\mathcal{H}}= {\mathcal{H}}_A \otimes {\mathcal{H}}_B$ where $\dim {\mathcal{H}}_A = \dim {\mathcal{H}}_B = \infty$ and we shall assume that ${\mathcal{H}}_A$ and ${\mathcal{H}}_B$ are separable. By ${\mathfrak{B}}({\mathcal{H}})$ we denote the Banach space of all bounded operators on ${\mathcal{H}}$. If we use the term LOCC, we will always assume that operations succeed *with unit probability* [@uniqueness]. On the other hand we use the term SLOCC in the case where operations succeed with a finite probability less than unity. For simplicity we use at most countably infinite POVMs as the elements of an LOCC (or SLOCC), $\{A_i\}_{i=1}^{\infty}$, $A_i\in{\mathfrak{B}}({\mathcal{H}})$, with $\sum_{i\in \mathbb{N}}A_i^{\dagger}A_i = I$ (or $\leq I$ for SLOCC), the sum converging ultra-weakly.
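The completeness condition $\sum_i A_i^{\dagger} A_i = I$ on the measurement operators can be illustrated in a finite-dimensional toy case. The amplitude-damping Kraus pair used here is a standard textbook example, not an operation from this paper; with real entries the adjoint reduces to the transpose.

```python
import math

def matmul(a, b):
    """Plain nested-list matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

p = 0.3  # damping probability
A0 = [[1.0, 0.0], [0.0, math.sqrt(1 - p)]]  # amplitude-damping Kraus operators
A1 = [[0.0, math.sqrt(p)], [0.0, 0.0]]

# Accumulate sum_i A_i^dagger A_i; real entries, so dagger = transpose.
total = [[0.0, 0.0], [0.0, 0.0]]
for A in (A0, A1):
    AdA = matmul(transpose(A), A)
    total = [[total[i][j] + AdA[i][j] for j in range(2)] for i in range(2)]
```

Here `total` equals the identity, so the pair describes a trace-preserving (LOCC-type) operation; dropping one operator leaves $\sum_i A_i^{\dagger} A_i \leq I$, the SLOCC case.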
$\epsilon$-convertibility for LOCC and SLOCC {#epsilon}
--------------------------------------------
As mentioned in the introduction, in giving a detailed discussion of SLOCC convertibility in infinite-dimensional systems one faces at least two difficulties, namely, discontinuity and a potentially infinite classical communication cost. In this subsection we define $\epsilon$-convertibility and see that it allows us to avoid the difficulty of discontinuity. The other difficulty is addressed in the following subsection.
In infinite-dimensional systems we cannot deny the possibility that ${{\left \vert}\Psi {\right \rangle}}$ is SLOCC convertible to any neighborhood of ${{\left \vert}\Phi {\right \rangle}}$ (in terms of strong, or weak topology), but not to ${{\left \vert}\Phi {\right \rangle}}$ itself. To avoid such a discontinuity, when considering convertibility among genuinely infinite-dimensional states (i.e., states with infinitely many non-zero Schmidt coefficients), we shall identify these neighborhoods with the state itself. To achieve this we shall extend the definition of LOCC and SLOCC convertibility to satisfy the above requirement. Mathematically, we redefine LOCC convertibility as follows: ${{\left \vert}\Psi {\right \rangle}}$ can be converted to ${{\left \vert}\Phi {\right \rangle}}$ by LOCC, if and only if for any neighborhood of ${{\left \vert}\Phi {\right \rangle}}$, there exists an LOCC operation by which ${{\left \vert}\Psi {\right \rangle}}$ is transformed to a state in the neighborhood of ${{\left \vert}\Phi {\right \rangle}}$. We call this new definition of convertibility $\epsilon$-convertibility. Below we rigorously define $\epsilon$-convertibility for LOCC, then we show that this definition recovers the continuity of convertible probability at least with some suitable weak meaning.
Before we give the definition of $\epsilon$-convertibility, we need to choose a topology for the convergence used in the definition. It is well known that in infinite-dimensional systems there are many different topologies defined by their associated norms, so we need to take care in choosing our ‘distance’. Since we introduced $\epsilon$-convertibility because of the fundamental impossibility of discriminating a state ${{\left \vert}\Phi {\right \rangle}}$ from states within infinitely small neighborhoods of ${{\left \vert}\Phi {\right \rangle}}$, the distance we consider needs to reflect this difficulty of discrimination. We can easily see that the trace norm possesses such a property, as follows. Suppose $M$ is an arbitrary POVM element and $\lim_{n\rightarrow\infty}\|\rho-\rho_n\|_{\rm tr}=0$. Then $\lim_{n \rightarrow \infty} | {\mathrm{tr}}\; \rho M - {\mathrm{tr}}\; \rho_n M| \le \lim_{n \rightarrow \infty} \| \rho - \rho_n \|_{\rm tr}\|M\|_{\rm op} = 0$, where $\| \cdot \|_{\rm op}$ is the operator norm. Thus, for all measurements the resulting probability distributions for $\rho_n$ converge to the resulting probability distribution for $\rho$. That is, if $\rho_n$ converges to $\rho$ in the trace norm, there is no way to discriminate $\rho$ from $\rho_n$ for sufficiently large $n$. But this is just the property required of the distance needed to deal with the discontinuity difficulty in the definition of $\epsilon$-convertibility. Therefore, we shall use the trace norm as our distance measure for $\epsilon$-convertibility.
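The bound $|{\rm tr}\,\rho M - {\rm tr}\,\rho_n M| \le \|\rho-\rho_n\|_{\rm tr}\,\|M\|_{\rm op}$ can be checked numerically in a toy commuting (diagonal) case, where the trace norm reduces to the sum of absolute eigenvalue differences and the operator norm to the largest eigenvalue. The states and POVM element below are illustrative choices, not taken from the paper.

```python
def check_bound(n):
    """Probability gap vs. trace-norm bound for diagonal 2x2 operators.

    rho = |0><0|; rho_n mixes a 1/n fraction of |1><1| into rho, so
    ||rho - rho_n||_tr = 2/n and rho_n -> rho in trace norm."""
    rho   = [1.0, 0.0]               # eigenvalues on the diagonal
    rho_n = [1.0 - 1.0 / n, 1.0 / n]
    M     = [0.2, 0.9]               # a POVM element with 0 <= M <= I
    prob_gap = abs(sum(r * m for r, m in zip(rho, M))
                   - sum(r * m for r, m in zip(rho_n, M)))
    tr_dist = sum(abs(a - b) for a, b in zip(rho, rho_n))  # trace norm of rho - rho_n
    op_norm_M = max(M)                                      # operator norm of M
    return prob_gap, tr_dist * op_norm_M
```

For every $n$ the measured probability gap sits below the bound, and both vanish as $n \to \infty$: no measurement can distinguish $\rho_n$ from $\rho$ in that limit, which is exactly the property the trace norm was chosen for.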
Following this discussion we rigorously define $\epsilon$-convertibility for LOCC as:
We say that ${{\left \vert}\Psi {\right \rangle}}$ is $\epsilon$-convertible to ${{\left \vert}\Phi {\right \rangle}}$ by LOCC if for any $\epsilon > 0$ there exists an LOCC operation $\Lambda$ which satisfies the condition $\|\Lambda({{\left \vert}\Psi {\right \rangle}}{{\left \langle}\Psi {\right \vert}})-{{\left \vert}\Phi {\right \rangle}}{{\left \langle}\Phi {\right \vert}}\|_{\rm tr}<\epsilon$, where $\|\cdot\|_{\rm tr}$ is the trace norm.
Similarly, we define $\epsilon$-convertibility for SLOCC as:
\[epsilon SLOCC\] We say that ${{\left \vert}\Psi {\right \rangle}}$ is $\epsilon$-convertible to ${{\left \vert}\Phi {\right \rangle}}$ by SLOCC with probability $p > 0$ if for any $\epsilon
>0$, there exists an SLOCC operation $\Lambda $ which satisfies the following condition, $\| \Lambda ( {{\left \vert}\Psi {\right \rangle}} {{\left \langle}\Psi {\right \vert}} )/{\mathrm{tr}}\; \Lambda (
{{\left \vert}\Psi {\right \rangle}} {{\left \langle}\Psi {\right \vert}} ) - {{\left \vert}\Phi {\right \rangle}} {{\left \langle}\Phi {\right \vert}} \| _{\rm tr} <
\epsilon$ and ${\mathrm{tr}}\; \Lambda ( {{\left \vert}\Psi {\right \rangle}} {{\left \langle}\Psi {\right \vert}} ) \ge p$.
This definition of $\epsilon$-convertibility under SLOCC means that [*with more than some fixed non-zero probability $p$*]{}, ${{\left \vert}\Psi {\right \rangle}}$ can be converted to any neighborhood of ${{\left \vert}\Phi {\right \rangle}}$ by SLOCC.
Here, we prove by means of $\epsilon$-convertibility that we can recover enough continuity to achieve a classification of states by SLOCC convertibility. In infinite-dimensional systems when we consider SLOCC convertibility it might happen that ${{\left \vert}\Psi {\right \rangle}}$ cannot be converted to ${{\left \vert}\Phi {\right \rangle}}$ and yet there exists a sequence of ${{\left \vert}\Phi _n {\right \rangle}}$ such that ${{\left \vert}\Psi {\right \rangle}}$ can be converted to ${{\left \vert}\Phi _n {\right \rangle}}$ with probability $p_n$ and $\lim _{n
\rightarrow \infty} p_n > 0$; however, we cannot discriminate ${{\left \vert}\Phi {\right \rangle}}$ from ${{\left \vert}\Phi _n {\right \rangle}}$ for large $n$. If this were to happen, it would be unreasonable to say that ${{\left \vert}\Psi {\right \rangle}}$ cannot be converted to ${{\left \vert}\Phi {\right \rangle}}$ by SLOCC. However, by means of our new definition of convertibility, we avoid such a discontinuity. That is, we can easily show the following continuity property of $\epsilon$-convertibility.
\[epsilon SLOCC lemma\] If ${{\left \vert}\Psi {\right \rangle}}$ is not $\epsilon$-convertible to ${{\left \vert}\Phi {\right \rangle}}$ by SLOCC, but ${{\left \vert}\Psi {\right \rangle}}$ is $\epsilon$-convertible to ${{\left \vert}\Phi
_n {\right \rangle}}$ by SLOCC with probability $p_n$ for all $n$, where $\lim _{n
\rightarrow \infty} {{\left \vert}\Phi _n {\right \rangle}} {{\left \langle}\Phi _n {\right \vert}} = {{\left \vert}\Phi {\right \rangle}}
{{\left \langle}\Phi {\right \vert}}$ in the trace norm, then $\lim _{n \rightarrow \infty}
p_n = 0$.
We prove this lemma by contradiction. Assume that ${{\left \vert}\Psi {\right \rangle}}$ is not $\epsilon$-convertible to ${{\left \vert}\Phi {\right \rangle}}$, that ${{\left \vert}\Psi {\right \rangle}}$ is $\epsilon$-convertible to ${{\left \vert}\Phi _n {\right \rangle}}$ with probability $p_n > 0$ for all $n$, where $\lim _{n \rightarrow \infty}
{{\left \vert}\Phi _n {\right \rangle}} = {{\left \vert}\Phi {\right \rangle}}$, and, contrary to the conclusion of the lemma, that $\limsup _{n \rightarrow \infty} p_n >0$. We can then derive a contradiction as follows.
Since $\limsup _{n \rightarrow \infty} p_n >0$, there exists a subsequence of $p_{n}$, such that $\lim p_{n(k)} = p >0$ and $p_{n(k)} >p/2$ for all $k \in \mathbb{N}$. Then, since ${{\left \vert}\Psi
{\right \rangle}}$ is $\epsilon$-convertible to ${{\left \vert}\Phi _n {\right \rangle}}$ with probability $p_n$, for any $\epsilon >0$ and for any $k \in \mathbb{N}$, there exists an SLOCC operation $\Lambda _{\epsilon, n(k)}$ such that $\| \Lambda_{\epsilon, n(k)}({{\left \vert}\Psi {\right \rangle}}{{\left \langle}\Psi {\right \vert}})/{\mathrm{tr}}\Lambda_{\epsilon, n(k)}({{\left \vert}\Psi {\right \rangle}}{{\left \langle}\Psi {\right \vert}}) - {{\left \vert}\Phi
_{n(k)} {\right \rangle}}{{\left \langle}\Phi _{n(k)} {\right \vert}} \| < \epsilon$ and ${\mathrm{tr}}\Lambda_{\epsilon, n(k)} ({{\left \vert}\Psi {\right \rangle}}{{\left \langle}\Psi {\right \vert}}) \ge p_{n(k)} >
p/2$. Moreover, since $\lim _{n \rightarrow \infty}{{\left \vert}\Phi _n {\right \rangle}}
={{\left \vert}\Phi {\right \rangle}}$, for any $\epsilon >0$, there exists an $N_{\epsilon}
\in \mathbb{N}$ such that for any $n \ge N_{\epsilon}$, $\|
{{\left \vert}\Phi _n {\right \rangle}}{{\left \langle}\Phi _n {\right \vert}} - {{\left \vert}\Phi {\right \rangle}}{{\left \langle}\Phi {\right \vert}} \| < \epsilon$. Therefore, for any $\epsilon >0$, choosing $k \in \mathbb{N}$ such that $n(k) \ge N_{\epsilon}$, $$\begin{aligned}
&\quad & \| \Lambda _{\epsilon, n(k)}({{\left \vert}\Psi {\right \rangle}}{{\left \langle}\Psi {\right \vert}})/{\mathrm{tr}}\Lambda_{\epsilon, n(k)}({{\left \vert}\Psi {\right \rangle}}{{\left \langle}\Psi {\right \vert}})
-
{{\left \vert}\Phi {\right \rangle}}{{\left \langle}\Phi {\right \vert}} \| \\
&\le & \| \Lambda_{\epsilon, n(k)}({{\left \vert}\Psi {\right \rangle}}{{\left \langle}\Psi {\right \vert}})/{\mathrm{tr}}\Lambda_{\epsilon, n(k)}({{\left \vert}\Psi {\right \rangle}}{{\left \langle}\Psi {\right \vert}}) -
{{\left \vert}\Phi _{n(k)} {\right \rangle}}{{\left \langle}\Phi _{n(k)} {\right \vert}} \| + \| {{\left \vert}\Phi
_{n(k)} {\right \rangle}}{{\left \langle}\Phi _{n(k)} {\right \vert}} - {{\left \vert}\Phi {\right \rangle}}{{\left \langle}\Phi {\right \vert}}\| \\
&\le & 2 \epsilon.\end{aligned}$$ Moreover, ${\mathrm{tr}}\Lambda_{\epsilon, n(k)}({{\left \vert}\Psi {\right \rangle}}{{\left \langle}\Psi {\right \vert}}) \ge
p_{n(k)} >p/2$. Therefore, ${{\left \vert}\Psi {\right \rangle}}$ is $\epsilon$-convertible to ${{\left \vert}\Phi {\right \rangle}}$ with probability $p/2$. This is a contradiction. Therefore, if ${{\left \vert}\Psi {\right \rangle}}$ is not $\epsilon$-convertible to ${{\left \vert}\Phi {\right \rangle}} $, and if ${{\left \vert}\Psi {\right \rangle}}$ is $\epsilon$-convertible to ${{\left \vert}\Phi _n {\right \rangle}}$ with probability $p_n$ where $\lim _{n \rightarrow
\infty} {{\left \vert}\Phi_n {\right \rangle}} = {{\left \vert}\Phi {\right \rangle}} $, then, $\lim _{n \rightarrow
\infty} p_n = 0$. $\square$
This lemma means that if ${{\left \vert}\Psi {\right \rangle}}$ cannot be converted to ${{\left \vert}\Phi {\right \rangle}}$ by SLOCC, then ${{\left \vert}\Psi {\right \rangle}}$ is also almost certainly inconvertible to states near to ${{\left \vert}\Phi {\right \rangle}}$. Therefore, our definition of $\epsilon$-convertibility preserves continuity of the theory (at least sufficiently for the purposes of the classification of states), and we can avoid the discontinuity difficulty mentioned above.
Nielsen’s and Vidal’s theorems for infinite-dimensional systems {#nielsen vidal}
---------------------------------------------------------------
In this subsection we reconstruct Nielsen’s and Vidal’s theorems for infinite-dimensional systems by means of $\epsilon$-convertibility. As a result, we will see that we can also avoid the difficulty of a potentially infinite cost for classical communication by our convertibility, that is, only a finite amount of classical communication is actually necessary for our theory of convertibility. As is well known, Nielsen’s and Vidal’s theorems give the necessary and sufficient conditions of LOCC and SLOCC, respectively. Therefore, by proving these theorems rigorously we may obtain a firm foundation for the analysis of SLOCC convertibility for infinite-dimensional systems, which we shall consider in the next section. Since Vidal’s theorem is a generalization of Nielsen’s theorem, we shall first discuss Nielsen’s theorem and then go on to consider Vidal’s theorem.
In finite-dimensional systems Nielsen’s theorem gives the necessary and sufficient conditions for LOCC convertibility between a pair of bipartite pure states ${{\left \vert}\Phi {\right \rangle}}$ and ${{\left \vert}\Psi {\right \rangle}}$ as follows $$\label{nielsenfinite}
{{\left \vert}\Psi {\right \rangle}} \rightarrow {{\left \vert}\Phi {\right \rangle}}
~~ \Leftrightarrow ~~
\mathbf{\lambda} \prec \mathbf{\mu}\;,$$ where the arrow $\rightarrow$ represents convertibility under LOCC, and $\mathbf{\lambda}$ and $\mathbf{\mu}$ represent sequences of Schmidt coefficients (in descending order) of the states ${{\left \vert}\Psi {\right \rangle}}$ and ${{\left \vert}\Phi {\right \rangle}}$, respectively, and $\prec$ denotes majorization of the sequences [@majorization] (if $\mathbf{\lambda} \prec \mathbf{\mu}$, we say “$\mathbf{\lambda}$ is majorized by $\mathbf{\mu}$”). In infinite-dimensional systems we can show that Eq. (\[nielsenfinite\]) is still valid where we replace the meaning of $\rightarrow$ by $\epsilon$-convertibility under LOCC. Nielsen’s theorem then takes the following form for infinite-dimensional systems:
\[epsilon Nielsen\] ${{\left \vert}\Psi {\right \rangle}}$ is $\epsilon$-convertible to ${{\left \vert}\Phi {\right \rangle}}$, if and only if $\lambda \prec \mu $, where $\prec$ means majorization in infinite-dimensional systems (see Appendix \[Majorization\]), and $\mathbf{\lambda}$ and $\mathbf{\mu}$ are the Schmidt coefficients of ${{\left \vert}\Psi {\right \rangle}}$ and ${{\left \vert}\Phi {\right \rangle}}$, respectively.
Since the proof of theorem \[epsilon Nielsen\] is long, we have placed the rigorous proof of this theorem in Appendix \[Proof of Nielsen\]. Below we only give a sketch of the proof:
**Sketch of Proof**
1\) The necessary part: We can directly extend the proof of necessity of the original theorem to infinite-dimensional systems. The necessity part of the original theorem is constructed using the Lo-Popescu theorem (Theorem \[Lo-Popescu\]) [@lo-popescu] and Uhlmann’s theorem (Theorem \[Uhlmann\]). Since these two theorems can themselves be extended to infinite-dimensional systems (see Appendices A and B), the same proof as in finite-dimensional systems still holds in infinite-dimensional systems.
2\) The sufficient part: In the proof of sufficiency, our definition of $\epsilon$-convertibility plays a crucial role in extending the proof of Nielsen’s theorem. Our proof is based on the proof for finite-dimensional systems in Ref. [@uniqueness] and is extended to genuine infinite-dimensional states by means of $\epsilon$-convertibility. We can show that for any $N$, there exists a state $ {{\left \vert}\Phi' {\right \rangle}}$ (which depends on $N$) such that its first $N$ Schmidt coefficients are equal to the Schmidt coefficients of ${{\left \vert}\Phi {\right \rangle}}$ and where the Schmidt coefficients of ${{\left \vert}\Psi {\right \rangle}}$ are majorized by the Schmidt coefficients of ${{\left \vert}\Phi' {\right \rangle}}$. Therefore, for every neighborhood of ${{\left \vert}\Phi {\right \rangle}}$, we can always find a state to which ${{\left \vert}\Psi {\right \rangle}}$ can be converted under LOCC. $\square$
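For finitely truncated Schmidt spectra the majorization condition of Theorem \[epsilon Nielsen\] can be checked directly; the following Python sketch (the helper name `majorized` is our own) compares partial sums of the descending coefficient sequences:

```python
import numpy as np

def majorized(lam, mu, tol=1e-12):
    """Check lam ≺ mu: every partial sum of the descending
    rearrangement of lam is bounded by that of mu."""
    lam = np.sort(np.asarray(lam, float))[::-1]
    mu = np.sort(np.asarray(mu, float))[::-1]
    return bool(np.all(np.cumsum(lam) <= np.cumsum(mu) + tol))

# Uniform (maximally entangled) Schmidt coefficients are majorized by
# any other distribution, so by Nielsen's theorem such a state can be
# converted under LOCC to any state of equal Schmidt rank.
lam = np.full(4, 0.25)
mu = np.array([0.5, 0.3, 0.1, 0.1])
print(majorized(lam, mu))   # True: conversion lam -> mu is possible
print(majorized(mu, lam))   # False: the reverse is not
```

For genuinely infinite spectra the check above can only be approximate, which is precisely the gap the $\epsilon$-convertibility argument in the sufficiency proof closes.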
By means of Nielsen’s theorem in infinite-dimensional systems we can extend Vidal’s theorem [@vidal], which gives the necessary and sufficient condition for SLOCC convertibility with probability $p$, to infinite-dimensional systems using $\epsilon$-convertibility. Vidal’s theorem states that a bipartite pure state ${{\left \vert}\Psi {\right \rangle}}$ can be converted to another bipartite pure state ${{\left \vert}\Phi {\right \rangle}}$ under SLOCC with probability at least $p$ if and only if $\lambda\prec ^{\omega} p
\mu$ \[here $\prec ^{\omega}$ denotes super-majorization and is defined in Appendix A1, Eq. (\[super-majorization\]) of Definition 4\]. The generalization of Vidal’s theorem for $\epsilon$-convertibility can then be written:
\[epsilon Vidal\] ${{\left \vert}\Psi {\right \rangle}}$ is $\epsilon$-convertible to ${{\left \vert}\Phi {\right \rangle}}$ by SLOCC with probability $p$, if and only if $\lambda \prec ^{\omega} p
\mu$ is satisfied, where $\lambda$ and $\mu$ are the Schmidt coefficients of ${{\left \vert}\Psi {\right \rangle}}$ and ${{\left \vert}\Phi {\right \rangle}}$, respectively.
The proof of this theorem is given in Appendix \[Proof of Vidal\].
Therefore, the extension of Vidal’s theorem also applies to $\epsilon$-convertibility.
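In terms of tail sums, Vidal’s criterion has the equivalent operational form: the optimal SLOCC conversion probability equals $\min_n \big(\sum_{i\ge n}\lambda_i \big/ \sum_{i\ge n}\mu_i\big)$. A minimal Python sketch for finite (or truncated) Schmidt spectra follows; the function name is our own:

```python
import numpy as np

def max_slocc_probability(lam, mu):
    """Largest p with lam ≺^ω p·mu (Vidal): the minimum over n of
    the ratio of tail sums of the descending Schmidt coefficients."""
    lam = np.sort(np.asarray(lam, float))[::-1]
    mu = np.sort(np.asarray(mu, float))[::-1]
    d = max(len(lam), len(mu))
    lam = np.pad(lam, (0, d - len(lam)))
    mu = np.pad(mu, (0, d - len(mu)))
    tails_lam = np.cumsum(lam[::-1])[::-1]   # sum_{i>=n} lam_i
    tails_mu = np.cumsum(mu[::-1])[::-1]
    mask = tails_mu > 0                      # skip trivial constraints
    return float(np.min(tails_lam[mask] / tails_mu[mask]))

print(max_slocc_probability([0.6, 0.4], [0.5, 0.5]))   # 0.8
```

Converting toward the maximally entangled qubit pair from Schmidt coefficients $(0.6, 0.4)$ succeeds with probability at most $0.8$, the ratio of the smallest tails.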
Although infinite amounts of classical information do not exist in the real world, an infinite amount of classical communication is necessary to convert one genuine infinite-dimensional state to another by LOCC and SLOCC in the conventional theory of convertibility. From the proof of Theorem \[epsilon Nielsen\], we can show that we can avoid such infinite costs of classical communication in LOCC convertibility by our new definition of $\epsilon$-convertibility. In the proof of this theorem, we showed that there exists a natural number $M$ such that ${{\left \vert}\Phi' {\right \rangle}}$ satisfies the condition $\mu'_N = \lambda _N$ for $N \ge M$. Therefore, the LOCC operation by which ${{\left \vert}\Psi {\right \rangle}}$ can be converted into ${{\left \vert}\Phi' {\right \rangle}}$ is actually an LOCC operation requiring only a finite amount of classical communication. We can also show a similar result for SLOCC convertibility. By the proof of Theorem \[epsilon Vidal\] it is easily seen that we can construct the protocol of SLOCC with only a finite amount of classical communication in a manner similar to LOCC. As a result, in our definition of $\epsilon$-convertibility of LOCC and SLOCC, we can convert states with any finite accuracy by only a finite amount of communication, and only when this error goes to zero does the amount of classical communication go to infinity. Therefore, our definition of $\epsilon$-convertibility yields a theory of single-copy LOCC and SLOCC convertibility requiring only a finite amount of classical communication even in the infinite-dimensional setting.
Here we need to add two final remarks about our framework of $\epsilon$-convertibility. From the proofs of Theorems \[epsilon Nielsen\] and \[epsilon Vidal\], we can derive another interpretation of $\epsilon$-convertibility. First, in the case of LOCC, that is, Nielsen’s theorem, since the Schmidt coefficients of the state ${{\left \vert}\Phi ' {\right \rangle}} = \sum _{i=1}^{\infty} \sqrt{\mu '_i} {{\left \vert}i {\right \rangle}} \otimes
{{\left \vert}i {\right \rangle}}$ are also majorized by those of ${{\left \vert}\Phi {\right \rangle}}$ in the proof of Theorem \[epsilon Nielsen\] (Appendix \[Proof of Nielsen\]), we can immediately see the following fact: If $\lambda \prec \mu $, where $\lambda$ and $\mu$ are the Schmidt coefficients of ${{\left \vert}\Psi {\right \rangle}}$ and ${{\left \vert}\Phi {\right \rangle}}$ respectively, then there exists a sequence of LOCC operations $\{ \Lambda _n \} _{n=1}^{\infty}$ such that for all $n \in
\mathbb{N}$, $\Lambda_n \cdots \Lambda _1 ({{\left \vert}\Psi {\right \rangle}} {{\left \langle}\Psi {\right \vert}}
)$ has the same Schmidt basis as ${{\left \vert}\Phi {\right \rangle}}$, its Schmidt coefficients are majorized by those of ${{\left \vert}\Phi {\right \rangle}}$, and they also satisfy $\lim _{n
\rightarrow \infty} \Lambda _n \cdots \Lambda _1 ({{\left \vert}\Psi {\right \rangle}}
{{\left \langle}\Psi {\right \vert}} ) = {{\left \vert}\Phi {\right \rangle}} {{\left \langle}\Phi {\right \vert}}$. Thus, we can interpret the above sequence of LOCC operations as an LOCC protocol with an infinite number of rounds of classical communication. Second, since in the proof of Theorem \[epsilon Vidal\] in Appendix \[Proof of Vidal\] we constructed a sequence of SLOCC operations $\{ \Lambda _n \}_{n=1}^{\infty}$ such that $\lim _{n \rightarrow \infty} \Lambda _n \cdots \Lambda _1
({{\left \vert}\Psi {\right \rangle}} {{\left \langle}\Psi {\right \vert}} )/{\mathrm{tr}}\; \Lambda _n \cdots \Lambda _1
({{\left \vert}\Psi {\right \rangle}} {{\left \langle}\Psi {\right \vert}} ) = {{\left \vert}\Phi {\right \rangle}}{{\left \langle}\Phi {\right \vert}}$, we can consider that Vidal’s theorem is also naturally extended to infinite-dimensional systems by the redefinition of LOCC convertibility including an infinite number of steps of classical communication. Therefore, we can also say that both the Nielsen and Vidal theorems can be extended to infinite-dimensional systems if we allow for an infinite number of steps of LOCC.
In this section, we proposed a new definition of convertibility, [*$\epsilon$-convertibility*]{}, to treat entanglement convertibility between genuine infinite-dimensional states. This redefinition is suitable from both the technical and the practical viewpoint: it avoids the difficulties of discontinuity and of an infinite cost of classical communication in infinite-dimensional systems. Then, by means of $\epsilon$-convertibility, we proved the Nielsen and Vidal theorems, which are the fundamental theorems of LOCC and SLOCC convertibility, in infinite-dimensional systems. As a result, under our change of definition the framework of entanglement convertibility is preserved in infinite-dimensional systems, and therefore our definition of $\epsilon$-convertibility is suitable and sufficient for realistic quantum information processing in infinite-dimensional systems.
Extension of Schmidt rank {#Extension}
=========================
Definition and its basic property {#definition}
---------------------------------
In this section we discuss the SLOCC convertibility of infinite-dimensional systems and show that there are many important differences between the structure of the SLOCC classification of genuinely infinite-dimensional states and that of finite-dimensional states. For this purpose, in this subsection we first define a pair of new SLOCC monotones, which can be considered as extensions of the Schmidt rank, and then analyze their properties. Finally, we show that under SLOCC convertibility there is a continuum of distinct classes of states in infinite-dimensional systems. In the following we always understand SLOCC convertibility in the sense of the $\epsilon$-convertibility defined above, and we henceforth omit the qualifier ‘$\epsilon$’.
To study convertibility in detail, monotones of convertibility are crucially important. In finite-dimensional systems, the Schmidt rank (the rank of the reduced density matrix) gives the necessary and sufficient condition for SLOCC convertibility. In infinite-dimensional systems, by contrast, almost all states have infinite Schmidt rank, so classification by Schmidt rank is not useful. Therefore, new SLOCC monotones for genuine infinite-dimensional entangled states are essential for the analysis of SLOCC convertibility between genuinely infinite-dimensional states. In what follows, we define a pair of new SLOCC monotones $R^-$ and $R^+$, which can be considered an extension of the Schmidt rank to infinite-dimensional systems.
Since the usual Schmidt rank represents how quickly Schmidt coefficients vanish, when we consider their extension to genuine infinite-dimensional states it is natural to define the extension of this concept as a function which represents how quickly a sequence of Schmidt coefficients converge to zero. In Vidal’s theorem Schmidt coefficients always appear in the form of a sum from $n$ to $\infty$ which is an LOCC monotone for all $n \in
\mathbb{N}$ and is called “[*Vidal’s monotone*]{}” [@monotone]. Therefore, rather than studying the direct convergence of Schmidt coefficients $\{ \lambda _n
\}_{n=1}^{\infty}$ we shall study the convergence of Vidal’s monotones $\{ \sum _{i=n}^{\infty} \lambda _i \}_{n=1}^{\infty}$. To measure the speed of convergence of Vidal’s monotone we compare a sequence of Vidal’s monotones with some real parameterized class of sequences. Thus, we define the new monotones as follows:
\[def of monotone\] For a parameterized class of sequences $\{ f_r (n) \} _{r \in
(a,b)}$ which satisfy the following conditions:
$\forall r \in (a,b)$, $\lim _{n \rightarrow \infty} f_r(n) =0$
$\forall r \in (a,b)$, $n_1 < n_2 \Rightarrow f_r(n_1) > f_r(n_2) > 0$
$\forall n \in \mathbb{N}$, $r_1 < r_2 \Rightarrow \lim _{n \rightarrow \infty}
\frac{f_{r_1} (n) }{f_{r_2} (n) } =0$,
where $n \in \mathbb{N}$, $0 \le a < b \le \infty$, we define a pair of functions $R^+_{f_r} ({{\left \vert}\Psi {\right \rangle}} )$ and $R^-_{f_r}
({{\left \vert}\Psi {\right \rangle}})$ by $$\begin{aligned}
R^+ _{f_r}({{\left \vert}\Psi {\right \rangle}}) &=& \inf \{ r \in (a,b) | \lim _{n
\rightarrow \infty}
\frac{\sum _{i=n}^{\infty} \lambda _i}{f_r(n)} = 0 \} \\
R^- _{f_r}({{\left \vert}\Psi {\right \rangle}}) &=& \inf \{ r \in (a,b) | \underline{\lim}
_{n \rightarrow \infty}
\frac{\sum _{i=n}^{\infty} \lambda _i}{f_r(n)} = 0 \}\;.\end{aligned}$$ If for all $r \in (a,b) $, $\overline{\lim} _{n \rightarrow
\infty} {\sum _{i=n}^{\infty} \lambda _i}/{f_r(n)} > 0$, then we define $R^+_{f_r} ({{\left \vert}\Psi {\right \rangle}}) =b$ (and analogously $R^-_{f_r} ({{\left \vert}\Psi {\right \rangle}}) =b$ if $\underline{\lim} _{n \rightarrow \infty} {\sum _{i=n}^{\infty} \lambda _i}/{f_r(n)} > 0$ for all $r \in (a,b)$). Here, $\overline{\lim}$ denotes $\limsup$ and $\underline{\lim}$ denotes $\liminf$.
For the definition of $R^+_{f_r}({{\left \vert}\Psi {\right \rangle}})$, we could also have defined this function as $$R^+ _{f_r}({{\left \vert}\Psi {\right \rangle}}) = \inf \{ r \in (a,b) | \overline{\lim} _{n
\rightarrow \infty}
\frac{\sum _{i=n}^{\infty} \lambda _i}{f_r(n)} = 0 \}\;,$$ however this definition is the same as the previous definition, since $\overline{\lim} _{n \rightarrow \infty} {\sum _{i=n}^{\infty}
\lambda _i}/{f_r(n)} = 0$ guarantees $\lim_{n \rightarrow \infty}
{\sum _{i=n}^{\infty} \lambda _i}/{f_r(n)} = 0$. Note that the limits $\overline{\lim} _{n \rightarrow \infty} \sum _{i=n}^{\infty}
\lambda _i / f_r(n)$ and $\underline{\lim} _{n \rightarrow \infty}
\sum _{i=n}^{\infty} \lambda _i /f_r(n)$ do not generally coincide. Thus, to measure the speed of the convergence, we need two functions $R^+_{f_r}({{\left \vert}\Psi {\right \rangle}})$ and $R^-_{f_r}({{\left \vert}\Psi {\right \rangle}})$ corresponding to these different approaches of the limit as given above. By their definition, we can easily see that $R^+ _{f_r}$ and $R^- _{f_r}$ satisfy $R^- _{f_r} ({{\left \vert}\Psi {\right \rangle}}) \le R^+ _{f_r}
({{\left \vert}\Psi {\right \rangle}})$ for all ${{\left \vert}\Psi {\right \rangle}}$. As we might expect, both $R^+
_{f_r}$ and $R^- _{f_r}$ are SLOCC monotones; moreover, a partial sufficient condition for SLOCC convertibility in terms of them also holds, as stated in the following theorem. (When the choice of $f_r(x)$ is clear we write simply $R^+$ and $R^-$ for these monotones.)
\[theorem of monotone\] For all $f_r$ which satisfy the condition in Definition \[def of monotone\],
1. If ${{\left \vert}\Psi {\right \rangle}}$ can be converted to ${{\left \vert}\Phi {\right \rangle}}$ by SLOCC then $R^+_{f_r} ({{\left \vert}\Psi {\right \rangle}}) \ge R^+_{f_r} ({{\left \vert}\Phi {\right \rangle}})$ and $R^-_{f_r}
({{\left \vert}\Psi {\right \rangle}}) \ge R^-_{f_r} ({{\left \vert}\Phi {\right \rangle}})$.
2. If $R^+_{f_r}({{\left \vert}\Phi {\right \rangle}}) < R^-_{f_r}({{\left \vert}\Psi {\right \rangle}})$, then ${{\left \vert}\Psi {\right \rangle}}$ can be converted to ${{\left \vert}\Phi {\right \rangle}}$ by SLOCC.
Proof of part 1:\
We only prove this for the case of $R^+ _{f_r}$ since the proof for $R^- _{f_r}$ is identical. Suppose $R^+ _{f_r} ({{\left \vert}\Phi {\right \rangle}}) >
R^+ _{f_r} ({{\left \vert}\Psi {\right \rangle}})$ then for all $R^+ _{f_r} ({{\left \vert}\Psi {\right \rangle}}) < r
< R^+ _{f_r} ({{\left \vert}\Phi {\right \rangle}})$, $\overline{\lim} _{n \rightarrow
\infty} \sum _{i=n}^{\infty} \lambda _i/f_r (n) = 0$ and $\overline{\lim} _{n \rightarrow \infty} \sum _{i=n}^{\infty} \mu
_i / f_r (n) > 0$, where $\{ \lambda _i \}_{i=0}^{\infty}$ and $\{
\mu _i \}_{i=0}^{\infty}$ are Schmidt coefficients of ${{\left \vert}\Psi
{\right \rangle}}$ and ${{\left \vert}\Phi {\right \rangle}}$, respectively. Thus, for all $\delta
> 0$ there exists an $N_0 (\delta)$ such that if $n > N _0 (\delta
)$, then $\sum _{i=n}^{\infty} \lambda _i / f_r(n) < \delta $. Suppose $a \stackrel{def}{=} \overline{\lim} _{n \rightarrow
\infty} \sum _{i=n}^{\infty} \mu _i / f_r(n) >0$, then there exists a partial sequence of $\sum _{i=n}^{\infty} \mu _i /
f_r(n)$, say $\sum _{i=k(n)}^{\infty} \mu _i / f_r(k(n))$, such that $\lim _{n \rightarrow \infty} \sum _{i=k(n)}^{\infty} \mu _i
/ f_r(k(n)) = a
>0$. Then there exists an $N_1 \in \mathbb{N}$ such that for all $n > N_1$, $\sum _{i=k(n)}^{\infty} \mu _i / f_r(k(n)) > a/2$. Therefore if we define $N_2(\delta)$ as $N_2(\delta) = \max (N_1, \min \{n \in
N |k(n) \ge N_0 \})$, then for all $n > N_2(\delta)$, $\sum
_{i=k(n)}^{\infty} \lambda _i / f_r(k(n)) < \delta$ and $
f_r(k(n)) / \sum _{i=k(n)}^{\infty} \mu _i < 2/a$. That is, $\sum
_{i=k(n)}^{\infty} \lambda _i / \sum _{i=k(n)}^{\infty} \mu _i <
2\delta / a$. This means $\underline{\lim}_{n \rightarrow
\infty} \sum _{i=n}^{\infty} \lambda _i / \sum _{i=n}^{\infty} \mu
_i =0$, which, by Vidal’s theorem, means that ${{\left \vert}\Psi {\right \rangle}}$ cannot be converted to ${{\left \vert}\Phi {\right \rangle}}$ by SLOCC. This proves the contrapositive of part 1.
Proof of part 2:\
If $R^+({{\left \vert}\Phi {\right \rangle}}) < R^-({{\left \vert}\Psi {\right \rangle}})$, then for all $R^+({{\left \vert}\Phi {\right \rangle}}) < r < R^-({{\left \vert}\Psi {\right \rangle}})$, $a \stackrel{def}{=}
\underline{\lim} _{n \rightarrow \infty} \sum _{i=n}^{\infty}
\lambda _i / f_r(n) >0$ and $\lim _{n \rightarrow \infty} \sum
_{i=n}^{\infty} \mu _i / f_r(n) =0$. Then, for all $\delta > 0$ there exists an $N_0(\delta)$ such that if $n > N_0(\delta)$, then $\sum _{i=n}^{\infty} \lambda _i / f_r(n) > a/2$ and $f_r(n) / \sum _{i=n}^{\infty} \mu _i > 1/\delta$, so that $\sum _{i=n}^{\infty} \lambda _i / \sum _{i=n}^{\infty} \mu _i
> a / (2\delta)$. That is, $\lim _{n \rightarrow \infty} \sum
_{i=n}^{\infty} \lambda _i / \sum _{i=n}^{\infty} \mu _i =
\infty$. From Vidal’s theorem, this means that ${{\left \vert}\Psi {\right \rangle}}$ can be converted to ${{\left \vert}\Phi {\right \rangle}}$ by SLOCC. $\square$
Hence, this pair of SLOCC monotones provides a sufficient condition for SLOCC convertibility, at least in the above sense. As we shall see in the following sections, by using $R^+_{f_r}({{\left \vert}\Psi {\right \rangle}})$ and $R^-_{f_r}({{\left \vert}\Psi {\right \rangle}})$ together we can classify SLOCC convertibility more finely than with previously known SLOCC monotones, although both $R^+_{f_r}$ and $R^-_{f_r}$ are needed for the sufficient condition. In this sense we may regard this pair of monotones as an extension of the Schmidt rank.
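As a hedged numerical sketch of how $R^{\pm}$ behave, take Schmidt coefficients $\lambda_n \propto n^{-2}$ (truncated at $N$ terms, an approximation we impose) and compare their Vidal monotones against the family $f_r(n) = n^{-(1/r-1)}$, $r\in(0,1)$, one admissible choice satisfying the conditions of Definition \[def of monotone\]. Since the tails behave as $n^{-1}$ and $f_r(n)=n^{1-1/r}$, the ratio vanishes exactly when $r>1/2$, indicating $R^+=R^-=1/2$ for this state:

```python
import numpy as np

N = 10**6
n = np.arange(1, N + 1, dtype=float)
lam = n**-2.0
lam /= lam.sum()                       # normalized Schmidt coefficients
tails = np.cumsum(lam[::-1])[::-1]     # Vidal monotones sum_{i>=n} lam_i

def ratio(r, m):
    # Compare the m-th tail with f_r(m) = m^{-(1/r - 1)}.
    return tails[m - 1] / m**-(1.0 / r - 1.0)

# The ratio decays for r > 1/2 and blows up for r < 1/2,
# consistent with R^+ = R^- = 1/2 for this state.
assert ratio(0.7, 100_000) < ratio(0.7, 1_000)
assert ratio(0.3, 100_000) > ratio(0.3, 1_000)
```

The truncation keeps the comparison indices far below $N$ so that the truncated tails still approximate the infinite ones.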
To close this subsection we note one important fact which follows easily from Theorem \[theorem of monotone\]: [*in infinite-dimensional systems there are at least continuum-many different classes of SLOCC convertibility.*]{} Since $R^+({{\left \vert}\Psi {\right \rangle}})$ (or $R^- ({{\left \vert}\Psi {\right \rangle}})$) is an SLOCC monotone whose range is a non-trivial interval of real numbers, states ${{\left \vert}\Psi _r {\right \rangle}}$ satisfying $R^+({{\left \vert}\Psi _r {\right \rangle}}) =r$ must belong to different SLOCC-convertibility classes for different values of $r$. That is, there exists an injective map from a non-trivial interval of real numbers into the quotient set of states under SLOCC convertibility. Therefore, in infinite-dimensional bipartite systems, the cardinality of this quotient set is greater than or equal to the cardinality of the [*continuum*]{} (the cardinality of an arbitrary interval of real numbers) [@kolmogorov]. By contrast, in finite-dimensional systems the cardinality of the quotient set equals the dimension of the local systems. This fact is remarkable: the number of such classes is strictly larger than the local dimension (which is only [*countably infinite*]{}) in infinite-dimensional systems.
Examples of $R^+({{\left \vert}\Psi {\right \rangle}})$ and $R^-({{\left \vert}\Psi {\right \rangle}})$ {#exsample}
-------------------------------------------------------------------------------------------------------
In this subsection we construct important examples of the SLOCC monotones $R^+({{\left \vert}\Psi {\right \rangle}})$ and $R^-({{\left \vert}\Psi {\right \rangle}})$, and analyze SLOCC convertibility between some interesting classes of genuinely infinite-dimensional states. One is a class of states with polynomially-damped Schmidt coefficients; another is the class of two-mode squeezed states. Since these new monotones depend on a real parameterized family of sequences $\{ f_r(n) \}_{r \in
(a,b)}$, we need to choose this family suitably to analyze SLOCC convertibility among particular states. For this purpose it is convenient to derive the reference-states class $\{ {{\left \vert}\Psi _r {\right \rangle}}
\}_{r \in (a,b)}$ for particular $\{ f_r(n) \}_{r \in (a,b)}$, as the states which satisfy the condition $R^+({{\left \vert}\Psi _r {\right \rangle}}) =
R^-({{\left \vert}\Psi _r {\right \rangle}}) =r$. Therefore, we first construct a way of finding the reference class ${{\left \vert}\Psi _r {\right \rangle}}$ from $f_r(n)$. The following corollary gives such a method.
\[reference\] If $\{ f_r (n) \} _{r \in (a,b), n \in \mathbb{N}}$ satisfies following conditions:
1. $\forall r \in (a,b), n_1 \le n_2 \Rightarrow
f_r(n_1) > f_r(n_2)$ (monotonically decreasing) \[monotonically decreasing\]
2. $\forall r \in (a,b)$ and $\forall n \in \mathbb{N},
f_r(n) + f_r(n+2) \ge 2f_r(n+1)$ (convexity) \[convexity\]
3. $\forall m \in \mathbb{N}$, $r_1 < r_2 \Leftrightarrow
\lim _{n \rightarrow \infty}
\frac{f_{r_1}(n)}{f_{r_2}(n+m)} =0$ (monotonicity), \[monotonisity\]
then the state ${{\left \vert}\Psi _r {\right \rangle}} = \frac{1}{\sqrt{c_r}} \sum _{n=1}^{\infty} \sqrt{-
f^{'}_r (n)} {{\left \vert}n {\right \rangle}} \otimes {{\left \vert}n {\right \rangle}}$, where $c_r= \sum _{n
=1}^{\infty} -f^{'}_r(n)$ is a normalization constant, satisfies $R^+({{\left \vert}\Psi _r {\right \rangle}}) =
R^-({{\left \vert}\Psi _r {\right \rangle}}) = r$ for the monotones made from $\{ f_r (n) \} _{r \in
(a,b), n \in \mathbb{N}}$. Here $f_r'(x)$ denotes the derivative of $f_r(x)$ with respect to $x$.
From conditions \[monotonically decreasing\] and \[convexity\] above, there exists a class of twice-differentiable functions $\{
f_r(x) \} _{r \in (a,b), x \in \mathbb{R^+ }}$ extending the sequences $\{ f_r (n) \} _{r \in (a,b), n \in
\mathbb{N}}$ such that $f^{'}_r(x) < 0$ and $f^{''}_r(x) \ge 0$. Therefore, the class of states $\{ {{\left \vert}\Psi
_r {\right \rangle}} \}_{r \in (a,b)}$ is well defined, and their Schmidt coefficients are $\{ -f^{'}_r(n) /c_r \} _{n=1}^{\infty}$, in decreasing order. By definition then $$\begin{aligned}
\int _{n}^{\infty} \frac{-f^{'}_r(x)}{c_r} dx & \le & \sum
_{k=n}^{\infty} \frac{-f^{'}_r(k)}{c_r} \le \int _{n-1}^{\infty}
\frac{-f^{'}_r(x)}{c_r} dx
\nonumber \\
\frac{f_r(n)}{c_rf_{r_1}(n)} & \le & \sum _{k=n}^{\infty}
\frac{-f^{'}_r(k)}{c_rf_{r_1}(n)} \le
\frac{f_r(n-1)}{c_rf_{r_1}(n)}\;. \nonumber\end{aligned}$$ If $r<r_1$ then $$\lim _{n \rightarrow \infty} \sum _{k=n}^{\infty}
\frac{-f^{'}_r(k)}{c_rf_{r_1}(n)} \le \lim _{n \rightarrow \infty}
\frac{f_r(n-1)}{c_rf_{r_1}(n)} = 0\;,\nonumber$$ and if $r>r_1$ then $$\lim _{n \rightarrow \infty} \sum _{k=n}^{\infty}
\frac{-f^{'}_r(k)}{c_rf_{r_1}(n)} \ge \lim _{n \rightarrow \infty}
\frac{f_r(n)}{c_rf_{r_1}(n)} = +\infty \;.\nonumber$$ Thus, $R^+({{\left \vert}\Psi _r {\right \rangle}}) = R^-({{\left \vert}\Psi _r {\right \rangle}}) = r$. $\square$
This Corollary means that with the above three additional conditions for $f_r(n)$ we may always derive a class of reference states which correspond to each value of $R^-({{\left \vert}\Psi {\right \rangle}})$ and $R^+({{\left \vert}\Psi {\right \rangle}})$.
In what follows we construct examples of SLOCC monotones by means of the above Corollary and analyze two remarkable classes of states. One corresponds to the states which belong a higher class of SLOCC convertibility and the other to the well-known two-mode squeezed states.
As a first example, consider $R^-({{\left \vert}\Psi {\right \rangle}})$ and $R^+({{\left \vert}\Psi {\right \rangle}})$ made from $\{ f_r(n) = n^{-(\frac{1}{r}-1)} \} _{r \in (0,1)}$. By Corollary \[reference\], their reference class is $${{\left \vert}\Psi _r {\right \rangle}} = \frac{1}{\sqrt{\zeta (1/r)}} \sum _{n=1}^{\infty}
n^{-\frac{1}{2r}} {{\left \vert}n {\right \rangle}} \otimes {{\left \vert}n {\right \rangle}}\;,$$ where the Riemann zeta function $\zeta (x)$ appears as a normalization factor. By definition, $R^-({{\left \vert}\Psi {\right \rangle}})$ and $R^+({{\left \vert}\Psi {\right \rangle}})$ measure how quickly the Schmidt coefficients of ${{\left \vert}\Psi {\right \rangle}}$ converge to $0$ as a polynomially-damped function, so they take strictly positive values for states with polynomially-damped Schmidt coefficients. Conversely, for all states ${{\left \vert}\Psi {\right \rangle}}$ whose Schmidt coefficients damp exponentially, like two-mode squeezed states, we have $R^-({{\left \vert}\Psi {\right \rangle}}) = R^+({{\left \vert}\Psi {\right \rangle}}) =0$. Because the Schmidt coefficients can never be asymptotically proportional to $1/n$ in infinite-dimensional systems (since $\sum _{n=1}^{\infty} 1/n =
\infty$), for small $\epsilon > 0$ the state ${{\left \vert}\Psi _r {\right \rangle}}$ with $r=1-
\epsilon$ can be converted to almost any state. In this sense we can say that these states belong to a “higher rank” of entanglement class in terms of single-copy SLOCC. On the other hand, even for small $\epsilon$ the above state is not of the “[*highest*]{}” rank. That is, we can construct a class of states which belong to a higher rank of entanglement class than $\{ {{\left \vert}\Psi _r {\right \rangle}}
\}_{r \in (0,1)}$ as follows. For the states $${{\left \vert}\Psi _t {\right \rangle}} = \frac{1}{C_t} \sum_{n=2}^{\infty}
\frac{1}{\sqrt{n(\log n)^t}} {{\left \vert}n {\right \rangle}} \otimes {{\left \vert}n {\right \rangle}}\;,$$ with $t>1$ (the sum starts at $n=2$, and $t>1$ is required for normalizability), we have $R^-({{\left \vert}\Psi _t {\right \rangle}}) = R^+({{\left \vert}\Psi _t {\right \rangle}})=1$, and one easily sees that for all $t >1$, ${{\left \vert}\Psi _r {\right \rangle}}$ cannot be converted to ${{\left \vert}\Psi _t {\right \rangle}}$ by SLOCC. In a similar manner, for every one-parameter class of states we can always define a class of states which belongs to a higher rank, and from this class a new pair of monotones. Therefore, there does not exist a highest one-parameter class of states within the SLOCC classification.
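As a quick numerical illustration (a sketch; the helper names and truncation sizes are ours), the truncated Schmidt coefficients of the zeta-type states can be checked against their closed-form normalization, and the log-corrected coefficients $1/(n(\log n)^t)$ can be seen to decay more slowly than $n^{-1/r}$; we use $r=1/2$, where the asymptotics are already visible at moderate $n$:

```python
import math

def zeta_state_coeffs(r, n_max):
    """Truncated Schmidt coefficients n^{-1/r} / zeta(1/r) of |Psi_r>."""
    w = [n ** (-1.0 / r) for n in range(1, n_max + 1)]
    s = sum(w)
    return [x / s for x in w]

lam = zeta_state_coeffs(r=0.5, n_max=200_000)
# For r = 1/2 the normalization is zeta(2) = pi^2/6, so lambda_1 ~ 6/pi^2.
assert abs(lam[0] - 6.0 / math.pi**2) < 1e-3

# Ratio of the polynomial coefficient n^{-2} to the log-corrected one
# 1/(n (log n)^2): it tends to 0, so the latter has the longer tail.
ratio = lambda n: n ** (-2.0) / (1.0 / (n * math.log(n) ** 2))
assert ratio(10) > ratio(10**4) > ratio(10**8)
```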
As a next example, consider $f_q(n) = e^{2n \log q} = q^{2n}$, $q
\in (0,1)$. In this case the reference class is ${{\left \vert}\Psi _q {\right \rangle}}=
\frac{1}{c_q} \sum _{n=1}^{\infty} q^n {{\left \vert}n {\right \rangle}} \otimes {{\left \vert}n {\right \rangle}}$, that is, the well-known two-mode squeezed states with squeezing parameter $\frac{1}{2}\log \frac{1+q}{1-q}$. Therefore, $R^+_{f_q}({{\left \vert}\Psi {\right \rangle}})$ and $R^-_{f_q}({{\left \vert}\Psi {\right \rangle}})$ can be regarded as analogs of the squeezing parameter for arbitrary entangled states.
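The closed-form normalization $c_q^2 = \sum_{n\ge1} q^{2n} = q^2/(1-q^2)$ and the identity $\frac{1}{2}\log\frac{1+q}{1-q} = \operatorname{arctanh}(q)$ behind the squeezing parameter can be verified directly; a minimal sketch:

```python
import math

q = 0.6
# Normalization of |Psi_q> = (1/c_q) * sum_{n>=1} q^n |n>|n>:
c_q = math.sqrt(q**2 / (1 - q**2))       # closed form of sqrt(sum_n q^{2n})
probs = [(q**n / c_q) ** 2 for n in range(1, 200)]
assert abs(sum(probs) - 1.0) < 1e-12     # Schmidt coefficients sum to 1

# The squeezing parameter (1/2) log((1+q)/(1-q)) is just arctanh(q).
s = 0.5 * math.log((1 + q) / (1 - q))
assert abs(s - math.atanh(q)) < 1e-12
```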
The above two examples also show that the SLOCC classification is quite different from the classification by the amount of entanglement, that is, the classification under asymptotic (infinite-copy) LOCC in infinite-dimensional systems. In infinite-dimensional systems we often consider the class of two-mode squeezed states ${{\left \vert}\Psi_q {\right \rangle}}=\frac{1}{c_q}\sum_{n=1}^{\infty}q^n{{\left \vert}n {\right \rangle}}\otimes{{\left \vert}n {\right \rangle}}$ with $q = 1 - \epsilon$ instead of the maximally entangled states. Because $ \lim _{q \rightarrow 1} E({{\left \vert}\Psi _q {\right \rangle}}) = \infty$, this state can be converted to almost any state asymptotically by infinite-copy LOCC with unit probability. However, by single-copy SLOCC it cannot be converted with non-zero probability to any state ${{\left \vert}\Psi {\right \rangle}}$ with $R^+({{\left \vert}\Psi {\right \rangle}})>0$, where the monotone $R^+({{\left \vert}\Psi {\right \rangle}})$ is made from $f_r (n) = n^{-(\frac{1}{r}-1)}$. On the other hand, if we consider the class of states ${{\left \vert}\Psi
_r {\right \rangle}}= \sum _{n=1}^{\infty} n^{-\frac{1}{2r}} {{\left \vert}n {\right \rangle}} \otimes
{{\left \vert}n {\right \rangle}}/\sqrt{\zeta (1/r)}$, then, as we have already seen, for small $\epsilon >0$ the state ${{\left \vert}\Psi _{1-\epsilon} {\right \rangle}}$ can be converted to almost any state by single-copy SLOCC with non-zero probability. Although the amount of entanglement of both $\{ {{\left \vert}\Psi _q {\right \rangle}} \}$ and $\{ {{\left \vert}\Psi _r {\right \rangle}} \}$ tends to infinity in the respective limits, $\{
{{\left \vert}\Psi _r {\right \rangle}} \}$ belongs to a higher class than $\{ {{\left \vert}\Psi _q {\right \rangle}}
\}$ in the single-copy scenario.
We add one final remark here: although we have only presented two examples for $f_r(n)$, there may be many other examples which are important in some situations. Generally speaking, for any given states we can find a suitable function $f_r(n)$ for their analysis. For example, if we deal with states whose Schmidt coefficients damp exponentially, we can choose $f_r(n) = \exp
(n^{-1/r}), \exp (\exp (n^{-1/r})), $ etc., as the coefficients damp quickly enough to evaluate the monotones for the states.
Strong inhibition law {#inhibition law}
---------------------
So far we have emphasized the difference between states with exponentially-damped Schmidt coefficients and those with polynomially-damped coefficients, and shown that exponentially-damped states cannot be converted into polynomially-damped states no matter how large their measure of entanglement is. Here we give one more fact which demonstrates the remarkable difference between exponentially and polynomially-damped states: “*No matter how many (finitely many) copies are available, exponentially-damped states cannot be converted into polynomially-damped states*.” This fact can be shown as follows. Suppose ${{\left \vert}\Psi {\right \rangle}}$ is an exponentially-damped state and ${{\left \vert}\Phi {\right \rangle}}$ is a polynomially-damped one; then, rigorously speaking, there exist a real number $r$ and a polynomial $p(n)$ which satisfy $\lim _{n
\rightarrow \infty} \frac{g_{{{\left \vert}\Psi {\right \rangle}}}(n)}{e^{-rn}}=0$ and $\underline{\lim}_{n\rightarrow \infty}
\frac{p(n)}{g_{{{\left \vert}\Phi {\right \rangle}}}(n)} =0$, where $g_{{{\left \vert}\Psi {\right \rangle}}}(n)$ is Vidal’s monotone of ${{\left \vert}\Psi {\right \rangle}}$. Define ${{\left \vert}\xi _r {\right \rangle}} =
\frac{1}{C_r} \sum _{n=1}^{\infty} e^{-rn} {{\left \vert}n {\right \rangle}}\otimes{{\left \vert}n {\right \rangle}}$; then we have $$\begin{aligned}
{{\left \vert}\xi _r {\right \rangle}} ^{\otimes p}&=& \frac{1}{C_r^p} \sum _{n_1, n_2,
\cdots ,n_p}^{\infty} e^{-r(n_1+n_2+ \cdots +n_p)} {{\left \vert}n_1, n_2,
\cdots , n_p {\right \rangle}}\otimes \left \vert n_1, n_2,\cdots,n_p\right\rangle
\nonumber \;.\!\!\end{aligned}$$ If we reorder the Schmidt terms to the form ${{\left \vert}\xi _r {\right \rangle}}
^{\otimes p} = \frac{1}{C} \sum _{k=1}^{\infty} f(k)
{{\left \vert}k {\right \rangle}}\otimes {{\left \vert}k {\right \rangle}}$, a straightforward counting argument shows that $f(k)
\le e^{-r (p! k )^{1/p}}$. Thus, we have $\lim _{n
\rightarrow \infty} {f(n)}/{p(n)} =0$, which means ${{\left \vert}\Psi {\right \rangle}}
^{\otimes p}$ cannot be converted into ${{\left \vert}\Phi {\right \rangle}}$ for any $p \in
\mathbb{N}$. This result shows that in infinite-dimensional systems some classes of states (such as states with finite Schmidt rank, with exponentially-damped Schmidt coefficients, and with polynomially-damped Schmidt coefficients) are distinguished from each other by the SLOCC classification more strongly than in the finite-dimensional case. Thus, *with arbitrarily many (but finitely many) copies*, we can convert neither states of finite rank into states of infinite rank, nor exponentially-damped states into polynomially-damped ones. In finite-dimensional systems there is no feature like this. Therefore, these properties of entanglement are genuine to infinite-dimensional systems and show the specially strong position of states with polynomially-damped Schmidt coefficients from the viewpoint of finite-copy transformations.
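The counting behind the tail estimate can be checked numerically: the number of $p$-tuples of positive integers with $n_1+\cdots+n_p \le s$ is $\binom{s}{p} \le s^p/p!$, which gives $f(k) \le e^{-r(p!\,k)^{1/p}}$ for the $k$-th largest coefficient. A short Python sketch (truncation parameters are ours) verifies this for $p=2$ on a truncation that enumerates the leading coefficients exactly:

```python
import itertools, math

# Unnormalized Schmidt coefficients of |xi_r>^{(x)p}: exp(-r(n_1+...+n_p)).
r, p, N = 1.0, 2, 30
coeffs = sorted(
    (math.exp(-r * sum(t)) for t in itertools.product(range(1, N + 1), repeat=p)),
    reverse=True,
)
# Tuples with exponent sum <= N+1 are fully enumerated by this truncation,
# so the first C(N+1, p) sorted coefficients are exact.
K = math.comb(N + 1, p)
for k in range(1, K + 1):
    assert coeffs[k - 1] <= math.exp(-r * (math.factorial(p) * k) ** (1.0 / p)) + 1e-12
```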
As a final remark for this section we must discuss the energy of such long-tailed states. In realistic situations the set of states which can be produced experimentally is limited by some bound on the energy. Therefore, it is essential to consider the subset of states whose energy respects that bound. However, for several states with polynomially-damped Schmidt coefficients, the mean value of a polynomial Hamiltonian, such as that of the harmonic oscillator, diverges. Therefore, in general only a fraction of the polynomially-damped states can be created in laboratories.
Summary
=======
In this paper, in order to avoid the difficulties of discontinuity and infinite amounts of classical communication in the theory of SLOCC convertibility of infinite-dimensional systems, we proposed a new definition of convertibility, [*$\epsilon$-convertibility*]{}, as the convertibility of states in an approximate setting by means of the trace norm. In Section \[infinite Nielsen\] we showed that this definition guarantees at least weak continuity for SLOCC convertibility (Lemma \[epsilon SLOCC lemma\]) and guarantees that the protocol uses only finite amounts of classical communication. Then we reconstructed the basic theorems of single-copy LOCC and SLOCC transformations, Nielsen’s and Vidal’s theorems, in the infinite-dimensional pure state space (Theorems \[epsilon Nielsen\] and \[epsilon Vidal\]). As a result we showed that under this change of definition the framework of entanglement convertibility is preserved for infinite-dimensional systems; therefore, our definition of $\epsilon$-convertibility for LOCC is suitable and sufficient for realistic quantum information processing in infinite-dimensional systems.
In Section \[Extension\], in order to study SLOCC convertibility in infinite-dimensional systems, we constructed a pair of SLOCC monotones which can be considered as extensions of the Schmidt rank to infinite-dimensional spaces. By means of these monotones we showed that states with polynomially-damped Schmidt coefficients belong to a higher rank of entanglement class than other states in terms of single-copy SLOCC convertibility.
In the last Section \[inhibition law\] we showed that arbitrary finitely many copies of exponentially-damped states cannot be converted to even a single copy of polynomially-damped states. Since such differences of classes do not exist in the finite-dimensional setting, the SLOCC classification of infinite-dimensional states has a much richer structure than for finite-dimensional ones. Therefore, these new features of entanglement have the potential to produce new quantum information protocols which are impossible for finite-dimensional systems. Finally, we stress that in infinite-dimensional systems, there remain important problems that are yet to be solved even for the simplest bipartite pure states.
MO is grateful to Professor M. Ozawa, Professor M.B. Plenio, Professor K. Matsumoto, Professor M. Hayashi, and Dr. A. Miyake for discussions. This work has been supported by the Asahi Glass Foundation, the Sumitomo Foundation, the Japan Society for the Promotion of Science, the Japan Scholarship Foundation, the Japan Science and Technology Agency, and the Special Coordination Funds for Promoting Science and Technology.
[000]{} A. Einstein, B. Podolsky, and N. Rosen, [*Phys. Rev.*]{} [**47**]{}, 777 (1935); J.S. Bell, [*Physics*]{} [**1**]{}, 195 (1964); J.F. Clauser, M.A. Horne, A. Shimony, and R.A. Holt, [*Phys. Rev. Lett.*]{} [**23**]{}, 880 (1969); R.F. Werner, [*Phys. Rev. A*]{} [**40**]{}, 4277 (1989).
A.K. Ekert, [*Phys. Rev. Lett.*]{} [**67**]{}, 661 (1991); P.W. Shor, in [*Proc. 35th Annual Symposium on Foundations of Computer Science*]{} (IEEE Computer Society Press, 1994), pp. 124-134.
C.H. Bennett and S.J. Wiesner, [*Phys. Rev. Lett.*]{} [**69**]{}, 2881 (1992); C.H. Bennett, G. Brassard, C. Crepeau, R. Jozsa, A. Peres, and W.K. Wootters, [*Phys. Rev. Lett.*]{} [**70**]{}, 1895 (1993).
J. von Neumann, [*Mathematical Foundations of Quantum Mechanics*]{} (Princeton University Press, Princeton, New Jersey, 1955).
M. Reed, B. Simon [*Functional Analysis (Methods of Modern Mathematical Physics)*]{} (Academic Press, 1980).
C.H. Bennett, D.P. DiVincenzo, J.A. Smolin and W.K. Wootters, [*Phys. Rev. A*]{} [**54**]{}, 3824 (1996); C.H. Bennett, G. Brassard, S. Popescu, B. Schumacher, J.A.Smolin and W.K. Wootters, [*Phys. Rev. Lett.*]{} [**76**]{}, 722 (1996); C.H. Bennett, H.J. Berstein, S. Popescu and B.Schumacher, [*Phys. Rev. A*]{} [**53**]{}, 2046 (1996).
M.A. Nielsen, [*Phys. Rev. Lett.*]{} [**83**]{}, 436 (1999).
G. Vidal, [*Phys. Rev. Lett.*]{} [**83**]{}, 1046 (1999).
G. Vidal [*J. Mod. Opt.*]{} [**47**]{}, 355 (2000).
L.-M. Duan, G. Giedke, J.I. Cirac, and P. Zoller, [*Phys. Rev. Lett.*]{} [**84**]{}, 2722 (2000); R. Simon, [*Phys. Rev. Lett.*]{} [**84**]{}, 2726 (2000); G. Giedke, B. Kraus, M. Lewenstein, and J.I. Cirac, [*Phys. Rev. A*]{} [**64**]{}, 052303 (2001).
G. Giedke, J. Eisert, J.I. Cirac, and M.B. Plenio, [*Quant. Inf. Comp.*]{} [**3**]{}, 211 (2003); G. Giedke, M.M. Wolf, O. Kruger, R.F. Werner, and J.I. Cirac, [*Phys. Rev. Lett.*]{} [**91**]{}, 107901 (2003); M.M. Wolf, G. Giedke, O. Kruger, R.F. Werner, and J.I. Cirac, quant-ph/0306177 (2003).
G. Giedke and J.I. Cirac, [*Phys. Rev. A*]{} [**66**]{}, 032316 (2002); J. Fiurasek, [*Phys. Rev. Lett.*]{} [**89**]{}, 137904 (2002); J. Eisert, S. Scheel, and M.B. Plenio, [*Phys. Rev. Lett.*]{} [**89**]{}, 137903 (2002); J. Eisert, D. Browne, S. Scheel, and M.B. Plenio, [*Ann. Phys.*]{} (NY) [**311**]{}, 431 (2004).
J. Eisert, C. Simon, and M.B. Plenio, [*J. Phys. A*]{} [**35**]{}, 3911 (2002); M. Keyl, D. Schlingemann, and R.F. Werner, [*Quant. Inf. Comp.*]{} [**3**]{}, 281 (2003); M.M. Wolf, G. Giedke, O. Krueger, R.F. Werner, and J.I. Cirac, [*Phys. Rev. A*]{} [**69**]{}, 052320 (2004).
V. Vedral, M.B. Plenio, M.A. Rippin, and P.L. Knight, [*Phys. Rev. Lett.*]{} [**78**]{}, 2275 (1997); E.M. Rains, [*IEEE Trans. Inf. Theory*]{} [**47**]{}, 2921 (2001).
M.J. Donald, M. Horodecki, O. Rudolph, [*J. Math. Phys.*]{} [**43**]{}, 4252 (2002).
H.-K. Lo and S. Popescu, [*Phys. Rev. A*]{} [**63**]{}, 022301 (2001).
A.N. Kolmogorov and S.V. Fomin, [*Introductory Real Analysis*]{} (Dover Publications, Inc., 1975).
R. Bhatia, [*Matrix Analysis*]{} (Springer-Verlag, New York, 1997).
G.M. D’Ariano and M.F. Sacchi, [*Phys. Rev. A*]{} [**67**]{}, 042312 (2003).
A.S. Markus, [*Russian Math. Surveys*]{} [**19**]{}, 91 (1964).
\[Lo-Popescu section\]
In this appendix and the next, as preparation for the proofs of Nielsen’s and Vidal’s theorems in infinite-dimensional systems, we show how the basic theorems about LOCC and majorization [@majorization; @vidal] can be extended to infinite-dimensional systems.
First, we extend the concepts of Schmidt decomposition and Schmidt coefficients to infinite-dimensional systems:
[(Schmidt decomposition)]{} For any ${{\left \vert}\Psi {\right \rangle}} \in {\mathcal{H}}= {\mathcal{H}}_A \otimes {\mathcal{H}}_B$, there exist orthonormal sets ([*but not necessarily basis sets*]{}) $\{
{{\left \vert}e_i {\right \rangle}} \} _{i=1}^{\infty} $ and $\{ {{\left \vert}f_i {\right \rangle}} \}
_{i=1}^{\infty} $ of ${\mathcal{H}}_A$ and ${\mathcal{H}}_B$, respectively, such that $$\label{Schmidt inf}
{{\left \vert}\Psi {\right \rangle}} = \sum _{i=1}^{\infty} \sqrt{\lambda _i} {{\left \vert}e_i {\right \rangle}}
\otimes {{\left \vert}f_i {\right \rangle}} \:,$$ where $\lambda _i \ge 0$, $\lambda _i \ge \lambda _{i+1} $ and $\sum _{i=1}^{\infty} \lambda _i = 1$. The representation of a state ${{\left \vert}\Psi {\right \rangle}}$ in the form of Eq.(\[Schmidt inf\]) is called a Schmidt decomposition and $\{ \lambda _i \} _{i=1}^{\infty}$ are called Schmidt coefficients in infinite-dimensional systems.
We use the singular value decomposition in infinite-dimensional systems, given as follows: for a compact operator $M$ from ${\mathcal{H}}_B$ to ${\mathcal{H}}_A$, there exist orthonormal sets (but not necessarily basis sets) $\{ {{\left \vert}e_i {\right \rangle}} \}_{i=1}^{\infty} \subset {\mathcal{H}}_A$ and $\{ {{\left \vert}f_i {\right \rangle}} \}_{i=1}^{\infty} \subset {\mathcal{H}}_B$ and positive real numbers $\{ \lambda _i \}_{i=1}^{\infty}$ with $\sqrt{\lambda _n} \rightarrow 0$ such that $$\label{Schmidt eq}
M = \sum _{i=1}^{\infty} \sqrt{\lambda _i}{{\left \vert}e _i {\right \rangle}} {{\left \langle}f _i {\right \vert}},$$ where the above sum converges in the operator norm [@functional; @analysis]. In particular, if $M$ is a Hilbert-Schmidt class operator, $\{ \sqrt{\lambda _i } \}_{i=1}^{\infty}$ satisfy $\sum
_{i=1}^{\infty} \lambda _i = (\| M \|_2)^2 \stackrel{\rm def}{=}
{\mathrm{tr}}M^{\dagger}M$, where $\| \cdot \|_2$ is the Hilbert-Schmidt norm [@functional; @analysis]. Thus, we derive Eq.(\[Schmidt inf\]) from Eq.(\[Schmidt eq\]), because the linear map ${{\left \vert}e_i {\right \rangle}} {{\left \langle}f_j {\right \vert}} \mapsto {{\left \vert}e_i {\right \rangle}} \otimes {{\left \vert}f_j {\right \rangle}}$ gives an isomorphism from the Hilbert-Schmidt space $\mathfrak{C} _2({\mathcal{H}}_A, {\mathcal{H}}_B)$ (the Hilbert space of all Hilbert-Schmidt class operators between ${\mathcal{H}}_A$ and ${\mathcal{H}}_B$, with the inner product $\left(M |N \right) \stackrel{\rm def}{=} {\mathrm{tr}}M^{\dagger} N $) to the Hilbert space ${\mathcal{H}}= {\mathcal{H}}_A \otimes {\mathcal{H}}_B$ [@functional; @analysis]. $\square$
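Numerically, this isomorphism amounts to the statement that, in a finite truncation, the Schmidt coefficients of a bipartite pure state are the squared singular values of its amplitude matrix $M$, and coincide with the spectrum of the reduced density matrix $M M^{\dagger}$. A small sketch with a random state (dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)

# A bipartite pure state |Psi> = sum_{ij} M_ij |i>|j> is encoded by its
# amplitude matrix M; normalize so that <Psi|Psi> = tr(M^† M) = 1.
d = 8
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
M /= np.linalg.norm(M)                     # Hilbert-Schmidt (Frobenius) norm

sv = np.linalg.svd(M, compute_uv=False)    # singular values, descending
schmidt = sv**2                            # Schmidt coefficients
assert abs(schmidt.sum() - 1.0) < 1e-12    # normalization
assert np.all(np.diff(schmidt) <= 0)       # decreasing order

# The reduced density matrix rho_A = M M^† has the same spectrum.
eig = np.sort(np.linalg.eigvalsh(M @ M.conj().T))[::-1]
assert np.allclose(eig, schmidt)
```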
In finite $d$-dimensional bipartite systems, the Schmidt decomposition of a state ${{\left \vert}\psi {\right \rangle}}$ is given by $$\label{Schmidt}
{{\left \vert}\psi {\right \rangle}} = \sum _{i=1}^{d} \sqrt{\lambda _i} {{\left \vert}e_i {\right \rangle}}
\otimes {{\left \vert}f_i {\right \rangle}}\;,$$ where $\{ {{\left \vert}e_i {\right \rangle}} \} _{i=1}^{d}$ and $\{ {{\left \vert}f_i {\right \rangle}} \}
_{i=1}^{d}$ are basis sets. Therefore, convertibility of states under local [*unitary*]{} operations is determined by the Schmidt coefficients $\{ \lambda _i \} _{i=1}^{d}$: two states are convertible to each other under local unitary operations if and only if they have the same Schmidt coefficients. In infinite-dimensional systems, the Schmidt coefficients determine convertibility of states under local [*partial isometries*]{} instead of local unitary operations. That is, if the Schmidt coefficients of $ {{\left \vert}\Psi {\right \rangle}}$ and ${{\left \vert}\Phi {\right \rangle}}$ are the same, then there exist local partial isometries $U_A$ and $U_B$ such that ${{\left \vert}\Psi {\right \rangle}} = U_A \otimes U_B {{\left \vert}\Phi {\right \rangle}}$.
A partial isometry is an operator which acts as a unitary between two subspaces (its initial and final spaces). If we had defined the Schmidt coefficients as a sequence that also includes the dimension of the kernel of the reduced density matrix (of the given state), the Schmidt coefficients would indicate convertibility under local unitary operations. However, to develop the theory of LOCC and SLOCC convertibility (which includes local partial isometries) for infinite-dimensional systems, the former definition is more suitable than the latter, so we take the definition of Eq. [\[Schmidt inf\]]{}. This is because states are convertible to each other by LOCC if and only if they are convertible to each other by local partial isometries (a fact that follows from Theorem \[infinite Nielsen\]); moreover, there exist pairs of states which are convertible to each other by local partial isometries, but not by local unitaries. The following example satisfies such a condition. Suppose states ${{\left \vert}\Psi {\right \rangle}}$ and ${{\left \vert}\Phi {\right \rangle}}$ on ${\mathcal{H}}_A
\otimes {\mathcal{H}}_B$ are defined as $$\begin{aligned}
{{\left \vert}\Psi {\right \rangle}} &=& \sum _{i=1}^{\infty} \sqrt{\lambda _i} {{\left \vert}e_i {\right \rangle}}
\otimes {{\left \vert}f_i {\right \rangle}} \label{def of psi} \\
{{\left \vert}\Phi {\right \rangle}} &=& \sum _{i=1}^{\infty} \sqrt{\lambda _i} {{\left \vert}e_{2i} {\right \rangle}}
\otimes {{\left \vert}f_{2i} {\right \rangle}}, \label{def of phi}\end{aligned}$$ where $\{ {{\left \vert}e_i {\right \rangle}} \}_{i=1}^{\infty}$ and $\{ {{\left \vert}f_i {\right \rangle}}
\}_{i=1}^{\infty}$ are orthonormal basis sets of ${\mathcal{H}}_A$ and ${\mathcal{H}}_B$, respectively. In this case, ${{\left \vert}\Phi {\right \rangle}}$ and ${{\left \vert}\Psi {\right \rangle}}$ are convertible to each other by LOCC: ${{\left \vert}\Phi {\right \rangle}}$ can be converted to ${{\left \vert}\Psi {\right \rangle}}$ by the local partial isometry $\sum
_{i=1}^{\infty} {{\left \vert}e_{i} {\right \rangle}}{{\left \langle}e_{2i} {\right \vert}} \otimes
{{\left \vert}f_i {\right \rangle}}{{\left \langle}f_{2i} {\right \vert}}$, and ${{\left \vert}\Psi {\right \rangle}}$ can be converted to ${{\left \vert}\Phi {\right \rangle}}$ by the local isometry $\sum _{i=1}^{\infty}
{{\left \vert}e_{2i} {\right \rangle}}{{\left \langle}e_{i} {\right \vert}} \otimes {{\left \vert}f_{2i} {\right \rangle}}{{\left \langle}f_{i} {\right \vert}}$. However, they are not convertible by local unitaries. This is because the subspace spanned by $\{ {{\left \vert}e_{2i} {\right \rangle}} \}_{i=1}^{\infty}$ and the subspace spanned by $\{ {{\left \vert}f_{2i} {\right \rangle}} \}_{i=1}^{\infty}$ would have to be mapped onto ${\mathcal{H}}_A$ and ${\mathcal{H}}_B$, respectively; that is, a proper subspace would have to be mapped onto the whole space. This is obviously impossible for unitary operators, which are bijections and always map the whole space onto the whole space.
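This example is easy to verify in a finite truncation. A sketch (dimensions and coefficients chosen arbitrarily for illustration) constructs the local partial isometry $\sum_i |e_i\rangle\langle e_{2i}|$ and checks both the conversion and the fact that it is a proper, non-unitary partial isometry:

```python
import numpy as np

d = 4
lam = np.arange(d, 0, -1, dtype=float)
lam /= lam.sum()                      # decreasing Schmidt coefficients

dim = 2 * d                           # truncated local dimension
e = np.eye(dim)                       # computational basis |1>,...,|2d>

def ket(indices):
    """|chi> = sum_i sqrt(lam_i) |a_i> (x) |a_i> on the given local indices."""
    v = np.zeros(dim * dim)
    for i, idx in enumerate(indices):
        v += np.sqrt(lam[i]) * np.kron(e[idx], e[idx])
    return v

psi = ket(range(d))                   # support on |1>,...,|d>
phi = ket(range(1, dim, 2))           # support on |2>,|4>,...,|2d>

# Local partial isometry V = sum_i |e_i><e_{2i}| (applied on each side).
V = sum(np.outer(e[i], e[2 * i + 1]) for i in range(d))
assert np.allclose(np.kron(V, V) @ phi, psi)

# V is a genuine partial isometry, not a unitary: V^† V is a proper projection.
P = V.T @ V
assert np.allclose(P @ P, P) and not np.allclose(P, np.eye(dim))
```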
Actually, from a physical point of view, convertibility under local partial isometries can be understood as meaning that we may need additional independent ancilla systems for each subspace in order to convert ${{\left \vert}\Psi {\right \rangle}}$ to ${{\left \vert}\Phi {\right \rangle}}$. For example, ${{\left \vert}\Phi {\right \rangle}}$ defined in Eq.(\[def of phi\]) can be transformed to ${{\left \vert}\Psi {\right \rangle}}$ defined in Eq.(\[def of psi\]) by the following protocol: First, attach one-qubit ancilla systems ${\mathcal{H}}_{A'}$ and ${\mathcal{H}}_{B'}$ to both local systems ${\mathcal{H}}_A$ and ${\mathcal{H}}_B$, and prepare the ancilla systems in ${{\left \vert}0 {\right \rangle}}_{A'}$ and ${{\left \vert}0 {\right \rangle}}_{B'}$. Second, apply a local unitary transformation $U_{AA'} \otimes U_{BB'}$ to the state ${{\left \vert}\Phi {\right \rangle}}_{AB} \otimes {{\left \vert}0 {\right \rangle}}_{A'} \otimes
{{\left \vert}0 {\right \rangle}}_{B'}$, where $U_{AA'}$ and $U_{BB'}$ are unitary transformations on ${\mathcal{H}}_A \otimes {\mathcal{H}}_{A'}$ and ${\mathcal{H}}_B \otimes
{\mathcal{H}}_{B'}$ defined as $$\begin{aligned}
U_{AA'} &\stackrel{\rm def}{=}& \sum _{n=1}^{\infty} \big(
|n \rangle_{A} |0 \rangle _{A'} {}_{A} \langle 2n | {}_{A'} \langle 0| +
| 2n+1 \rangle _A |1 \rangle _{A'} {}_{A}\langle 2n+1| {}_{A'}\langle 0| \\
&\quad & \qquad + |2n \rangle_A |1 \rangle _{A'} {}_A \langle n|
{}_{A'}\langle 1| \big) \\
U_{BB'} &\stackrel{\rm def}{=}& \sum _{n=1}^{\infty} \big(
|n\rangle_{B} |0\rangle_{B'} {}_B \langle 2n| {}_{B'}\langle 0| +
| 2n+1 \rangle_B |1\rangle_{B'} {}_B\langle 2n+1| {}_{B'}\langle 0| \\
&\quad & \qquad + |2n\rangle_B |1\rangle_{B'} {}_B\langle n| {}_{B'}\langle 1| \big).\end{aligned}$$ (Strictly speaking, one further matrix element, $|1\rangle |1\rangle \langle 1| \langle 0|$ on each side, is needed to complete $U_{AA'}$ and $U_{BB'}$ to unitaries; it plays no role when acting on ${{\left \vert}\Phi {\right \rangle}}_{AB} \otimes {{\left \vert}0 {\right \rangle}}_{A'} \otimes {{\left \vert}0 {\right \rangle}}_{B'}$.) After this local unitary transformation, the state is changed to ${{\left \vert}\Psi {\right \rangle}} _{AB} \otimes {{\left \vert}0 {\right \rangle}}_{A'} \otimes {{\left \vert}0 {\right \rangle}}_{B'}$. Finally, by removing the ancilla systems ${\mathcal{H}}_{A'}$ and ${\mathcal{H}}_{B'}$, we obtain ${{\left \vert}\Psi {\right \rangle}}$ on the system ${\mathcal{H}}_A \otimes {\mathcal{H}}_B$.
For finite-dimensional systems, Lo and Popescu proved that if ${{\left \vert}\Psi {\right \rangle}}$ can be converted into ${{\left \vert}\Phi {\right \rangle}}$ by LOCC, then there exists a one-way LOCC protocol which consists of a local measurement on one of the local systems and a unitary operation on the other, conditioned on the result of the measurement [@lo-popescu]. This is called the Lo-Popescu theorem. Intuitively speaking, the Schmidt decomposition expresses a symmetry between the local subsystems, and the Lo-Popescu theorem is a reflection of this symmetry. Since, as we have seen, the Schmidt decomposition in infinite-dimensional systems is weaker than in finite-dimensional systems (indicating equivalence under partial isometries instead of unitary operations), the corresponding Lo-Popescu theorem is slightly modified as follows.
\[Lo-Popescu\] In the separable Hilbert space ${\mathcal{H}}_A \otimes {\mathcal{H}}_B$ , for any given state ${{\left \vert}\Psi {\right \rangle}}$ and bounded operator $M \in {\mathfrak{B}}({\mathcal{H}}_B)$ there exist a bounded operator $N \in {\mathfrak{B}}({\mathcal{H}}_A)$ and [*partial isometry*]{} $U \in {\mathfrak{B}}({\mathcal{H}}_B)$ which satisfy $ I \otimes M
{{\left \vert}\Psi {\right \rangle}} = N \otimes U {{\left \vert}\Psi {\right \rangle}}$.
Suppose ${{\left \vert}\Psi {\right \rangle}} = \sum _{i=1}^{\infty} \sqrt{\mu _i} {{\left \vert}a_i {\right \rangle}}
\otimes {{\left \vert}b_i {\right \rangle}}$. Define a partial isometry $U$ as $U= \sum
_{i=1}^{\infty} {{\left \vert}b_i {\right \rangle}} {{\left \langle}a_i {\right \vert}}$, then we have $$I \otimes M {{\left \vert}\Psi {\right \rangle}} =\sum _{i=1}^{\infty} \sqrt{\mu _i}
{{\left \vert}a_i {\right \rangle}} \otimes M {{\left \vert}b_i {\right \rangle}} \nonumber \;,$$ and $$MU\otimes I {{\left \vert}\Psi {\right \rangle}} =\sum _{i=1}^{\infty} \sqrt{\mu _i} M
{{\left \vert}b_i {\right \rangle}} \otimes {{\left \vert}b_i {\right \rangle}}\nonumber \;.$$ Thus, we obtain $$\begin{aligned}
{\mathrm{tr}}\; _A I \otimes M {{\left \vert}\Psi {\right \rangle}} {{\left \langle}\Psi {\right \vert}} I\otimes M^{\dagger} =
{\mathrm{tr}}\; _B MU \otimes I {{\left \vert}\Psi {\right \rangle}} {{\left \langle}\Psi {\right \vert}} U^{\dagger}
M^{\dagger} \otimes I\;.\nonumber\end{aligned}$$ By our definition of Schmidt decomposition, $\rho _A$ and $\rho
_B$ are partial isometry equivalent for any state ${{\left \vert}\Psi {\right \rangle}}$. Therefore, there exist partial isometries $U_A$ and $U_B$ which satisfy $$U_A \otimes U_B (MU \otimes I) {{\left \vert}\Psi {\right \rangle}} = I \otimes M
{{\left \vert}\Psi {\right \rangle}}\nonumber\;.$$ Defining $N \stackrel{\rm def}{=} U_A MU$ and taking $U_B$ as the partial isometry on ${\mathcal{H}}_B$, the theorem is proven. $\square$
\[Majorization\]
In finite-dimensional systems, majorization is a pseudo partial ordering on the whole vector space [@Bhartia]. On the other hand, in infinite-dimensional systems, majorization can be defined only on a subset of the whole vector space. To accommodate our definition of Schmidt coefficients in infinite-dimensional systems, we define majorization on $$l _1 = \{ \{ x_i \} _{i=1}^{\infty} | \sum _{i=1}^{\infty} |x_i| <
\infty \}\;.$$ For mathematical simplicity, we only define majorization on $$\begin{aligned}
l_1 ^+ = \{ \{ x_i \} _{i=1}^{\infty } \in l_1 | x_i \ge 0,
\sharp \{ i | x_i=0 \} < \infty \nonumber \vee \ \sharp \{ i | x_i > 0 \} < \infty \}\;,\end{aligned}$$ where $\sharp$ denotes the cardinality of a set. In this case $l_1^+$ is a convex cone in $l_1$ and contains all permutations of sequences of Schmidt coefficients. Thus, $l_1^+$ is enough for our purposes, and we do not need to define majorization for all elements of $l_1$. Moreover, in the following definition of majorization we use the decreasing reordering of $x$. However, if $x$ is not in $l_1^+$, it can be impossible to rearrange the elements of $x$ in decreasing order, and we would need to extend the definition of majorization to sequences which are in $l_1$ but not in $l_1^+$. This is another reason why we define majorization only on $l_1^+$.
Now, we define majorization of Schmidt coefficients in infinite-dimensional systems as follows:
For any $x , y \in l_1^+$, $x \prec _{\omega} y $ (or $x$ is sub-majorized by $y$) is defined if and only if $$\sum _{i=1}^k x^{\downarrow} _i \le
\sum _{i=1}^k y^{\downarrow}_i\;,$$ for all $k \in \mathbb{N}$, where $x^{\downarrow} _i$ is given by $x^{\downarrow} _i = x_{P(i)}$, with $P$ a permutation of $\mathbb{N}$ such that $x^{\downarrow} _i \ge x^{\downarrow}
_{i+1}$, i.e., the decreasing reordering of $x$. Similarly, if $x$ and $y$ satisfy $$\label{super-majorization}
\sum _{i=k}^{\infty} x^{\downarrow} _i \ge \sum _{i=k}^{\infty}
y^{\downarrow} _i\;,$$ then we write $x \prec ^{\omega} y$ and say $x$ is super-majorized by $y$.
In addition to the sub-majorization or super-majorization condition (both lead to the same majorization condition), if $x$ and $y$ satisfy the normalization condition $$\sum _{i=1}^{\infty} x^{\downarrow} _i
= \sum_{i=1}^{\infty} y^{\downarrow} _i\;,$$ then we write $x \prec y$ and say $x$ is majorized by $y$.
The sub-majorization condition does not require the normalization condition, but it is easily proven that $\mathbf{x} \prec
_{\omega} \mathbf{y} $ is equivalent to $$\begin{aligned}
\sum _{j=1}^{\infty} (\mathbf{x_j^{\downarrow} } -t)^+ \le
\sum _{j=1}^{\infty} (\mathbf{y_j^{\downarrow} } -t)^+\;,\end{aligned}$$ for all $t \ge 0$, where $z^+ = \max (z,0) $ is the positive part of any real number.
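The equivalence of the partial-sum and positive-part characterizations of sub-majorization is easy to exercise on finite sequences (a sketch; the helper names are ours, and $t$ is scanned over a grid rather than all of $[0,\infty)$):

```python
def submajorized(x, y):
    """Partial-sum test: x sub-majorized by y iff every sum of the k largest
    entries of x is bounded by the corresponding sum for y."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    cx = cy = 0.0
    for k in range(max(len(xs), len(ys))):
        cx += xs[k] if k < len(xs) else 0.0
        cy += ys[k] if k < len(ys) else 0.0
        if cx > cy + 1e-12:
            return False
    return True

def submajorized_t(x, y, ts):
    """Equivalent test: sum_j (x_j - t)^+ <= sum_j (y_j - t)^+ for t >= 0."""
    def pos(z):
        return max(z, 0.0)
    return all(sum(pos(a - t) for a in x) <= sum(pos(b - t) for b in y) + 1e-12
               for t in ts)

x = [0.3, 0.3, 0.2, 0.2]          # flatter distribution
y = [0.6, 0.2, 0.1, 0.1]          # more concentrated: y sub-majorizes x
ts = [i / 100 for i in range(101)]
assert submajorized(x, y) and submajorized_t(x, y, ts)
assert not submajorized(y, x) and not submajorized_t(y, x, ts)
```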
Uhlmann’s theorem relates operations on quantum states to majorization conditions. This theorem is one of the essential tools for proving Nielsen’s theorem on LOCC convertibility. To prove Uhlmann’s theorem in infinite-dimensional systems, we define doubly stochastic matrices in infinite-dimensional systems, in analogy with the finite-dimensional case. For every double sequence $\{ d_{ij} \}_{i,j = 1}^{\infty}$ of nonnegative numbers which satisfies $\sum_{i=1}^{\infty} d_{ij} =1$ for all $j$ and $\sum
_{j=1}^{\infty} d_{ij} = 1$ for all $i$, we can define a bounded linear operator $D \in {\mathfrak{B}}(l_1)$ by $$\begin{aligned}
D \mathbf{x} = \{ \sum_{j=1}^{\infty} d_{ij}
\mathbf{x_j} \} _{i=1}^{\infty}\;,\end{aligned}$$ for all $\mathbf{x} \in l_1$. These operators are called doubly stochastic matrices on $l_1$. One easily sees that the operator norm of a doubly stochastic matrix is $1$.
Doubly stochastic matrices are related to majorization as follows: if $D$ is doubly stochastic, then, for all $\mathbf{x} \in
l_1^+$, we have $$\begin{aligned}
\label{ds}
D \mathbf{x} \prec \mathbf{x}\;,\end{aligned}$$ since $$\begin{aligned}
\sum_{i=1}^{\infty} [(Dx)_i - t ]^+ &=&
\sum_{i=1}^{\infty}
\Bigl[ \sum_{j=1}^{\infty} d_{ij} ( x_j - t ) \Bigr]^+ \nonumber\\
&\le& \sum_{i=1}^{\infty}
\sum_{j=1}^{\infty} d_{ij} ( x_j - t )^+ \nonumber\\
&=& \sum_{j=1}^{\infty} (x_j - t )^+\;,\end{aligned}$$ for any $t \ge 0$, due to the convexity of $( \cdot )^+$. On the other hand, $\sum_{i=1}^{\infty} (Dx)_i = \sum_{i=1}^{\infty} x_i$ is trivial. Thus, the necessary condition (the majorization condition) for double stochasticity is proven. We note that the sub-majorization part of Eq.(\[ds\]) remains valid under conditions on $D$ weaker than double stochasticity, for example for $D$ satisfying $\sum _{j=1}^{\infty} d_{ij} \le 1$.
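The implication $D\mathbf{x} \prec \mathbf{x}$ can be checked numerically on finite truncations, generating a doubly stochastic matrix as an average of permutation matrices (Birkhoff's theorem); a sketch with arbitrary test data:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_doubly_stochastic(n, mixing=20):
    """Average of random permutation matrices: doubly stochastic (Birkhoff)."""
    D = np.zeros((n, n))
    for _ in range(mixing):
        D += np.eye(n)[rng.permutation(n)]
    return D / mixing

def majorized(x, y):
    """x majorized by y: equal totals and dominated partial sums of the
    decreasing rearrangements."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    return (abs(xs.sum() - ys.sum()) < 1e-12
            and np.all(np.cumsum(xs) <= np.cumsum(ys) + 1e-12))

n = 6
x = rng.random(n)
D = random_doubly_stochastic(n)
# Row and column sums are 1, and mixing by D flattens x: Dx majorized by x.
assert np.allclose(D.sum(axis=0), 1.0) and np.allclose(D.sum(axis=1), 1.0)
assert majorized(D @ x, x)
```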
Now we are ready to extend Uhlmann’s theorem for infinite-dimensional systems:
\[Uhlmann\] If two density operators $\rho _1$ and $\rho _2$ on an infinite-dimensional separable Hilbert space ${\mathcal{H}}$ satisfy the relation $$\begin{aligned}
\rho _1 = \sum _{j=1}^{\infty} p_j U_j \rho _2 U_j^{\dagger}, \label{Uhlmann eq}\\
\qquad \sum _{j=1}^{\infty} p_j = 1 ,\end{aligned}$$ where the $U_j$ are partial isometries whose initial spaces include the closure of the range of $\rho _2$, $({\rm ker}\, U_j)^{\perp} \supset
\overline{{\rm Ran}\, \rho _2} $, then the non-zero eigenvalues of $\rho
_1$ are majorized by those of $\rho _2$.
Suppose $\rho _1 = \sum _{k=1}^{\infty} \mu _k {{\left \vert}e_k {\right \rangle}}
{{\left \langle}e_k {\right \vert}}$ and $\rho _2 = \sum _{i=1}^{\infty} \lambda _i {{\left \vert}i {\right \rangle}}
{{\left \langle}i {\right \vert}}$. Define a partial isometry as $V = \sum _{i=1}^{\infty}
{{\left \vert}i {\right \rangle}} {{\left \langle}e_i {\right \vert}}$, whose initial space is given by $\overline{\rm
Ran \rho _1}$, and whose final space is given by $\overline{\rm Ran
\rho _2}$. Then $U_j V$ is a partial isometry whose initial and final space are given by $\overline{\rm Ran \rho _1}$ and $U_j(
\overline{\rm Ran \rho _2} )$, respectively. Indeed, it is trivial that $U_j V$ is the zero operator on ${ \overline{\rm Ran \rho _1} }^{\perp}$. Now suppose ${{\left \vert}\Phi {\right \rangle}} \in \overline{\rm Ran
\rho _1}$, then we obtain $\| U_j V {{\left \vert}\Phi {\right \rangle}} \| = \| {{\left \vert}\Phi {\right \rangle}}
\|$ from the condition $V {{\left \vert}\Phi {\right \rangle}} \in \overline{\rm Ran \rho
_2}$.
Next, define the Fourier coefficients of $U_j {{\left \vert}i {\right \rangle}}$ as $u
_{ji}^h$, that is, $U_j {{\left \vert}i {\right \rangle}} = U_j V {{\left \vert}e_i {\right \rangle}} =\sum
_{h=1}^{\infty} u _{ji}^h {{\left \vert}e_h {\right \rangle}}$, and $$\label{normalization of u}
\sum _{h=1}^{\infty} | u _{ji}^h | ^2 =1.$$ Then, we can rewrite Eq.(\[Uhlmann eq\]) as $$\sum _{k=1}^{\infty} \mu _k {{\left \vert}e_k {\right \rangle}} {{\left \langle}e_k {\right \vert}} = \sum _{ij \in
\mathbb{N}^2} p_j \lambda _i (\sum _{h=1}^{\infty} u _{ji}^h {{\left \vert}e_h {\right \rangle}} )
(\sum _{l=1}^{\infty} {u^{\ast}} _{ji}^l {{\left \langle}e_l {\right \vert}} )\;.$$ Taking the inner product with ${{\left \vert}e_n {\right \rangle}}$ on both sides yields $$\mu _n = \sum _{ij \in \mathbb{N}^2} p_j \lambda _i | u _{ji}^n | ^2
= \sum _{i=1}^{\infty} \lambda _i \sum _{j=1}^{\infty} p_j
| u_{ji}^n | ^2\;.$$ We define $D _{ni}$ as $D _{ni} = \sum _{j=1}^{\infty} p_j |
u_{ji}^n | ^2$. Eq. (\[normalization of u\]) guarantees that $\sum
_{n=1}^{\infty} D _{ni} = 1$. Since $$\begin{aligned}
\sum _{i=1}^{\infty} | u_{ji}^n |^2 &=& \sum _{i=1}^{\infty}
| {{\left \langle}e_n {\right \vert}} U_j V {{\left \vert}e_i {\right \rangle}} |^2 = \| (U_j V)^{\dagger} {{\left \vert}e_n {\right \rangle}} \| ^2 \nonumber\\
&\le& \| U_j \|_{\rm op} ^2 \| V \|_{\rm op} ^2 \| {{\left \vert}e_n {\right \rangle}} \| ^2 =1\; ,\end{aligned}$$ where $\| \cdot \|_{\rm op}$ is the operator norm, we obtain $\sum
_{i=1}^{\infty} D _{ni} \le 1$. Therefore, using the necessary (majorization) condition for doubly stochastic matrices, which as noted after Eq.(\[ds\]) remains valid under this weaker condition on $D$, we derive $\mu = D \lambda
\prec \lambda$. The theorem is proven.
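A finite-dimensional numerical check of the theorem (illustrative, not part of the proof): mixing a diagonal density matrix with Haar-random unitaries, here generated by QR decomposition of complex Gaussian matrices, can only move the spectrum down in the majorization order.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6

def haar_unitary(d, rng):
    """Haar-distributed unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

lam = np.sort(rng.dirichlet(np.ones(d)))[::-1]    # spectrum of rho_2
rho2 = np.diag(lam)
p = rng.dirichlet(np.ones(3))                     # mixing probabilities
rho1 = np.zeros((d, d), complex)
for pj in p:
    U = haar_unitary(d, rng)
    rho1 += pj * U @ rho2 @ U.conj().T

mu = np.sort(np.linalg.eigvalsh(rho1))[::-1]      # spectrum of rho_1
# the eigenvalues of rho_1 are majorized by those of rho_2
assert np.all(np.cumsum(mu) <= np.cumsum(lam) + 1e-10)
print(f"largest eigenvalues: mu_1={mu[0]:.3f}, lam_1={lam[0]:.3f}")
```

In this unitary case the $U_j$ trivially satisfy the initial-space condition, so the check exercises the theorem's conclusion directly.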
\[Proof of Nielsen\]
In this appendix, based on Appendices A and B, we prove Nielsen’s theorem [@majorization] for infinite-dimensional systems.
Before giving the proof of Nielsen’s theorem on $\epsilon$-convertibility in infinite-dimensional systems, we first show that the necessity part of the original proof of Nielsen’s theorem [@majorization] extends directly to infinite-dimensional systems by means of the Lo-Popescu and Uhlmann theorems proven in the preceding appendices.
\[necessity Nielsen\] A necessary condition for the convertibility of an infinite-dimensional state ${{\left \vert}\Psi {\right \rangle}}$ to another state ${{\left \vert}\Phi {\right \rangle}}$ under LOCC operations is $\lambda \prec
\mu$, where $\lambda = \{ \lambda _i \} _{i=1}^{\infty}$ and $\mu =
\{ \mu _i \} _{i=1}^{\infty} $ are the sequences of Schmidt coefficients of the states ${{\left \vert}\Psi {\right \rangle}}$ and ${{\left \vert}\Phi {\right \rangle}}$, respectively.
Suppose ${{\left \vert}\Psi {\right \rangle}}$ can be converted to ${{\left \vert}\Phi {\right \rangle}}$ by LOCC. Then, by Lo-Popescu’s theorem, $p_m \rho _{\Phi} = M_m \rho _{\Psi}
M _m^{\dagger}$ for each $m$, where $\sum _{m=1}^{\infty } p_m = 1$, $\rho
_{\Psi} = {\mathrm{tr}}\; _B ( {{\left \vert}\Psi {\right \rangle}} {{\left \langle}\Psi {\right \vert}} )$ and $\rho _{\Phi}
={\mathrm{tr}}\; _B ( {{\left \vert}\Phi {\right \rangle}} {{\left \langle}\Phi {\right \vert}} )$. Then, following the same method as Nielsen’s original proof, we derive $\rho _{\Psi} = \sum
_{m=1}^{\infty } p_m U_m \rho _{\Phi } U_m^{\dagger }$, where $U_m$ is the partial isometry arising in the polar decomposition of $M_m \sqrt{\rho _{\Psi }}$. Since $\ker U_m ^{\perp } = \ker
(M_m \sqrt{\rho _{\Psi }})^{\perp } \supset \overline{\rm Ran \rho
_{\Psi } }$ [@functional; @analysis], by Uhlmann’s theorem we get $\lambda \prec \mu$. $\square$
This lemma establishes the necessity part of Nielsen’s theorem in infinite-dimensional systems. For the sufficiency part, we make full use of the properties of $\epsilon$-convertibility.
[(Theorem \[epsilon Nielsen\])]{}
Only if part: If ${{\left \vert}\Psi {\right \rangle}}$ is $\epsilon$-convertible to ${{\left \vert}\Phi {\right \rangle}}$ for any $\epsilon > 0$, there exists a sequence of states $\{ {{\left \vert}\Phi' _n {\right \rangle}} \} _{n=1}^{\infty}$ which strongly converges to ${{\left \vert}\Phi {\right \rangle}}$ (for pure states the topology of the trace norm is stronger than the strong topology of Hilbert space). Then, from Lemma \[necessity Nielsen\], $\lambda \prec \mu ^{'} _n$ for all $n \in \mathbb{N}$, where $\mu ^{'} _n$ and $\lambda$ are the Schmidt coefficients of ${{\left \vert}\Phi' _n {\right \rangle}}$ and ${{\left \vert}\Psi {\right \rangle}}$, respectively. Because Schmidt coefficients are continuous in the strong topology, $\sum _{i=1}^{k} \mu ^{'} _{n,i} \ge \sum _{i=1}^{k} \lambda _i$ implies, in the limit $n \rightarrow \infty$, $\sum _{i=1}^{k} \mu _i \ge \sum _{i=1}^{k} \lambda _i$ for every $k$, where the $\mu _i$ are the Schmidt coefficients of ${{\left \vert}\Phi {\right \rangle}}$.
If part: When the Schmidt ranks (the number of non-zero Schmidt coefficients) of both ${{\left \vert}\Psi {\right \rangle}}$ and ${{\left \vert}\Phi {\right \rangle}}$ are finite, the proof is identical to the one for finite-dimensional systems. By the definition of the Schmidt decomposition, we can assume without loss of generality that ${{\left \vert}\Psi {\right \rangle}}$ and ${{\left \vert}\Phi {\right \rangle}}$ have the same Schmidt basis. In what follows, we divide the proof for the remaining cases into two parts: the case where only ${{\left \vert}\Phi {\right \rangle}}$ has finite Schmidt rank, and the case where both states have infinite Schmidt rank.
1\) The case that ${{\left \vert}\Psi {\right \rangle}}$ has infinite Schmidt rank and ${{\left \vert}\Phi {\right \rangle}}$ has finite Schmidt rank:\
Suppose the Schmidt rank of ${{\left \vert}\Phi {\right \rangle}}$ is $N$. In what follows, we take $\epsilon$ arbitrary, but satisfying $\epsilon < \mu _N$. For any Schmidt coefficients $\{ \lambda
_i \}_{i=1}^{\infty}$, we have $\lim _{n \rightarrow \infty} n
\lambda _n = 0$. Therefore, there exists an $N_1 (\epsilon ) $ such that $n \lambda _n < \epsilon /2$ for any $ n \ge N_1$. On the other hand, since $\sum _{i=1}^{\infty} \lambda _i =1$, there exists an $N_2 (\epsilon )$ such that $\sum _{i=n}^{\infty} \lambda
_i < \epsilon /2$ for any $ n \ge N_2$. Let $M = \max (N_1,
N_2, N)$; then we define $\{ {\mu '}_i \}$ as follows:
${\mu '}_i = \mu _i$ for $1 \le i \le N-1 $;\
${\mu '}_N = \mu _N - \left ((M-N)
\lambda _M + \sum _{n=M+1}^{\infty} \lambda _n \right)$ for $i=N$;\
${\mu '}_i = \lambda _M$ for $N+1 \le i \le M$;\
${\mu ' }_i = \lambda _i$ for $M+1 \le i $.
We define ${{\left \vert}\Phi ' {\right \rangle}}$ as ${{\left \vert}\Phi ' {\right \rangle}} = \sum _{i=1}^{\infty}
\sqrt{{\mu '}_i} {{\left \vert}i {\right \rangle}} \otimes {{\left \vert}i {\right \rangle}}$. Then, by definition, $\lambda \prec \mu^{'} \prec \mu $. Moreover, we obtain $$\begin{aligned}
\| {{\left \vert}\Phi {\right \rangle}} - {{\left \vert}\Phi ' {\right \rangle}} \| ^2 &=& \Bigl|(M-N)\lambda _M + \sum
_{n=M+1}^{\infty} \lambda _n \Bigr| ^2 \nonumber \\
&\le & \Bigl( \frac{\epsilon}{2} + \frac{\epsilon}{2} \Bigr)^2 = \epsilon ^2\;,\end{aligned}$$ since $(M-N)\lambda _M \le M \lambda _M < \epsilon /2$ and $\sum _{n=M+1}^{\infty} \lambda _n < \epsilon /2$ by the choice of $M$. Therefore, in any neighborhood of ${{\left \vert}\Phi {\right \rangle}}$, we can find a ${{\left \vert}\Phi' {\right \rangle}}$ such that ${{\left \vert}\Psi {\right \rangle}} \rightarrow {{\left \vert}\Phi' {\right \rangle}}$.
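The construction of $\{{\mu '}_i\}$ can be exercised numerically. In the sketch below, the geometric sequence $\lambda_i = (1-r)r^{i-1}$ and the rank-3 target $\mu = (0.5, 0.3, 0.2)$ are illustrative choices, not values from the text:

```python
import numpy as np

r, eps = 0.5, 1e-3
lam_i = lambda i: (1 - r) * r ** (i - 1)   # geometric lambda_i, infinite rank
tail = lambda n: r ** (n - 1)               # sum_{i >= n} lambda_i
mu = np.array([0.5, 0.3, 0.2]); N = len(mu) # finite-rank target, eps < mu_N

N1 = next(n for n in range(2, 200) if n * lam_i(n) < eps / 2)
N2 = next(n for n in range(1, 200) if tail(n) < eps / 2)
M = max(N1, N2, N)

T = 60                                       # truncation for partial sums
lam = np.array([lam_i(i) for i in range(1, T + 1)])
correction = (M - N) * lam_i(M) + tail(M + 1)
mu_p = lam.copy()                            # mu'_i = lambda_i for i > M
mu_p[:N - 1] = mu[:N - 1]                    # mu'_i = mu_i for i <= N-1
mu_p[N - 1] = mu[N - 1] - correction         # mu'_N
mu_p[N:M] = lam_i(M)                         # mu'_i = lambda_M in between
mu_full = np.concatenate([mu, np.zeros(T - N)])

cs = lambda v: np.cumsum(np.sort(v)[::-1])   # sorted partial sums
assert correction < eps                      # closeness of Phi' to Phi
assert np.all(cs(lam) <= cs(mu_p) + 1e-12)   # lambda ≺ mu'
assert np.all(cs(mu_p) <= cs(mu_full) + 1e-12)  # mu' ≺ mu
print(N1, N2, M)
```

The assertions mirror the interleaving $\lambda \prec \mu' \prec \mu$ and the $\epsilon$-closeness bound derived above.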
2\) The case that the Schmidt ranks of ${{\left \vert}\Psi {\right \rangle}}$ and ${{\left \vert}\Phi {\right \rangle}}$ are infinity:\
By an easy calculation we can show that for any $\epsilon$, there exists a natural number $N_1 (\epsilon )$ such that if ${{\left \vert}\Phi
' {\right \rangle}} = \sum _{i=1}^{\infty} \sqrt{{\mu '} _i} {{\left \vert}i {\right \rangle}} \otimes
{{\left \vert}i {\right \rangle}}$ satisfies $\mu ' _i= \mu _i$ for $i \le N_1 (\epsilon ) $, then $\| {{\left \vert}\Phi {\right \rangle}}{{\left \langle}\Phi {\right \vert}} - {{\left \vert}\Phi ' {\right \rangle}}{{\left \langle}\Phi' {\right \vert}} \|_{\rm
tr} < \epsilon $. Since $\sum _{i=1}^n \lambda _i < 1$ for any $n \in
\mathbb{N}$ and $\lim
_{n \rightarrow \infty} n \lambda _n = 0$, $$\lim _{N_2 \rightarrow \infty} \Bigl[\sum _{k=1}^{N_2} \lambda _k
- (N_2 - N_1 )\lambda _{N_2} \Bigr] = 1\;.$$ Thus, there exists a natural number $N_2 (\epsilon ) \ge N_1
(\epsilon ) + 1 $ such that $$\label{eq N_2}
\sum _{k=1}^{N_2 (\epsilon )} \lambda _k
- (N_2 - N _1 ) \lambda _{N_2}
\ge \sum _{i=1}^{N_1 (\epsilon ) } \mu _i\;,$$ and $$\label{epsilon 1}
\sum _{k=1}^{N_2 (\epsilon ) - 1} \lambda _k
- (N_2 - N _1 -1) \lambda _{N_2 - 1}
\le \sum _{i=1}^{N_1 (\epsilon ) } \mu _i\;.$$ We examine the two cases $N_2(\epsilon ) = N_1(\epsilon ) +1$ and $N_2 (\epsilon ) > N_1(\epsilon ) + 1$ separately.
a\) The case that $N_2(\epsilon ) = N_1(\epsilon ) +1$: The inequalities (\[eq N\_2\]) and (\[epsilon 1\]) guarantee $\sum
_{k=1}^{N_1 (\epsilon )} \lambda _k = \sum _{k=1}^{N_1(\epsilon )}
\mu _k$. If we define ${\mu '}_k =
\mu _k$ for $1 \le k \le N_1(\epsilon )$ and ${\mu '}_k = \lambda _k$ for $k \ge N_1(\epsilon ) + 1$, then $\{ {\mu '}_i \}_{i=1}^{\infty}$ satisfies $\sum _{i=1}^k \lambda
_i \le \sum _{i=1}^k {\mu '}_i \le \sum _{i=1}^k \mu _i$ for all $k \in \mathbb{N}$.
b\) The case that $N_2 (\epsilon ) > N_1(\epsilon ) + 1$: We define $\delta$ as $$\begin{aligned}
\delta = \Big[ \sum _{i=1}^{N_1 (\epsilon )} \lambda _i
+ \sum _{k=N_1 (\epsilon) +1}^{N_2 -1} (\lambda _k - \lambda _{N_2 })
-\sum
_{i=1}^{N_1 (\epsilon ) } \mu _i \Big] /( N_2(\epsilon ) -
N_1(\epsilon ) -1) \:.\end{aligned}$$ Then, since $\delta \ge 0$, we can define ${\mu '}_k$ as the following,
${\mu '}_k = \mu _k$ for $1 \le k \le N_1 (\epsilon )$;\
${\mu '}_k = \lambda _{N_2(\epsilon )} + \delta$ for $N_1(\epsilon )+1 \le k \le N_2(\epsilon ) -1$;\
${\mu '}_k = \lambda _k$ for $N_2(\epsilon ) \le k$.
Then we have $$\begin{aligned}
\sum _{i=1}^{\infty} {\mu '}_i &=&
\sum _{i=1}^{N_1 (\epsilon )}
\mu _i + \sum _{k=N_1 (\epsilon )+1}^{N_2 (\epsilon )-1}
(\lambda_{N_2(\epsilon )} + \delta )
+\sum _{k=N_2(\epsilon )}^{\infty}
\lambda _k \nonumber\\
&=&1\;.\end{aligned}$$ Therefore, the $\{ \mu '_i \}_{i=1}^{\infty}$ are well-defined Schmidt coefficients.
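Case (b) can likewise be checked numerically; in this sketch, $\lambda_k = 0.4 \cdot 0.6^{k-1}$, $\mu_k = 0.5^{k}$, and $N_1 = 4$ are illustrative choices, not values from the text:

```python
import numpy as np

T = 80
lam = 0.4 * 0.6 ** np.arange(T)        # lambda_k, k = 1..T (truncated infinite rank)
mu  = 0.5 ** np.arange(1, T + 1)       # mu_k ; here lambda ≺ mu
assert np.all(np.cumsum(lam) <= np.cumsum(mu) + 1e-12)

N1 = 4
target = mu[:N1].sum()                 # sum of the first N1 mu's

# smallest N2 > N1 satisfying inequality (eq N_2)
g = lambda n: lam[:n].sum() - (n - N1) * lam[n - 1]
N2 = next(n for n in range(N1 + 1, T) if g(n) >= target)

delta = (g(N2) - target) / (N2 - N1 - 1)
assert delta >= 0

mu_p = lam.copy()                      # mu'_k = lambda_k for k >= N2
mu_p[:N1] = mu[:N1]                    # mu'_k = mu_k for k <= N1
mu_p[N1:N2 - 1] = lam[N2 - 1] + delta  # mu'_k = lambda_{N2} + delta in between

# interleaving: lambda ≺ mu' ≺ mu, as proven in the text
assert np.all(np.cumsum(lam) <= np.cumsum(mu_p) + 1e-12)
assert np.all(np.cumsum(mu_p) <= np.cumsum(mu) + 1e-12)
print(N2)
```

Because $\mu'$ agrees with $\mu$ on the first $N_1$ entries and with $\lambda$ from $N_2$ onward, the partial-sum checks reproduce the two claims shown below the construction.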
First, we show $\sum _{i=1}^{N} \mu '_i \le \sum _{i=1}^{N} \mu
_i$ for all $N $. Since this condition is trivial for $N \le N_1$ and $N_2 \le N$ by the definition of $\{ \mu' _i \}_{i=1}^{\infty}$, we only need to check it for $ N_1 + 1 \le N \le N_2 -
1$. Suppose there exists an $N'$ such that $N_1 +1 \le N' \le N_2 -1$ and $\sum _{i= N_1 +1}^{N'} \mu'_i > \sum _{i = N_1 +1}^{N'}\mu
_i$. Then, since $\mu'_i = \lambda _{N_2} + \delta$ for all $N_1
+1 \le i \le N_2 - 1$ and $\mu_i \ge \mu_{i+1} $, we can easily see $\mu' _{N'}
> \mu _{N'}$. That is, $\sum _{i= N_1 +1}^{N} \mu'_i > \sum _{i = N_1 +1}^{N}\mu
_i$ for all $N' \le N \le N _2 -1$, and we can conclude $\sum _{i
= N_1 +1}^{N_2 -1} \mu'_i > \sum _{i=N_1 +1}^{N_2 -1} \mu _i$, which is a contradiction. Therefore, for all $N_1 + 1 \le N \le N_2 -1 $, $\sum _{i = N_1 +1}^{N} \mu'_i \le \sum _{i=N_1 +1}^{N} \mu _i$.
Second, we show $\sum _{i=1}^N \lambda _i \le \sum _{i=1}^N \mu'
_i$ for all $N$. Since this condition is trivial for $N \le N_1$ and $N_2 \le N$, we only check for $N_1 +1 \le N \le N_2 -1$. In this case, we calculate $$\begin{aligned}
\sum _{k=1}^N {\mu '}_k - \sum _{k=1}^N \lambda _k &=& \sum
_{k=1}^{N_1} \mu _k + \sum _{k=N_1+1}^N (\lambda _{N_2} + \delta)
- \sum _{k=1}^N \lambda _k \nonumber \\ &=& \frac{N_2 - N -
1}{N_2 - N_1 - 1} [(\sum _{k=1}^{N_1} \mu _k - \sum _{k=1}^{N_1}
\lambda _k ) - \sum _{k=N_1 +1}^N \lambda _k ] \nonumber \\
&\ & + \frac{N-N_1}{N_2 - N_1 -1} \sum _{k=N+1}^{N_2-1} \lambda
_k\;. \label{epsilon 3}\end{aligned}$$ From Eq. (\[epsilon 1\]), we obtain $$\sum _{k=1}^{N_1} \lambda _k + \sum _{k=N_1 + 1}^{N_2 -2} (\lambda _k
- \lambda _{N_2 -1} ) \le
\sum _{k=1}^{N_1} \mu _k\;.$$ Thus, Eq.(\[epsilon 3\]) may be bounded by $$\begin{aligned}
& \quad & \sum _{k=1}^N \mu' _k - \sum _{k=1}^N \lambda _k \\
&\ge &\frac{N_2 - N -1 }{N_2 - N_1 -1} \left[ \sum _{k= N_1
+1}^{N_2 -2} (\lambda _k - \lambda _{N_2 -1}) - \sum _{k= N_1 +
1}^{N} \lambda _k \right] + \frac{N - N_1}{N_2 - N_1 -1} \sum _{k=
N+1}^{N_2 -1}
\lambda _k \\
&=& \sum _{k= N+1}^{N_2 -2} \lambda _k - (N_2 -N -2 )\lambda
_{N_2 -1} \\
&\ge& 0.\end{aligned}$$ Thus, for $N_1 +1 \le N \le N_2 -1$, we have $\sum _{k=1}^N \lambda _k
\le
\sum _{k=1}^N {\mu '}_k$. Finally, for any natural number $N$, we obtain $$
\sum _{k=1}^N \lambda _k \le \sum _{k=1}^N {\mu '}_k \le \sum
_{k=1}^N \mu _k\;.$$ We define ${{\left \vert}\Phi' {\right \rangle}}$ using the Schmidt coefficients $\{ \mu' _i
\}_{i=1}^{\infty}$. Then, since $\mu'_k = \lambda _k$ for all $k
\ge N_2(\epsilon )$, we can convert ${{\left \vert}\Psi {\right \rangle}} $ to ${{\left \vert}\Phi' {\right \rangle}}$ by means of the LOCC operation given by the original (finite-dimensional) Nielsen theorem [@majorization]. Moreover, since $\mu' _k = \mu _k$ for all $k \le N_1(\epsilon )$, ${{\left \vert}\Phi' {\right \rangle}}$ satisfies $\|
{{\left \vert}\Phi' {\right \rangle}}{{\left \langle}\Phi' {\right \vert}} - {{\left \vert}\Phi {\right \rangle}}{{\left \langle}\Phi {\right \vert}} \|_{\rm tr} <
\epsilon$. Therefore, in any neighborhood of ${{\left \vert}\Phi {\right \rangle}}$, we can find such a ${{\left \vert}\Phi' {\right \rangle}}$ with ${{\left \vert}\Psi {\right \rangle}} \rightarrow {{\left \vert}\Phi' {\right \rangle}}$.
$\square$
\[Proof of Vidal\] In this appendix, based on Appendices A, B, and C, we prove Vidal’s theorem [@vidal] in infinite-dimensional systems.
If part: The proof of Vidal’s theorem given in [@D'ariano] is the most suitable one to extend to infinite-dimensional systems. Suppose $\lambda \prec ^{\omega} p \mu$; then $\{ \nu _i \}
_{i=1}^{\infty}$ defined by the conditions $\nu _1 = 1-p(1-\mu_1)$ and $\nu _i = p \mu _i$ for $i \neq 1$ satisfies $\lambda \prec \nu$ and $p \mu \le \nu$. If we define the state ${{\left \vert}\Omega {\right \rangle}}$ as the state whose Schmidt coefficients are $\{
\nu _i \}$ and whose Schmidt basis is the same as that of ${{\left \vert}\Phi {\right \rangle}}$, then, by Nielsen’s theorem in infinite-dimensional systems, for any small $\epsilon >0$, ${{\left \vert}\Psi {\right \rangle}}$ can be transformed to ${{\left \vert}\Omega' {\right \rangle}}$ by LOCC with certainty, where $\|
{{\left \vert}\Omega' {\right \rangle}}{{\left \langle}\Omega' {\right \vert}} - {{\left \vert}\Omega {\right \rangle}}{{\left \langle}\Omega {\right \vert}} \| \le
\epsilon $. Then, ${{\left \vert}\Omega' {\right \rangle}}$ can also be transformed to ${{\left \vert}\Phi' {\right \rangle}}$ by the local measurement $E = \sqrt{p} \sum _{i=1}^{\infty}
\sqrt{\frac{\mu _i}{\nu _i}} {{\left \vert}i {\right \rangle}} {{\left \langle}i {\right \vert}}$ with probability $p$, where $\| {{\left \vert}\Phi' {\right \rangle}}{{\left \langle}\Phi' {\right \vert}} - {{\left \vert}\Phi {\right \rangle}}{{\left \langle}\Phi {\right \vert}} \|
_{\rm tr} \le \epsilon $.
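As an illustrative numerical check of this construction (the geometric Schmidt sequences and $p = 0.8$ are assumptions of the example, not values from the text):

```python
import numpy as np

T = 60
lam = 0.4 * 0.6 ** np.arange(T)     # Schmidt coefficients of |Psi> (illustrative)
mu  = 0.5 ** np.arange(1, T + 1)    # Schmidt coefficients of |Phi>
p = 0.8

# supermajorization lambda ≺^w p*mu: every tail of lambda dominates
# p times the corresponding tail of mu
tail = lambda v: v[::-1].cumsum()[::-1]
assert np.all(tail(lam) >= p * tail(mu) - 1e-12)

nu = p * mu
nu[0] = 1 - p * (1 - mu[0])         # nu_1 = 1 - p(1 - mu_1)

# nu is a non-increasing sequence with lambda ≺ nu and p*mu <= nu
assert np.all(np.diff(nu) <= 1e-15)
assert np.all(p * mu <= nu + 1e-15)
assert np.all(np.cumsum(lam) <= np.cumsum(nu) + 1e-12)
print(round(nu[0], 3))
```

The final assertion is the majorization condition that lets Nielsen's theorem drive the deterministic first step of the protocol.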
Only if part: First, since the first half of Theorem 2 of [@vidal] extends directly to infinite-dimensional systems, Vidal’s monotone $E_n({{\left \vert}\Psi {\right \rangle}}) = \sum _{i=n}^{\infty} \lambda
_i$ is monotonically non-increasing on average under LOCC. In other words, if ${{\left \vert}\Psi {\right \rangle}}$ can be transformed to ${{\left \vert}\Phi _i {\right \rangle}}$ with probability $p_i$ by LOCC, then $E_n({{\left \vert}\Psi {\right \rangle}}) \ge \sum
_{i=1}^{\infty} p_i E_n({{\left \vert}\Phi _i {\right \rangle}})$.
Suppose that for a pair of states ${{\left \vert}\Psi {\right \rangle}}$ and ${{\left \vert}\Phi {\right \rangle}}$ and any $\epsilon
_1 > 0$, there exists a ${{\left \vert}\Phi' {\right \rangle}}$ such that $\|
{{\left \vert}\Phi' {\right \rangle}} {{\left \langle}\Phi' {\right \vert}} - {{\left \vert}\Phi {\right \rangle}} {{\left \langle}\Phi {\right \vert}} \|_{\rm tr} <
\epsilon _1$ and ${{\left \vert}\Psi {\right \rangle}}$ can be converted to ${{\left \vert}\Phi' {\right \rangle}}$ by some SLOCC with probability $p' \ge p$, but that $\lambda \prec ^{\omega} p\mu$ does not hold; that is, $E_{k_1} ({{\left \vert}\Psi {\right \rangle}}) < p E_{k_1} ({{\left \vert}\Phi {\right \rangle}})$ for some $k_1$. By assumption, there exist a sequence of states ${{\left \vert}\Phi _n {\right \rangle}}$ and probabilities $p_n$ satisfying $\lim _{n
\rightarrow \infty} {{\left \vert}\Phi _n {\right \rangle}} = {{\left \vert}\Phi {\right \rangle}}$ and $p_n \ge p$ for all $n
\in \mathbb{N}$, together with a sequence of SLOCC operations transforming ${{\left \vert}\Psi {\right \rangle}}$ to ${{\left \vert}\Phi _n {\right \rangle}}$. Then, by the monotonicity of $E_k ({{\left \vert}\Psi {\right \rangle}})$, we have $E_k ({{\left \vert}\Psi {\right \rangle}})
\ge p_n E_k ({{\left \vert}\Phi _n {\right \rangle}})$ for all $k \in \mathbb{N}$. Since $1
- E_n ({{\left \vert}\Psi {\right \rangle}})$ is a finite sum of eigenvalues of the reduced density operator, $E_n({{\left \vert}\Psi {\right \rangle}})$ is continuous on the whole Hilbert space for each $n$. Thus, taking the limit of the inequality, we obtain $E_k ({{\left \vert}\Psi {\right \rangle}}) \ge p E_k ({{\left \vert}\Phi {\right \rangle}})$ for all $k \in \mathbb{N}$, which contradicts $E_{k_1} ({{\left \vert}\Psi {\right \rangle}}) < p E_{k_1} ({{\left \vert}\Phi {\right \rangle}})$. $\square$
[^1]: The major part of this work was done when MO was in Department of Physics, Graduate School of Science, The University of Tokyo.
|
---
abstract: 'We present the STAR measurement of transverse momentum spectra at mid-rapidity for identified particles in [$p+p$]{}collisions at [$\sqrt{s}=200$]{}GeV. These high-statistics data are ideal for comparison to existing leading-order and next-to-leading order (NLO) perturbative QCD calculations. NLO models have been successful in describing inclusive hadron production using parameterized fragmentation functions (FF) for quarks and gluons. However, in order to describe baryon spectra at NLO, knowledge of flavor-separated FF is essential. Such FF have recently been parameterized using data from the OPAL experiment and allow, for the first time, good agreement to be obtained between NLO calculations and identified baryon spectra from [$p+p$]{}collisions.'
author:
- |
Mark Heinz for the STAR Collaboration\
Yale University, Physics Department, WNSL,\
272 Whitney Ave, New Haven, CT 06520, USA
title: 'STAR identified particle measurements at high transverse momentum in p+p $\sqrt{s}=200$ GeV'
---
Introduction {#intro}
============
Perturbative QCD has proven to be successful in describing inclusive hadron production in elementary collisions. Within the theory’s range of applicability, calculations at next-to-leading order (NLO) have produced accurate predictions for transverse momentum spectra of inclusive hadrons at different energy scales [@Marco:SQM04]. With the new high-statistics proton-proton data at [$\sqrt{s}=200$]{}GeV collected by STAR, we can now extend the study to identified baryons and mesons up to [$p_{T}$]{}$\sim$9 GeV/c. Perturbative QCD calculations apply the factorization ansatz to calculate hadron production and rely on three ingredients. The first ingredient is the non-perturbative parton distribution functions (PDF), which are obtained from parameterizations of deep inelastic scattering data. They describe quantitatively how the partons share momentum within a nucleon. The second ingredient, which is perturbatively calculable, consists of the parton cross-section amplitude evaluated to LO or NLO using Feynman diagrams. The third ingredient consists of the non-perturbative fragmentation functions (FF) obtained from [$e^{+}+e^{-}$]{}collider data using quark-tagging algorithms. These parameterized functions are sufficiently well known for fragmenting light quarks, but less well known for fragmenting gluons and heavy quarks. Recently, Kniehl, Kramer and Pötter (KKP) have shown that FF are universal between [$e^{+}+e^{-}$]{}and [$p+p$]{}collisions [@KKP:01]. At leading order, we compare to string fragmentation models such as PYTHIA to investigate the dependence between hadrons and underlying parton-parton interactions [@Pythia]. In the string fragmentation approach, the production of baryons is intimately related to di-quark production from strings; a di-quark then combines with a quark to produce a baryon. In NLO calculations, the accuracy of a given baryon cross-section is based on the knowledge of that specific baryon fragmentation function (FF) extracted from [$e^{+}+e^{-}$]{}collisions.
Data Analysis {#analysis}
=============
The present data were reconstructed using the STAR detector system, which is described in more detail elsewhere [@STAR2]. The main detectors used in this analysis are the Time Projection Chamber (TPC) and the Time of Flight detector (TOF). A total of 14 million non-singly diffractive (NSD) events were triggered with the STAR beam-beam counters (BBC), requiring two coincident charged tracks at forward rapidity ($3.3 < |\eta| < 5.0$). By simulation it was determined that the trigger measured 87% of the 30.0$\pm$3.5 mb NSD cross-section. The offline primary vertex reconstruction was 76% efficient, which led to a total usable event sample of $7\times10^{6}$ events. Protons and pions in this analysis were identified using the TOF detector at [$p_{T}$]{}below 2.5 GeV/c and the TPC using the relativistic-rise dE/dx at higher [$p_{T}$]{}. Details of both methods are described in [@STAR:tof; @STAR:relrise]. At [$p_{T}$]{}$\sim
3$ GeV/c the pion dE/dx is about 10-20% higher than that of kaons and protons due to the relativistic rise, resulting in a few standard deviations of separation. The strange particles were identified from their weak decays to charged daughter particles. The following decay channels and the corresponding anti-particles were analyzed: $\mathrm{K^{0}_{S}} \rightarrow \pi^{+} + \pi^{-}$ (b.r. 68.6%), $\Lambda \rightarrow p + \pi^{-}$ (b.r. 63.9%). Particle identification of the daughters was achieved by requiring the TPC-measured dE/dx to fall within the 3$\sigma$-bands of the theoretical Bethe-Bloch parameterizations. Further background in the invariant mass was removed by applying topological cuts to the decay geometry.
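A minimal sketch of the invariant-mass step used for these weak-decay channels (illustrative only; the actual STAR reconstruction chain is far more involved, and the daughter momenta below are hypothetical):

```python
import numpy as np

M_PI = 0.13957  # charged-pion mass [GeV/c^2]

def inv_mass(p1, p2, m1=M_PI, m2=M_PI):
    """Invariant mass of a two-track candidate from the daughters'
    3-momenta (GeV/c) and assumed particle masses."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    e1 = np.sqrt(m1 ** 2 + p1 @ p1)
    e2 = np.sqrt(m2 ** 2 + p2 @ p2)
    ptot = p1 + p2
    return float(np.sqrt((e1 + e2) ** 2 - ptot @ ptot))

# two back-to-back pions carrying the K0_S two-body decay momentum
# (~0.206 GeV/c in the parent rest frame)
m = inv_mass([0.206, 0.0, 0.0], [-0.206, 0.0, 0.0])
print(round(m, 3))  # close to the K0_S mass of ~0.498 GeV/c^2
```

Real candidates with mis-assigned daughter masses or combinatorial pairings fall outside the mass peak, which is why the dE/dx and topological cuts described above suppress the background.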
Results
=======
Comparison to next-to-leading order
-----------------------------------
In Figure \[fig:nlo\] we compare our transverse momentum spectra to recent next-to-leading order calculations using two different fragmentation functions (FF). The earlier parameterizations were by Kniehl-Kramer-Poetter (KKP) for pions, kaons and protons and by deFlorian-Stratmann-Vogelsang (DSV) for [$\Lambda$ ]{}[@KKP:00; @DSV]. More recently, Albino-Kramer-Kniehl (AKK) [@AKK] have published FF based on the light quark-flavor tagged data from the OPAL Collaboration [@OPAL:00]. Clearly, these newer parameterizations improve the description of the baryon data considerably. In order to achieve this agreement with the data, the initial gluon to [$\Lambda$ ]{}fragmentation function is determined by fixing its shape to that of the proton and then varying the normalization for the best fit. A scaling factor of 3 with respect to the proton is necessary to achieve agreement with STAR data. This modified FF was then tested against the $\Lambda$ measurement from $p+\bar{p}$ collisions at $\sqrt{s} = 630$ GeV and agrees well [@AKK].
Baryon to meson ratios vs [$p_{T}$]{}
-------------------------------------
In order to further investigate the sensitivity of the baryon spectra to the fragmentation of gluons, we used a leading-order (LO) Monte Carlo generator, PYTHIA. PYTHIA 6.3 generates events by using $2\rightarrow2$ LO parton processes plus additional leading-log showers and multiple interactions. We define a “Gluon-jet" event as one where the underlying partons are g-g or g-q and a “Quark-jet" event as one where the underlying partons are q-q. According to the model default settings, the [$p+p$]{}events at our energy are dominated by gluon-jets (62%) with respect to quark-jets (38%). Figure \[fig:bmratio\] compares baryon-to-meson ratios to three different event types from PYTHIA. In both cases the overall ratio in the data is significantly larger at [$p_{T}$]{}$\sim$1-3 GeV/c than the PYTHIA result. In addition, this comparison shows that pure gluon-jet events produce a larger baryon-to-meson ratio than quark-jet events.
Transverse mass ([$m_{T}$]{}) scaling
-------------------------------------
Universal transverse mass scaling of particle spectra was previously seen in [$p+p$]{}collisions at lower ISR-energies [@Mtscaling]. We have compiled STAR identified particle spectra to investigate [$m_{T}$]{}-scaling. The particle spectra were arbitrarily normalized to pion spectra at [$m_{T}$]{}= 1 GeV. Interestingly we observe that a splitting occurs at $\sim$2 GeV and that the meson spectra are harder than the baryon spectra. We compared this result to PYTHIA simulations scaled in the same manner. We again observe that gluon jets will fragment very differently into baryons and mesons than quark jets. For gluon jets, there is a clear shape difference between baryons and mesons consistent with the data. For quark jets, the shape difference is modified by an additional dependency on the mass of the produced particle. This may be a further indication that we observe dominance of gluon jets in [$p+p$]{}collisions at RHIC energies.
Summary
=======
We have shown that the theoretical description of identified baryons and mesons in [$p+p$]{}collisions has recently improved thanks to new NLO calculations using light quark-flavor tagged fragmentation functions. Considerable uncertainties remain in the high-z ($p_{hadron}/p_{parton}$) range of the gluon FF for baryons. It appears that previous baryon-FF extracted from [$e^{+}+e^{-}$]{}data are inconsistent with STAR’s [$p+p$]{}data, indicating that RHIC is a valuable test of FF. Arbitrarily scaled [$m_{T}$]{} spectra for strange particles exhibit partial [$m_{T}$]{} scaling and confirm the dominance of gluon jets in [$p+p$]{}and therefore the importance of understanding gluon fragmentation.
[99]{} Slides:\
`http://indico.cern.ch/contributionDisplay.py?contribId=42&sessionId=8&confId=9499`
M. van Leeuwen for the STAR Collaboration, [*J. Phys. G: Nucl. Part. Phys.*]{} **31** (2005) S881
B. A. Kniehl, G. Kramer and B. Potter, [*Nucl. Phys.*]{} B [**597**]{} (2001) 337
T. Sjostrand and P. Z. Skands, [*Eur. Phys. J.*]{} C [**39**]{} (2005) 129
K. H. Ackermann et al. (STAR Collaboration), [*Nucl. Instrum. Meth.*]{} A [**499**]{} (2003) 624
B. A. Kniehl, G. Kramer and B. Potter, [*Nucl. Phys.*]{} B [**582**]{} (2000) 514
J. Adams et al. (STAR Collaboration), [*Phys. Lett.*]{} B [**616**]{} (2005) 8
M. Shao (STAR Collaboration), nucl-ex/0505026
J. Adams et al. (STAR Collaboration), [*Phys. Lett.*]{} B [**637**]{} (2006) 161
B. Abelev et al. (STAR Collaboration), [*Phys. Rev.*]{} C [**75**]{} (2007) 064901
D. de Florian, M. Stratmann and W. Vogelsang, [*Phys. Rev.*]{} D [**57**]{} (1998) 5811
S. Albino et al., [*Nucl. Phys.*]{} B [**734**]{} (2006) 50
G. Abbiendi [*et al.*]{} (OPAL Collaboration), [*Eur. Phys. J.*]{} C [**16**]{} (2000) 407
G. Gatof and C. Y. Wong, [*Phys. Rev.*]{} D [**46**]{} (1992) 997
|
---
abstract: 'We present a survey of diffuse O VI emission in the interstellar medium obtained with the [*Far Ultraviolet Spectroscopic Explorer (FUSE).*]{} Spanning 5.5 years of [[*FUSE*]{}]{} observations, from launch through 2004 December, our data set consists of 2925 exposures along 183 sight lines, including all of those with previously-published detections. The data were processed using an implementation of CalFUSE v3.1 modified to optimize the signal-to-noise ratio and velocity scale of spectra from an aperture-filling source. Of our 183 sight lines, 73 show O VI $\lambda 1032$ emission, 29 at $> 3 \sigma$ significance. Six of the $3 \sigma$ features have velocities $|v_{\rm LSR}| > 120$ [km s$^{-1}$]{}, while the others have $|v_{\rm LSR}| < 50$ [km s$^{-1}$]{}. Measured intensities range from 1800 to 9100 LU, with a median of 3300 LU. Combining our results with published absorption data, we find that an O VI-bearing interface in the local ISM yields an electron density $n_{\rm e}$ = 0.2–0.3 cm$^{-3}$ and a path length of 0.1 pc, while O VI-emitting regions associated with high-velocity clouds in the Galactic halo have densities an order of magnitude lower and path lengths two orders of magnitude longer. Though the O VI intensities along these sight lines are similar, the emission is produced by gas with very different properties.'
author:
- 'W. Van Dyke Dixon and Ravi Sankrit'
- Birgit Otte
bibliography:
- 'apjmnemonic.bib'
- 'myref.bib'
- 'h2ref.bib'
- 'osix.bib'
title: 'An Extended [*FUSE*]{} Survey of Diffuse O VI Emission in the Interstellar Medium$^1$'
---
Introduction
============
For gas in collisional ionization equilibrium, emission via the 1031.93 and 1037.62 Å resonance lines of the lithium-like O VI ion is the dominant cooling mechanism at temperatures of (1–5$)
\times 10^5$ K [@Sutherland:Dopita93]. Gas cools rapidly at these temperatures, so in the interstellar medium (ISM), O VI traces regions in transition: hot gas cooling through temperatures of a few times $10^5$ K or interfaces between cool or warm gas ($T
= 10^2$–$10^4$ K) and hot gas ($T = 10^6$ K) where $10^5$ K gas can form [@Savage:95].
Absorption-line studies have begun to reveal the distribution of O VI-bearing gas in the Galaxy. (For an excellent review, see @Savage:06.) O VI absorption is detected in the spectra of UV-bright stars, QSOs, and AGNs [[e.g.]{}, @Wakker:03]. Measurements along sight lines through the Galactic halo indicate that the O VI-bearing gas is roughly co-spatial with the thick disk, having a scale height of about 2.3 kpc [@Savage:03]. Within the thick disk, the distribution of O VI is patchy and varies on small angular scales. Measurements towards stars in the disk indicate that the O VI-bearing gas is extremely clumpy and cannot exist in uniform clouds [@Bowen:06]. These observations are consistent with the O VI being formed in interfaces [@Savage:06].
Emission-line studies provide additional insight into the properties of transition-temperature gas. While absorption-line studies reveal the velocity distribution of O VI-bearing gas along a line of sight, they are limited to sight lines with bright background sources. Emission-line observations can probe (and eventually map) the entire sky, but with lower spectral and spatial resolution. Particularly useful are measurements of O VI absorption and emission along a single line of sight. The absorption is proportional to the density of the gas, while the emission is proportional to the square of the density. If the same gas is responsible for both absorption and emission, these measurements can be combined to derive the electron density in the plasma [@Shull:Slavin:94]. Assuming a gas temperature and oxygen abundance, one can derive the density and path length through the emitting gas.
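To make the scaling concrete, a back-of-envelope sketch of this combination of absorption and emission measurements (every numerical input below, namely the assumed column density, excitation rate coefficient, oxygen abundance, and ion fraction, is an illustrative value rather than a result of this survey; only the 3300 LU median intensity is taken from the survey itself):

```python
import math

I_lu  = 3300.0    # photons cm^-2 s^-1 sr^-1 (median line units from this survey)
N_ovi = 1.0e13    # cm^-2, assumed O VI absorption column for a local interface
q     = 1.3e-8    # cm^3 s^-1, assumed collisional excitation rate at T ~ 3e5 K
A_O   = 4.6e-4    # assumed O/H abundance
f_ovi = 0.22      # assumed peak O VI ion fraction
PC_CM = 3.086e18  # cm per parsec

# Uniform slab: I = n_e * N(O VI) * q / (4 pi)  =>  n_e = 4 pi I / (q N)
n_e = 4.0 * math.pi * I_lu / (q * N_ovi)

# Path length: L = N(O VI) / n_OVI, with n_OVI = n_H * A_O * f_OVI
# and n_H ~ n_e / 1.2 in ionized gas
n_H = n_e / 1.2
L_pc = N_ovi / (n_H * A_O * f_ovi) / PC_CM

print(f"n_e ~ {n_e:.2f} cm^-3, L ~ {L_pc:.2f} pc")
```

With these assumed inputs the sketch lands near the tenths-of-cm$^{-3}$ densities and sub-parsec path lengths quoted for the local interface; lowering the intensity-to-column ratio by an order of magnitude drives the solution toward the halo regime of lower density and much longer path length.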
Until recently, convincing detections of O VI emission from the diffuse interstellar medium were limited to fewer than a dozen sight lines probed with the [*Far Ultraviolet Spectroscopic Explorer*]{} ([*FUSE;*]{} Table \[tab\_published\]). Reported intensities range from $1.6 \times 10^3$ to $3.3 \times 10^3$ LU. (One photon cm$^{-2}$ s$^{-1}$ sr$^{-1}$, or line unit, corresponds to $1.9 \times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$ at 1032 Å.) @Korpela:06 present a deep far-ultraviolet emission spectrum from a region of $15\degr$ radius centered on the north ecliptic pole obtained with the [*Spectroscopy of Plasma Evolution from Astrophysical Radiation (SPEAR)*]{} instrument. They report a combined $\lambda\lambda 1032, 1038$ intensity of 5724 LU, slightly higher than the initial [[*FUSE*]{}]{} results. Using archival data from the first four years of the [[*FUSE*]{}]{} mission, @Otte:06 find measurable $\lambda 1032$ emission along 23 of 112 sight lines and conclude that their data are consistent with the picture derived from absorption surveys: high-latitude sight lines probe O VI-emitting gas in a clumpy, thick disk, while low-latitude sight lines sample mixing layers and interfaces in the thin disk of the Galaxy. Unfortunately, the small size and low signal-to-noise (S/N) ratio of their sample (only 11 of their features are 3[$\sigma$]{} detections) limit their ability to constrain the properties of the emitting gas.
To better constrain the physical properties of O VI-bearing gas in the ISM, we have conducted an extended [[*FUSE*]{}]{} survey of diffuse O VI emission in the interstellar medium, using all [[*FUSE*]{}]{} data obtained through 2004 December and the latest version of the CalFUSE calibration pipeline. The results are presented in this paper, which is organized as follows: In §\[SEC\_OBSERVATIONS\], we discuss the [[*FUSE*]{}]{} instrument and our selection of survey sight lines from the [[*FUSE*]{}]{} archive. In §\[SEC\_DATA\], we describe our data-reduction techniques, with emphasis on our modifications to the standard CalFUSE pipeline. §\[SEC\_MEASUREMENTS\] describes our method for identifying emission features and measuring their parameters. We discuss our results in §\[SEC\_RESULTS\]. In §\[SEC\_DISCUSSION\], we combine emission and absorption data to derive the properties of O VI-bearing gas in the Galactic disk and thick disk/halo. Results are summarized in §\[SEC\_SUMMARY\]. An appendix discusses the origin of the O VI emission seen toward sight lines P12011 and B12901. Unless otherwise noted, all wavelengths in this paper are heliocentric, and all velocities are quoted relative to the local standard of rest (LSR). We use the kinematical LSR, in which the standard solar motion is 20 [km s$^{-1}$]{} towards $\alpha$ = 18h, $\delta$ = $+30\degr$ (1900).
[lcccccc]{} Target & $l$ & $b$ & $I_{1032}$ ($10^3$ LU) & FWHM ([km s$^{-1}$]{}) & $v_{\rm LSR}$ ([km s$^{-1}$]{}) & Ref.\
A11701 & 57.6 & $+ 88.0 $ & $ 2.0\pm 0.6$ & $ 23 \pm 55 $ & $ -17 \pm 9 $ & 1\
S40548 & 95.4 & $+ 36.1 $ & $ 1.6\pm 0.3$ & $ 75 \pm 3 $ & $ -50 \pm30 $ & 2\
S40561 & 99.3 & $+ 43.3 $ &$\leq 1.6 $ & & & 2\
P11003 & 113.0 & $+ 70.7 $ & $ 2.6\pm 0.4$ & $ 75 $ & $ 10 $ & 3\
B00303 & 156.3 & $+ 57.8 $ & $ 3.3\pm 1.1$ & $210 $ & $ -51 \pm30 $ & 4\
B00302 & 162.7 & $+ 57.0 $ & $ 2.5\pm 0.7$ & $150 $ &$ -16 \pm22 $ & 4\
B12901 & 278.6 & $ -45.3 $ &$< 0.5 $ & & & 5\
A11703 & 284.0 & $+ 74.5 $ & $ 2.9\pm 0.7$ & $ < 80 $ & $ 84 \pm15 $ & 1\
I20509 & 315.0 & $ -41.3 $ & $ 2.9\pm 0.3$ & $160 $ & $ 64 $ & 6\
Observations\[sec\_observations\]
=================================
The [[*FUSE*]{}]{} instrument consists of four independent optical paths. Two employ LiF optical coatings and are sensitive to wavelengths from 990 to 1187 Å, and two use SiC coatings, which provide reflectivity to the Lyman limit. The four channels overlap between 990 and 1070 Å. Each channel possesses three apertures that simultaneously sample different parts of the sky. The low-resolution (LWRS) aperture is $30\arcsec\times30\arcsec$ in size. The medium-resolution (MDRS) aperture spans $4\arcsec\times20\arcsec$ and lies about $3\farcm5$ from the LWRS aperture. The high-resolution (HIRS) aperture lies midway between the MDRS and LWRS apertures and samples an area of $1\farcs25\times20\arcsec$. A fourth location, the reference point (RFPT), is offset from the HIRS aperture by 60$\arcsec$. When a star is placed at the reference point, all three apertures sample the background sky. For a complete description of [[*FUSE*]{}]{}, see @Moos:00 and @Sahnow:00.
The data sets used in our survey fall into three categories. First are the S405, S505, and Z907 programs. S405 and S505 represent background observations, some near [[*FUSE*]{}]{} targets (with the target at the RFPT), others with the LWRS aperture centered on the orbit pole. The Z907 program consists of extragalactic targets intended as background sources for absorption-line studies. When these targets are sufficiently faint, we include them in our sample. Second, we searched the [[*FUSE*]{}]{} archive for MDRS and HIRS observations of point sources obtained in time-tag mode, as their LWRS apertures should sample only background radiation. Third, we include all [[*FUSE*]{}]{} sight lines with previously-published detections of diffuse O VI emission (Table \[tab\_published\]).
We exclude from our sample all sight lines probing known supernova remnants or planetary nebulae, since these structures do not represent the diffuse interstellar medium. Sight lines that probe the Magellanic Clouds (Sankrit et al., in preparation), the Coalsack Nebula [@Andersson:04], and the emission nebula around KPD 0005+5106 [@Otte:04] are presented elsewhere and are not included here. This sample is a superset of that presented by @Otte:06, which includes only data obtained through 2003 July.
We use only data from the LiF 1A channel in our analysis. Because its sensitivity at 1032 Å is more than twice that of any other channel, including data from other channels would reduce the signal-to-noise ratio of the resulting spectrum. We use only data obtained in time-tag mode, which preserves arrival time and pulse-height information for each photon event. This is the default observing mode for faint targets. Previous observers [[e.g.]{}, @Shelton:01; @Shelton:02; @Otte:03] have detected faint (presumably geocoronal) emission on the blue wing of the $\lambda 1032$ line in [[*FUSE*]{}]{} spectra obtained during orbital day, so we use only data obtained during orbital night. Individual exposures with less than 10 s of night exposure time are excluded from the survey, as are combined data sets with less than 1500 s of total night exposure time or with significant continuum flux. Finally, we use only data obtained from launch (1999 June) through 2004 December.
Table \[tab\_data\] lists the 183 sight lines in our survey and the [[*FUSE*]{}]{} observations contributing to each. Each observation consists of many exposures; our sample contains 2925 exposures from 375 observations. Note that we combine data from multiple observations — and sometimes multiple science programs — that sample the same or nearby lines of sight. The orientation of the [[*FUSE*]{}]{} spacecraft is specified by four quantities: the right ascension and declination of the target, the aperture in which the target is centered, and the astronomical roll angle (east of north) of the spacecraft about the target (and thus the center of the target aperture). The roll angle is constrained by operational requirements and varies throughout the year. Because we combine data sets without regard to the original target aperture (HIRS, MDRS, LWRS, or RFPT) or roll angle, a sight line listed in Table \[tab\_data\] may represent data obtained from regions of the sky separated by up to 7 arcmin (twice the distance between the LWRS and MDRS apertures).
The distribution of survey sight lines on the sky is presented in [Fig. \[fig\_map\]]{}. The survey sample of @Otte:06 is concentrated in two quadrants, the northern sky with $0\degr < l < 180\degr$ and the southern sky with $180\degr < l < 360\degr$, due to observational constraints that reduced target availability in the orbit plane, which stretches across the Galactic center. Improvements in pointing control implemented later in the mission have partially filled in the other two quadrants, but the distribution remains skewed. Note especially the high concentration of targets in the region with $90\degr < l < 180\degr$ and $b > 30\degr$. This 1/16 of the sky contains more than twice as many targets as any other region of equal area.
Data Reduction\[sec\_data\]
===========================
The data are reduced using an implementation of CalFUSE v3.1, the latest version of the [[*FUSE*]{}]{} data-reduction software package (Dixon et al., in preparation), optimized for a faint, diffuse source. Specifically, the pipeline is instructed to reject data obtained during orbital day. The first three modules of the pipeline (cf\_ttag\_init, cf\_convert\_to\_farf, and cf\_screen\_photons) are run as usual, then the file-header keywords SRC\_TYPE and APERTURE are set to EE and LWRS (to indicate an extended, emission-line source in the low-resolution aperture), respectively. Prematurely modifying these keywords confuses the screening routines, which can misinterpret a star drifting into the HIRS or MDRS aperture as a detector burst. The rest of the pipeline is run as usual, but with background subtraction turned off. Note that CalFUSE does not perform jitter correction, astigmatism correction, or optimal extraction on extended sources.
Apart from the above exceptions, we accept all of the default parameters defined by the pipeline. In particular, we accept all photon events with pulse heights in the range 2–25. To minimize the detector background, previous observers have imposed tighter pulse-height constraints, but we find that, because the mean pulse height of real photon events varies with time, reducing the range of allowed pulse heights can result in the rejection of real photon events for some time periods. We also accept the default spectral binning of 0.013 Å, or approximately two detector pixels. The pipeline operates on one exposure at a time, producing a flux- and wavelength-calibrated spectrum from each. Error bars are computed assuming Gaussian ($\sqrt{N}$) statistics.
The [[*FUSE*]{}]{} wavelength scale is nominally heliocentric, but nonlinearities in early versions of the CalFUSE wavelength solution forced previous observers to derive their wavelength scales from measurements of nearby airglow lines. The wavelength solution employed by CalFUSE v3.1 is far more accurate, but uncertainties in its zero point must be corrected by hand. We fit a synthetic emission feature (described below) to the Lyman $\beta$ airglow line of each extracted spectrum and compute the shift in pixels necessary to place the line at zero velocity in a geocentric reference frame. (Actually, we shift the Lyman $\beta$ line to $v_{\rm geo} = +0.5$ [km s$^{-1}$]{}, which places the $\lambda 1027$ line at rest. The fainter line is presumed to be less subject to detector effects that might skew its measured centroid.) Zero-point wavelength shifts are not random, but exhibit a periodicity on timescales of several hours. We take advantage of this fact for spectra with weak Lyman $\beta$ features: if the line contains fewer than 200 raw counts, we do not attempt a fit, but interpolate a shift from the values computed for the other exposures. The spectra are combined using the program cf\_combine, which shifts and sums the individual extracted spectral files.
Measurements\[sec\_measurements\]
=================================
Emission Line Profile
---------------------
@Shelton:01 adopt the observed profile of the $\lambda 1039$ airglow feature as the shape of an aperture-filling diffuse emission feature. We prefer to use a synthetic line profile, so we fit this feature with a model emission line consisting of a top-hat function convolved with a Gaussian. The widths of both components are free parameters in the fit, as are the line intensity and centroid. Fits to 131 spectra with at least 500 counts in the line yield a best-fit width of $106.1 \pm 3.4$ [km s$^{-1}$]{} for the top hat and a FWHM = $25.4 \pm 5.4$ [km s$^{-1}$]{} for the Gaussian (where the error bars represent the standard deviation about the mean). Fits to the $\lambda 1027$ airglow line yield similar results. The top hat represents the projection of the LWRS aperture onto the detector, while the Gaussian represents the finite resolution of the instrument. For point sources, the [[*FUSE*]{}]{} resolution is between 15 and 20 [km s$^{-1}$]{}, so a value of 25 [km s$^{-1}$]{} for a diffuse source is reasonable. Note that 106 [km s$^{-1}$]{} corresponds to 28 spectral pixels.
Several previous observers have followed @DixonOVI:01 in adopting an unconvolved top hat for the shape of a diffuse emission feature, convolving it with a Gaussian to match the observed line profile, and reporting the FWHM of the Gaussian as the intrinsic width of the emission feature. By ignoring the instrumental contribution to the smoothing of the top-hat function, this technique over-estimates the intrinsic width of the emission profile. Fortunately, the error is small: 10% for a best-fit FWHM of 50 [km s$^{-1}$]{}, 1% for 150 [km s$^{-1}$]{}.
To construct our model line profiles, we convolve the 106 [km s$^{-1}$]{} top-hat function with a single Gaussian, representing the combined effects of the intrinsic emission profile and the instrumental line-spread function. To optimize the model resolution, we employ a grid of 0.013 Å pixels and, rather than binning the model to match the data, smooth it by convolving with a second top-hat function either 8 or 14 pixels wide. Each curve is normalized so that the best-fit scale factor reported by our fitting routine equals the line intensity in LU. We generate a series of curves with Gaussian FWHM values from 1 to 1000 [km s$^{-1}$]{}.
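The profile construction described above can be sketched as follows. This is a simplified stand-in for our actual grid of models: the function name is ours, and the velocity grid of $\sim$3.8 [km s$^{-1}$]{} per pixel (0.013 Å at 1032 Å) follows the binning quoted in the text.

```python
import math

def line_profile(dv=3.8, half_range=400.0, tophat_width=106.0, fwhm=25.4):
    """Top hat (LWRS aperture) convolved with a Gaussian (intrinsic
    width plus instrumental line-spread function), normalized to unit
    area. Velocities in km/s; dv is the pixel size."""
    n = int(half_range / dv)
    grid = [i * dv for i in range(-n, n + 1)]
    sigma = fwhm / 2.3548                      # FWHM -> Gaussian sigma
    prof = []
    for v in grid:
        # Direct convolution: sum the Gaussian over the top-hat pixels.
        s = sum(math.exp(-0.5 * ((v - u) / sigma) ** 2)
                for u in grid if abs(u) <= tophat_width / 2.0)
        prof.append(s)
    area = sum(prof) * dv
    return grid, [p / area for p in prof]

grid, prof = line_profile()
```

In our actual fits the scale factor multiplying such a normalized curve is the line intensity in LU, and the Gaussian FWHM is drawn from the tabulated grid of 1–1000 [km s$^{-1}$]{}.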
Detection of O VI Emission
--------------------------
Following @Martin:Bowyer:90, we use an automated routine to search each composite spectrum for a statistically-significant emission feature near 1032 Å. The calculation is performed using the WEIGHTS array, which is effectively raw counts for low count-rate data. (Dixon et al., in preparation, discuss the format of [[*FUSE*]{}]{} calibrated spectral files.) The mean value of the WEIGHTS array in the regions 1029–1030 and 1033.5–1036 Å is adopted as the local continuum. At each pixel between 1030 and 1034 Å, we bin the data by the width of an emission feature and determine the counts in excess of the mean. The significance of this excess is computed assuming Gaussian statistics. We repeat this process for bin widths from 25 to 30 pixels, or $\sim$ 94 to 113 [km s$^{-1}$]{}. From all combinations of central wavelength and line width, we select the most significant feature. If its significance is greater than $3 \sigma$, we record it as a detection. Table \[tab\_detections\] lists our detection sight lines, Table \[tab\_limits\] our non-detection sight lines. Our algorithm is designed to detect emission features that are $\sim$ 106 [km s$^{-1}$]{} in width; given our S/N, much narrower features are likely to be noise spikes, while much broader features are difficult to distinguish from the background.
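The search just described can be sketched as follows. This is an illustration, not the production code: `most_significant_feature` and the flat-continuum test input are our own simplifications of the procedure in the text.

```python
import math

def most_significant_feature(counts, continuum_per_pixel, widths=range(25, 31)):
    """Slide bins of 25-30 pixels across the spectrum and return the
    most significant excess over the continuum, assuming Gaussian
    (sqrt(N)) statistics: (significance, start pixel, bin width)."""
    best_sig, best_start, best_width = 0.0, None, None
    for w in widths:
        expected = continuum_per_pixel * w
        sigma = math.sqrt(expected) if expected > 0 else 1.0
        for i in range(len(counts) - w + 1):
            excess = sum(counts[i:i + w]) - expected
            if excess / sigma > best_sig:
                best_sig, best_start, best_width = excess / sigma, i, w
    return best_sig, best_start, best_width

# Synthetic example: a flat continuum of 1 count/pixel with a
# 28-pixel-wide bump of +2 counts/pixel starting at pixel 100.
counts = [1.0] * 300
for i in range(100, 128):
    counts[i] += 2.0
sig, start, width = most_significant_feature(counts, 1.0)
```

A feature is recorded as a detection when the returned significance exceeds $3\sigma$.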
Further analysis is performed on the flux-calibrated spectra, which are first binned to improve their signal-to-noise ratio. If the local continuum level (computed above) is greater than 1.5 raw counts per 0.013 Å pixel, we bin the spectrum by 8 pixels or 0.104 Å. A diffuse emission feature is about 3.5 binned pixels wide. If the continuum level is less than 1.5, we bin by 14 pixels or 0.182 Å, a value chosen so that a diffuse emission feature is spanned by 2 binned pixels. Most of our emission features are broad enough that 14-pixel binning provides sufficient spectral resolution; however, we find that emission features whose Gaussian components have FWHM values less than 100 [km s$^{-1}$]{} can be undersampled at this resolution, so we lower the threshold for 8-pixel binning to 0.5 raw counts per 0.013 Å pixel for spectra containing these narrow features. The binning applied to each of our spectra is listed in Table \[tab\_detections\]. After binning, the FLUX and ERROR arrays are converted from units of [erg cm$^{-2}$ s$^{-1}$ Å$^{-1}$]{} to LU pixel$^{-1}$, and the ERROR array is smoothed by 5 pixels, which removes zero-valued error bars without significantly changing its shape.
To derive the line parameters of each emission feature, we fit model spectra to the flux-calibrated data using the nonlinear curve-fitting program [[SPECFIT]{}]{} [@Kriss:94], which runs in the IRAF environment, to perform a $\chi^2$ minimization of the model parameters. Our synthetic line profiles are described above. The program interpolates between tabulated curves to reproduce the shape of the emission line. Free parameters in the fit are the level and slope of the continuum (assumed linear) and the intensity, wavelength, and Gaussian FWHM of the model line. For most sight lines, we model only the region 1028.7–1036.5 Å. We do not attempt to fit the 1038 Å feature, as it is only half as strong as the 1032 Å line in an optically thin gas and is often blended with emission from interstellar C II$^*$ $\lambda 1037.02$.
Best-fit values for the $\lambda 1032$ line parameters are reported in Table \[tab\_detections\], and plots of the spectra and best-fit models are presented in Figures \[fig\_3sigma\] and \[fig\_2sigma\]. In the table and figures, as throughout this paper, we distinguish between 3[$\sigma$]{} detections, for which $I/\sigma(I) \geq 3$, and 2[$\sigma$]{} detections, for which $I/\sigma(I) < 3$. All of these features meet our requirement of having an intensity greater than three times the uncertainty in the local continuum and are thus statistically significant; however, to derive the physical properties of the O VI-bearing gas, we consider only those features whose intensities are certain at the 3[$\sigma$]{} level.
The FWHM values in Table \[tab\_detections\] include the smoothing imparted by the instrument optics; values less than $\sim$ 25 [km s$^{-1}$]{} indicate that the emission does not fill the LWRS aperture. In such cases, the surface brightness of the emitting region will be underestimated, because the conversion to LU assumes that the emission fills the aperture, and the velocity of the emitting gas will be uncertain, because it may not be centered in the aperture.
The quoted uncertainties are the error bars returned by [[SPECFIT]{}]{}, which are obtained from the error matrix and correspond to a 1[$\sigma$]{} confidence interval for a single interesting parameter. For multi-parameter fits with more than one interesting parameter, this method can underestimate the true uncertainty in each, so we have computed more rigorous error bars for sight line I20509 (for which the S/N is high and the $\lambda 1032$ line is broad) in the following way: For each model parameter, we begin with the best-fit value, then increase it, while re-optimizing the other model parameters, until $\chi^2$ increases by 1.0 [@Press:88]. We find that the two methods yield identical error bars, suggesting that the [[SPECFIT]{}]{} errors are sufficient for well-sampled spectra.
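The $\Delta\chi^2 = 1$ procedure can be illustrated with a one-parameter toy fit, for which no re-optimization of other parameters is needed. All names and data here are hypothetical; the real fits use the full [[SPECFIT]{}]{} model.

```python
def chi2(a, data, model, err):
    """Chi-squared for a single scale factor `a` applied to a fixed model."""
    return sum(((d - a * m) / e) ** 2 for d, m, e in zip(data, model, err))

def upper_error(a_best, data, model, err, step=1e-4):
    """Step `a` upward from its best-fit value until chi^2 rises by 1;
    the distance traveled is the 1-sigma upper error bar."""
    chi2_min = chi2(a_best, data, model, err)
    a = a_best
    while chi2(a, data, model, err) - chi2_min < 1.0:
        a += step
    return a - a_best

# For data = 2 * model with unit errors, chi^2 = 14 * (a - 2)**2, so the
# 1-sigma error should be 1/sqrt(14) ~ 0.267.
err_plus = upper_error(2.0, [2.0, 4.0, 6.0], [1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
```

With more than one free parameter, each step would also require re-minimizing $\chi^2$ over the remaining parameters, as described above.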
Both the [[*FUSE*]{}]{} flux calibration, which is based on theoretical models of white-dwarf stellar atmospheres, and the solid angle of the LWRS aperture are known to within about 10% [@Sahnow:00]. Added in quadrature, they contribute a systematic uncertainty of $\sim$ 14% [@Shelton:01] in addition to the statistical uncertainties quoted in Table \[tab\_detections\].
Our best-fit line intensities are lower limits, in the sense that the intrinsic intensity may be higher than is observed. We discuss the effects of dust extinction in §\[sec\_extinction\]. Resonance scattering within the O VI-bearing gas can also have a strong effect on the observed intensity. For an optically thin gas, the intensity ratio $I_{1032} / I_{1038} = 2$, whereas an optically thick plasma yields a ratio of unity. Since the $\lambda 1038$ line is generally difficult to measure (as mentioned above) and its intensity has a large uncertainty, the resulting line ratio is usually inconclusive. We therefore do not attempt to estimate the self-absorption along our detection sight lines.
Because molecular hydrogen is ubiquitous in the ISM, we must consider the effects of [H$_2$]{} absorption and emission on our results. The [H$_2$]{} features nearest $\lambda 1032$ are the Lyman (6,0) $P(3)$ and $R(4)$ lines at 1031.19 and 1032.36 Å, respectively, but these features are weak in cold ($\sim$ 100 K), diffuse clouds [@Shull:00] and will not significantly reduce the observed intensity of the $\lambda 1032$ feature. Another possibility is that fluorescent [H$_2$]{} contributes to the observed emission. Following @Shelton:01, we search for the Werner (0,1) $P(3)$ $\lambda 1058.82$ line, which for cool clouds bathed in ultraviolet light should be at least 50% brighter than any of the [H$_2$]{} emission lines between 1030 and 1040 Å [@Sternberg:89], but none of our 3[$\sigma$]{} sight lines exhibits a statistically significant emission feature near 1059 Å.
Upper Limits on O VI Emission
-----------------------------
For non-detection sight lines, we compute 3[$\sigma$]{} upper limits to the intensity of an O VI emission feature using the mean value of the FLUX array between 1030 and 1035 Å and assuming a line width of 28 pixels (106 [km s$^{-1}$]{}). The resulting limits are given in Table \[tab\_limits\]. To obtain more data points with long integration times, we repeat the exercise using our detection sight lines, but masking out the O VI line when computing the continuum level. Figure \[fig\_limits\] presents both sets of limits as a function of exposure time. [[*FUSE*]{}]{} can detect diffuse O VI emission as faint as 2000 LU in 18 ks of night exposure time and as faint as 1000 LU in $\sim$ 80 ks of night exposure time.
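These two sensitivities are roughly consistent with the $1/\sqrt{t}$ scaling expected for a background-limited 3[$\sigma$]{} limit; a quick check (our own consistency test, not a fit to the survey data):

```python
import math

def scaled_limit(limit_ref, t_ref, t):
    """Background-limited upper limit scaled from a reference exposure:
    limit ~ 1/sqrt(t)."""
    return limit_ref * math.sqrt(t_ref / t)

# Scale the 2000 LU limit at 18 ks to 80 ks:
limit_80ks = scaled_limit(2000.0, 18.0, 80.0)   # ~950 LU
```

The predicted $\sim$950 LU at 80 ks agrees well with the quoted 1000 LU.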
Comparison with Previously-Published Sight Lines\[sec\_published\]
------------------------------------------------------------------
It is instructive to compare our results with the previously-published values listed in Table \[tab\_published\]. Four of the published sight lines, A11701, B00303, B00302, and P11003, fail one or more of our statistical filters. A11701 yields only an upper limit on the intensity, while the other three produce 2[$\sigma$]{} detections. In three of these four cases, the original analysis used both day and night data, which explains our lower S/N ratios. Similarly, CalFUSE rejects most of the night exposure time for sight line S40561 due to limb-angle violations, raising our upper limit on the intensity. For sight line S40548, we obtain the published line intensity if we fix the Gaussian line width to be 75 [km s$^{-1}$]{}, but find the best-fit intensity and Gaussian FWHM to be a factor of two larger. Sight line B12901 is discussed in § \[sec\_B12901\]. The remaining sight lines, A11703 and I20509, yield 3[$\sigma$]{} detections in our survey. In both cases, our intensities and derived FWHM values agree (within the errors) with the previously-published values. In neither case do our LSR velocities agree, illustrating the difficulty of deriving an accurate wavelength scale from nearby airglow features, as previous authors were required to do.
The results of @Otte:06 are not included in Table \[tab\_published\]. Comparison with their results is complicated by the fact that several of their sight lines appear in our survey with additional exposure time or combined with nearby lines of sight. Even so, when they report a 3[$\sigma$]{} detection, their best-fit line parameters generally agree with ours within the quoted errors. The recent [[*SPEAR*]{}]{} results are discussed in §\[sec\_emission\].
Results\[sec\_results\]
=======================
Measured $\lambda 1032$ intensities and upper limits for our survey sight lines are presented in [Fig. \[fig\_limits\]]{}. O VI emission is detected at 3[$\sigma$]{} significance along 29 lines of sight. Measured intensities range from 1800 to 9100 LU, with a median of 3300 LU, an average of 3900 LU, and a standard deviation of 1900 LU. An additional 44 sight lines exhibit $\lambda 1032$ emission at lower significance, while 110 non-detection sight lines provide 3[$\sigma$]{} upper limits on the intensity, 35 of them less than 2000 LU. The median value of all the 3[$\sigma$]{} limits is 2600 LU. The upper limits are strongly correlated with exposure time and generally lower than the measured intensities along detection sight lines with comparable exposure time. For our complete sample, the detection rate (at 3[$\sigma$]{} significance) is 16%. If we consider only sight lines with exposure times greater than 18 ks (for which 3[$\sigma$]{} upper limits are less than 2000 LU), the detection rate rises to 30%.
Dust Extinction\[sec\_extinction\]
----------------------------------
UV emission is strongly attenuated by interstellar extinction. A color excess $E(B-V)$ of 0.05 magnitudes reduces the flux at 1032 Å by a factor of 2, and a color excess of 0.18 magnitudes by a factor of 10 [@Fitzpatrick:99]. Thus, when we detect O VI emission in a direction with high extinction, we assume that the emission arises in the local ISM, [i.e.]{}, closer than the dust causing most of the extinction. On the other hand, emission detected in directions of low extinction does not necessarily arise in the distant ISM, but may come from relatively nearby gas. Figure \[fig\_ebv\_sinb\] presents a plot of color excess versus $\sin |b|$, where $b$ is the Galactic latitude, for each sight line in our survey. We see that extinction falls rapidly as one moves away from the Galactic plane. We caution that interstellar extinction is variable on small spatial scales, and the @Schlegel:98 values presented in [Fig. \[fig\_ebv\_sinb\]]{} are based on [[*IRAS*]{}]{} maps with low spatial resolution. An example of an O VI detection through a region of patchy extinction is discussed in § \[sec\_B12901\].
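The two quoted attenuation factors imply an extinction law that is roughly linear in color excess, $A(1032) \approx k\,E(B-V)$; the sketch below calibrates the constant $k$ to the factor-of-2 anchor point. Note that $k$ is our own fitted constant for illustration, not a published extinction coefficient.

```python
import math

# Calibrate: a factor of 2 at E(B-V) = 0.05 mag means
# A(1032) = 2.5 * log10(2) ~ 0.75 mag, so k ~ 15 mag per mag of E(B-V).
k = 2.5 * math.log10(2.0) / 0.05

def attenuation_factor(ebv):
    """Factor by which the intrinsic 1032 Å intensity is reduced."""
    return 10 ** (0.4 * k * ebv)

# attenuation_factor(0.05) -> 2.0 by construction.
# attenuation_factor(0.18) -> ~12, somewhat above the quoted factor of
# 10, reflecting curvature in the true Fitzpatrick (1999) law.
```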
O VI Emission-Line Intensities\[sec\_emission\]
-----------------------------------------------
In [Fig. \[fig\_intensity\_sinb\]]{}, we plot intensity against $\sin |b|$, where $b$ is the Galactic latitude of the observed sight line. The emission features plotted as open circles have absolute velocities greater than 120 [km s$^{-1}$]{}. We discuss these sight lines and their possible relationship with high-velocity clouds (HVCs) in § \[sec\_velocity\]. Of the 23 low-velocity sight lines, two have intensities greater than 8000 LU, while the others range from 1800 to 5500 LU. Both high-intensity sight lines probe regions known to be energized by hot stars and supernova remnants. P12011 (discussed in § \[sec\_Jupiter\]) intersects the Monogem Ring, which is about 800 pc away, and S50508 probes the outskirts of the Vela supernova remnant (250 pc distant) and the Gum Nebula, which probably lies somewhat beyond Vela.
Based on observations with [[*SPEAR*]{}]{}, @Korpela:06 report a combined $\lambda \lambda 1032, 1038$ intensity of $5724 \pm 570$ (statistical) $\pm\, 1100$ (systematic) LU for the region of $15\degr$ radius centered on the north ecliptic pole ($l = 123\degr,\; b = +29\degr$). The grey cross in [Fig. \[fig\_intensity\_sinb\]]{} represents the [[*SPEAR*]{}]{} sight line. Its intensity is consistent with the [[*FUSE*]{}]{} data points in this latitude range. Two of our 3[$\sigma$]{} detections and one of our strong upper limits lie within the region sampled by [[*SPEAR*]{}]{}: P10429 ($I(1032) = 2900 \pm 800$ LU), Z90715 ($I(1032) = 4400 \pm 1400$ LU), and D11701 ($I(1032) < 1400$ LU).
Restricting our attention to the low-velocity, low-intensity measurements plotted in [Fig. \[fig\_intensity\_sinb\]]{}, we note that the sight lines at high latitude tend to be fainter than average. The median intensity for this sample of 21 detections is 3500 LU, with a 95% confidence interval of (2700, 4500) LU. The median intensity for sight lines with $\sin |b| < 0.7$ is 4100 LU, with confidence interval (3100, 4400) LU. For sight lines with $\sin |b| > 0.7$, these values are 2200 LU and (1800, 3700) LU. The median estimates for the two groups are quite different, but the 95% confidence intervals overlap. The overlap is driven by a single high-latitude point (A11703, at 3700 LU), which determines the upper limit of the confidence interval for its group. Although the equality of medians cannot be statistically excluded (at the 95% level), the data suggest that O VI emission tends to be fainter at high than at low latitudes.
A variation in the intensity with latitude might result from either of two competing effects: First, interstellar extinction falls steeply as one moves off the Galactic plane ([Fig. \[fig\_ebv\_sinb\]]{}), so attenuation is lower at high latitudes. Second, the path length through the Galactic disk scales inversely with $\sin |b|$, so high-latitude sight lines may intersect fewer O VI-bearing regions. We have calculated the relative importance of these effects for a uniform distribution of O VI-bearing clouds in the disk, assuming that the extinction is local. At low latitudes ($\sin |b| < 0.1$), attenuation dominates completely, and the observed emission must come from nearby regions. Intensities rise gradually for $0.1 < \sin |b| < 0.4$, as the extinction decreases faster than the path length. At high latitudes, the two effects nearly cancel, and intensities are roughly constant for $\sin |b| > 0.4$. Models assuming an exponential distribution of the O VI-bearing clouds yield similar results. Thus, the low intensities seen along sight lines with $\sin |b| > 0.7$ in [Fig. \[fig\_intensity\_sinb\]]{} cannot be explained only by differences in path length. We suggest that the O VI-emitting regions at high latitudes are intrinsically fainter than those at low latitudes. The faint regions likely constitute a population in the thick disk or halo (perhaps with properties similar to those of the HVCs located in the same region of [Fig. \[fig\_intensity\_sinb\]]{}), while the brighter regions lie in the disk of the Galaxy. It is possible that the O VI detected along mid-latitude sight lines includes both disk and thick-disk emission, but for the lowest-latitude sight lines, the emitting regions necessarily lie in the disk.
O VI Emission-Line Velocities\[sec\_velocity\]
----------------------------------------------
The six 3[$\sigma$]{} detections plotted as open circles in [Fig. \[fig\_intensity\_sinb\]]{} have absolute velocities greater than 120 [km s$^{-1}$]{}, corresponding to the velocities of HVCs. O VI absorption associated with high-velocity clouds was first detected by @Sembach:00 and @Murphy:00. @Savage:03 report that, when a known HVC is present along a line of sight to an object, high-velocity O VI absorption spanning the approximate velocity range of the HVC is usually seen. The association of O VI with HVCs suggests that the O VI may be produced at interfaces between the clouds and hot, low-density gas in the Galactic corona or Local Group.
Two of our high-velocity sight lines, B12901 and S40549, intersect known HVCs and share the clouds’ velocities. B12901 intersects the Magellanic Stream, while S40549 probes the HVC known as Complex C (see Fig. 16 of @Wakker:03 and Figs. 11 and 13 of @Sembach:03). These sight lines are discussed in §\[sec\_hvc\]. Sight line S40543, which is near S40549 on the sky, also intersects Complex C. The O VI emission in S40543 has a high positive velocity, while Complex C has a high negative velocity; however, high-positive-velocity O VI absorption is seen nearby [@Sembach:03]. Sight lines C06401 and S30402 do not intersect known HVCs. P10414 and S30402 probe low-latitude sight lines with high extinction, so their emission probably originates in the nearby disk (but see the discussion of patchy extinction in § \[sec\_B12901\]). Perhaps they sample fast-moving gas in previously undetected supernova remnants. (Note that the [H$\alpha$]{} intensity along S30402 is unusually high.)
In [Fig. \[fig\_velocity\]]{}, the LSR velocities of our low-velocity 3[$\sigma$]{} O VI emission features are plotted against $\sin |b|$. The grey bars represent the range of LSR velocities predicted for each sight line by a simple model of Galactic rotation. Assuming a differentially-rotating halo with a constant velocity of $v = 220$ [km s$^{-1}$]{}, we compute the expected radial velocity as a function of distance for the first 5 kpc along each line of sight. The model ignores broadening due to turbulence or any other motion. The three sight lines with $\sin |b| > 0.5$ and $v_{\rm LSR} < -40$ [km s$^{-1}$]{} may probe intermediate-velocity clouds. The velocities of the remaining sight lines are generally consistent with Galactic rotation.
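The grey-bar ranges can be approximated with a few lines of code. This sketch assumes a flat rotation curve and a solar galactocentric radius of $R_0 = 8.5$ kpc; the text above specifies only the 220 km/s rotation speed, and the example sight line is arbitrary.

```python
import numpy as np

# Expected LSR velocity from pure Galactic rotation along a sight line,
# assuming a flat rotation curve V(R) = 220 km/s co-rotating with the
# disk, and R0 = 8.5 kpc (R0 is our assumption, not from the text).
V0, R0 = 220.0, 8.5  # km/s, kpc

def v_lsr(d_kpc, l_deg, b_deg):
    """Radial LSR velocity at distance d along Galactic direction (l, b)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    d_proj = d_kpc * np.cos(b)  # distance projected onto the plane
    R = np.sqrt(R0**2 + d_proj**2 - 2.0 * R0 * d_proj * np.cos(l))
    return (R0 / R - 1.0) * V0 * np.sin(l) * np.cos(b)

# Velocity range over the first 5 kpc of an example sight line:
d = np.linspace(0.01, 5.0, 100)
v = v_lsr(d, l_deg=45.0, b_deg=10.0)
v_min, v_max = v.min(), v.max()
```

For this example direction the predicted velocities span roughly 0 to 60 km/s, illustrating how each sight line maps to a finite velocity interval.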
Emission-Line Widths\[sec\_FWHM\]
----------------------------------
Seven of our 3[$\sigma$]{} sight lines have best-fit Gaussian FWHM values less than 25 [km s$^{-1}$]{}, which indicates that their O VI-emitting regions do not fill the $30\arcsec \times 30\arcsec$ LWRS aperture. Of the seven, C07601 and S50504 include data from lines of sight separated by an arc minute or more. The other five each sample a single line of sight. The data set with the longest exposure time, and thus the highest S/N, is S50510. We fit its spectrum with a model similar to that used to parameterize the $\lambda 1039$ airglow line: a top-hat function convolved with a Gaussian. The width of the top hat, together with the line intensity and centroid, are free parameters in the fit, but the Gaussian FWHM is fixed at 15 [km s$^{-1}$]{}, the instrumental resolution for a point source. (If allowed to vary freely, the Gaussian FWHM falls below 1 [km s$^{-1}$]{}.) For numerical simplicity, we fit the raw-counts spectrum, first binning the data by four pixels, which raises the background to $\sim$ 10 counts per binned pixel. The width of the best-fit top-hat function is $80 \pm 15$ [km s$^{-1}$]{}, significantly less than the 106 [km s$^{-1}$]{} width of an aperture-filling emission feature. By simple scaling, we derive an angular size of about $23\arcsec$ for the emitting region. Its spatial scale is distance dependent; at a distance of 1 kpc, $23\arcsec$ corresponds to about 0.1 pc.
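The two scaling steps (top-hat width to angular size, then angular size to physical scale at the fiducial 1 kpc distance) amount to simple arithmetic, checked below:

```python
import math

APERTURE_ARCSEC = 30.0   # LWRS aperture width
FULL_WIDTH_KMS = 106.0   # velocity width of an aperture-filling feature
TOPHAT_KMS = 80.0        # best-fit top-hat width

# Angular size scales linearly with the top-hat width.
theta_arcsec = APERTURE_ARCSEC * TOPHAT_KMS / FULL_WIDTH_KMS  # ~23 arcsec

# Physical size at an assumed distance of 1 kpc.
ARCSEC_RAD = math.pi / (180.0 * 3600.0)  # arcsec -> radians
KPC_CM, PC_CM = 3.086e21, 3.086e18
size_pc = theta_arcsec * ARCSEC_RAD * KPC_CM / PC_CM  # ~0.11 pc
```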
Comparison with H$\alpha$ and Soft X-ray Emission\[sec\_sxr\]
-------------------------------------------------------------
Perusal of the H$\alpha$ map produced by the Wisconsin H-Alpha Mapper (WHAM) Northern Sky Survey [@Haffner:03] reveals that our detection sight lines probe a variety of environments: toward H II regions, filaments, and bubbles, as well as toward faint, featureless, ionized gas. For each of our sight lines, Tables \[tab\_detections\] and \[tab\_limits\] list the H$\alpha$ intensities (integrated over one degree on the sky and over the velocity range $-80$ [km s$^{-1}$]{} $< v <$ +80 [km s$^{-1}$]{}) measured by WHAM. We find no correlation between the O VI and H$\alpha$ intensities.
One possible origin of the observed O VI emission is hot gas cooling from temperatures of $10^6$ K or more. Gas at these temperatures is observable in the soft X-ray (SXR) regime. Figure \[fig\_sxrmap\] presents the distribution of our 3[$\sigma$]{} detections and strong upper limits on a map of the 1/4 keV X-ray sky observed with [[*ROSAT*]{}]{} [@Snowden:97]. Prominent features include the North Polar Spur, which arches from $(l,b) = (30, +30)$ to $(290, +60)$ and may be probed by sight line A11703 (284, +75); the Vela supernova remnant at $(260, -5$), whose outer regions may be probed by sight line S50508 $(257, -4)$; and the Monogem Ring, a supernova remnant centered near $(205, +15)$, which may be probed by sight line P12011 $(194, +13)$. Tables \[tab\_detections\] and \[tab\_limits\] list the [[*ROSAT*]{}]{} 1/4 keV SXR emission observed toward each of our sight lines. We find no correlation between the O VI and SXR intensities. In their absorption-line survey, @Savage:03 find no significant correlation between $N$(O VI) and either $I({\rm H}_{\alpha})$ or $I$(SXR).
Properties of the O VI-Bearing Gas\[sec\_discussion\]
=================================================
Measurements of O VI emission and absorption can be combined to provide valuable diagnostics of the O VI-bearing gas, so long as both probe the same interface or transition region. For a particular region, the O VI intensity scales as $n_e^2 L$, where $n_e$ is the electron density and $L$ the path length through the region, while the O VI column density scales as $n_e L$. From the ratio $I$(O VI)$/N$(O VI), we can derive the electron density of the plasma [@Shull:Slavin:94] and, assuming an oxygen abundance, the path length through it. This calculation assumes that the density of the region is uniform. Despite this simplification, our results should be correct to within an order of magnitude.
A typical sight line through the Galaxy could intersect several O VI-bearing regions, each of which would contribute differently to the total column density and intensity. Because we lack the spectral resolution to isolate the contributions of individual regions, we must identify cases in which the integrated column density and intensity can be attributed to a single region. The likelihood that absorption and emission sight lines probe the same region is highest for nearby stars: at a distance of 100 pc, the 3.5 arcmin offset between the [[*FUSE*]{}]{} LWRS and MDRS apertures corresponds to $\sim$ 0.1 pc. Moreover, sight lines to nearby stars are more likely to intersect only a single interface or transition region than sight lines to distant objects. We discuss one such sight line in §\[sec\_white\_dwarfs\]. For more distant emitting regions, such as the HVCs discussed in §\[sec\_hvc\], we equate the absorbing and emitting gas based on their velocities and consider the range of reported column densities for the HVC.
O VI-Bearing Gas in the Galactic Disk\[sec\_white\_dwarfs\]
-------------------------------------------------------
Four of our 3[$\sigma$]{} sight lines correspond to nearby white dwarfs in the absorption-line survey of @Savage:06. Their absorption-line measurements and our emission-line velocities are presented in Table \[tab\_lehner\]. (Note that the velocities in Table \[tab\_lehner\] are heliocentric.) The best case in our sample for combining emission and absorption measurements is sight line P20411. The velocity of its O VI emission ($v_{\rm Helio} = -6 \pm 15$ [km s$^{-1}$]{}) is consistent with that of the O VI absorption [$v_{\rm Helio} = -3.8\pm3.6$ [km s$^{-1}$]{}; @Savage:06] measured toward WD0004$+$330 (GD 2), a DA white dwarf at a distance of 97 pc [@Vennes:97]. The relative narrowness of the emission feature (the best-fit value of the intrinsic Gaussian FWHM is $60 \pm 40$ [km s$^{-1}$]{}) suggests that it probes a single emitting region. P20411 was observed with the MDRS aperture centered on the star, so the LWRS aperture probes a sight line passing within $\sim$ 0.1 pc of the white dwarf.
The spectrum of GD 2 shows absorption from molecular hydrogen with a column density $\log N($[H$_2$]{}) = 14.46 [cm$^{-2}$; @Lehner:03]. Since [H$_2$]{} is assumed to form on dust grains, we must consider the possibility of dust extinction along this line of sight. To estimate the extinction toward this star, we compare it with HZ 43, another nearby DA white dwarf that shows no [H$_2$]{} absorption. @Finley:97 derive effective temperatures of 49,360 and 50,822 K for GD 2 and HZ 43, respectively. If their temperatures are nearly equal, then any difference in their colors is likely due to reddening toward GD 2. These $B-V$ colors are $-0.29$ and $-0.31$ magnitudes for GD 2 and HZ 43, respectively [@Eggen:68; @Bohlin:01], meaning that $E(B-V)$ = 0.02 toward GD 2, which corresponds to an attenuation of 30% at 1032 Å [@Fitzpatrick:99]. More recent analyses suggest that GD 2 is somewhat cooler than HZ 43 (for example, @Barstow:03 derive temperatures of 45,460 and 46,196 K from the star’s Balmer and Lyman lines, respectively), which would explain the color difference without invoking dust, so we will treat this reddening as an upper limit.
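The reddening arithmetic can be verified directly. The far-UV extinction coefficient at 1032 Å is not stated above, so the value used here ($\approx 19.4$ mag per unit reddening, chosen to be consistent with the quoted 30% attenuation for a Fitzpatrick-type curve) is an assumption:

```python
# Color-difference and attenuation arithmetic for GD 2 vs. HZ 43.
color_gd2, color_hz43 = -0.29, -0.31
ebv = color_gd2 - color_hz43  # E(B-V) = 0.02 mag

# FUV extinction at 1032 A. The coefficient A(1032)/E(B-V) is not
# given in the text; ~19.4 mag is our assumption, chosen to match the
# quoted 30% attenuation for a Fitzpatrick (1999)-style curve.
K_1032 = 19.4
transmitted = 10.0 ** (-0.4 * K_1032 * ebv)
attenuation_pct = 100.0 * (1.0 - transmitted)  # ~30%
```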
The observed $\lambda$1032 intensity is 2100 LU and the absorbing column is $N$(O VI) = $6.2 \times 10^{12}$ cm$^{-2}$. We assume that the intrinsic intensity of the $\lambda$1038 emission is half that of the 1032 Å line, as would be expected if the gas were optically thin. Assuming a temperature of $2.8\times10^5$ K, Equation 5 from @Shull:Slavin:94 yields an electron density $n_{\rm e}=0.22$ cm$^{-3}$ if $E(B-V)$ = 0.00 and $n_{\rm e}=0.29$ cm$^{-3}$ if $E(B-V)$ = 0.02. The absorption line has a Doppler parameter $b \sim 30$ [km s$^{-1}$]{}; if thermal, it implies a temperature of $4.2 \times 10^5$ K, which does not change the derived electron density.
To calculate the O VI density and the path length through the emitting region, we need the oxygen abundance and the fraction of oxygen in O$^{+5}$. @Oliveira:05 derive a mean O/H ratio for the Local Bubble of $(3.45 \pm 0.19) \times 10^{-4}$. For plasmas in collisional ionization equilibrium, the O$^{+5}$ fraction peaks at 22% when the gas temperature is $2.8 \times 10^5$ K [@Sutherland:Dopita93]. With these values, and assuming that the gas is completely ionized ($n_{\rm e} = 1.2 n_{\rm H}$), we derive an O VI density of $1.4 \times 10^{-5}$ cm$^{-3}$ and a path length through the gas of $\sim$ $4.4 \times 10^{17}$ cm or 0.14 pc for zero reddening. If $E(B-V)$ = 0.02, the O VI density and the path length through the gas become $1.8 \times 10^{-5}$ cm$^{-3}$ and 0.11 pc, respectively.
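These numbers follow step by step from the quantities quoted above; the short check below reproduces the zero-reddening case:

```python
# Reproduce the quoted O VI density and path length for the P20411 /
# GD 2 sight line (zero-reddening case), using only values stated
# in the text.
N_OVI = 6.2e12   # absorbing O VI column, cm^-2
n_e = 0.22       # electron density from I/N, cm^-3
O_H = 3.45e-4    # Local Bubble O/H (Oliveira et al. 2005)
F_O5 = 0.22      # peak O+5 ion fraction in CIE at 2.8e5 K
PC_CM = 3.086e18

n_H = n_e / 1.2           # fully ionized gas: n_e = 1.2 n_H
n_OVI = n_H * O_H * F_O5  # ~1.4e-5 cm^-3
L_cm = N_OVI / n_OVI      # ~4.4e17 cm
L_pc = L_cm / PC_CM       # ~0.14 pc
```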
@Savage:06 report that the O VI absorption feature in the spectrum of GD 2 is well detected (4.8[$\sigma$]{}) and closely aligned in velocity with the low-ionization absorption, consistent with formation in a condensing interface between the cool gas traced by the low ions and a hot exterior gas. @Bohringer:87 model conductive interfaces around spherical interstellar clouds embedded in a hot interstellar medium. One of their models (model H) assumes a cloud of radius $3 \times 10^{17}$ cm in an external medium of temperature $5 \times 10^5$ K. The model predicts a particle density at the cloud surface of $n_0 = 0.73$ cm$^{-3}$ (our mean cloud density is $n = 0.40$ cm$^{-3}$), a column density through the interface region of $N$(O VI) = $3.2 \times 10^{12}$ cm$^{-2}$, and a mean temperature for the O$^{+5}$ ions of $4.7 \times 10^5$ K. These predictions are within a factor of 2 of our derived cloud parameters. The high temperature is a general feature of the @Bohringer:87 models.
Another white-dwarf sight line worthy of comment is P10411 (WD0455$-$282). While the velocities of its principal O VI emission and absorption components disagree, @Holberg:98 report absorption in [[*IUE*]{}]{} spectra of this star at $v_{\rm Helio} = 16.21 \pm 2.66$ [km s$^{-1}$]{}, and @Savage:06 report weak O VI absorption at the same velocity. @Holberg:98 argue that this absorption is circumstellar, rather than interstellar. If so, the O VI emission that we observe at $v_{\rm Helio} = 7 \pm 9$ [km s$^{-1}$]{} may come from this circumstellar material. With an effective temperature $T_{\rm eff}$ = 57,200 K, the white dwarf is too cool to produce O$^{+5}$ through photoionization, so the emission must be powered by shocks in the circumstellar material, perhaps generated by the interaction of material from previous episodes of mass loss. This mechanism is at work in planetary nebulae [@Villaver:02], and high-ionization lines have been observed in the spectra of low-$T_{\rm eff}$ central stars of planetary nebulae (J. Herald, private communication).
O VI-Bearing Gas in High-Velocity Clouds\[sec\_hvc\]
------------------------------------------------
Two of our high-velocity sight lines, B12901 and S40549, intersect known HVCs and share the clouds’ velocities (§ \[sec\_velocity\]). Data set B12901 consists of spectra obtained along three closely-spaced sight lines that probe the Magellanic Stream, but only one of them, I2050501/I2050510, exhibits significant $\lambda 1032$ emission (§ \[sec\_B12901\]). Its LSR velocity is $206 \pm 13$ [km s$^{-1}$]{}, and its intensity is $3000 \pm 600$ LU (Table \[tab\_b12901\]). The O VI column densities of HVCs associated with the positive-velocity portion of the Magellanic Stream range from $\log N$(O VI) = 13.78 to 14.33. Their velocities range from 183 to 330 [km s$^{-1}$]{} with a mean of 232 [km s$^{-1}$]{} [@Sembach:03]. Combining these absorption and emission measurements, we derive an electron density for the O VI-bearing gas of 0.01–0.03 cm$^{-3}$. Adopting the LMC oxygen abundance [$2.24 \times 10^{-4}$ O atoms per H atom; @Russell:92], we find a path length through the emitting gas of 16–200 pc.
Sight line S40549, with $v_{\rm LSR} = -172 \pm 9$ [km s$^{-1}$]{} and $I(1032) = 8200 \pm 2300$ LU, probes Complex C, a large assembly of high-velocity ($-170 \la v_{\rm LSR} \la -100$ [km s$^{-1}$]{}) gas in the northern Galactic sky between $l \sim 30\degr$ and $l \sim 150\degr$. Measured O VI column densities for sight lines through Complex C range from $\log N$(O VI) = 13.67 to 14.22 [@Sembach:03]. From these values, we derive an electron density for the O VI-bearing gas of 0.03–0.11 cm$^{-3}$. Assuming a Galactic O/H ratio of $6.61 \times 10^{-4}$ [@Allen:73], we find a path length through the emitting gas of 1.1–14 pc. The color excess toward S40549 is $E(B-V)$ = 0.02 (Table \[tab\_detections\]), which attenuates emission at 1032 Å by 30% [@Fitzpatrick:99]. Correcting for this attenuation raises the electron density and reduces the path length through the emitting gas by the same factor. We expect the extinction along sight line B12901 to be similar, as the reddening in this region is patchy and quite low along nearby sight lines (§ \[sec\_B12901\]).
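The Complex C path lengths can be approximately reproduced under the same assumptions used for the disk gas, namely complete ionization ($n_{\rm e} = 1.2\,n_{\rm H}$) and a peak O$^{+5}$ fraction of 0.22; neither is restated for the HVCs above, so both are assumptions here. Small differences from the quoted values reflect rounding in the quoted electron densities.

```python
# Approximate the quoted 1.1-14 pc path-length range through Complex C.
# Assumptions (carried over from the disk-gas calculation, not
# restated for the HVCs): fully ionized gas (n_e = 1.2 n_H) and a
# peak O+5 fraction of 0.22.
O_H = 6.61e-4  # Galactic O/H (Allen 1973)
F_O5 = 0.22
PC_CM = 3.086e18

def path_length_pc(log_N, n_e):
    """Path length from O VI column and electron density."""
    n_OVI = (n_e / 1.2) * O_H * F_O5
    return 10.0 ** log_N / n_OVI / PC_CM

L_short = path_length_pc(13.67, 0.11)  # ~1.1 pc
L_long = path_length_pc(14.22, 0.03)   # ~15 pc
```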
The emitting regions probed by sight lines B12901 and S40549 appear to span an order of magnitude in $n_e$ and two orders of magnitude in path length. While this spread may reflect real variations in the properties of thick disk/halo gas, we should note that S40549 is one of the shortest exposures in our sample. Its $\lambda 1032$ feature is both extremely narrow (FWHM = $2 \pm 20$ [km s$^{-1}$]{}) and unusually bright. Careful analysis confirms that this feature is statistically significant at the 3[$\sigma$]{} level; however, additional exposure time would be helpful to confirm this result.
Limits on the Gas Density\[sec\_heckman\]
-----------------------------------------
Along sight lines through the Galactic disk, most of the O VI column densities measured to date lie in the range $\log N$(O VI) = 12.6 to 14.0, and along sight lines through the Galactic halo, in the range $\log N$(O VI) = 13.7 to 14.7 [@Savage:03]. Excluding emission associated with known SNRs and the short S40549 exposure discussed in § \[sec\_hvc\], our $\lambda 1032$ intensities also span a narrow range, from 1800 to 5500 LU. (The lower bound reflects our sensitivity limits.) The restricted range of observed column densities, together with the narrow range of our measured intensities, suggests that the volume densities of the O VI-bearing gas are likewise limited. We have found two flavors of O VI-bearing gas: narrow interfaces in the Galactic disk with densities of about 0.1 cm$^{-3}$ and more extended cooling regions in the Galactic halo with densities of about 0.01 cm$^{-3}$. We argue that, in general, the volume densities of O VI-bearing gas in the Galactic disk and halo are unlikely to differ significantly from our derived values. Indeed, the observed range of intensities is consistent with a constant volume density in each environment, with column density as the only variable. In particular, our observations rule out the general presence of O VI-bearing thermal interfaces or cooling regions with densities of 1 to 10 cm$^{-3}$ or greater.
Our observations do not rule out the presence of hot, low-density gas with significant O VI column densities. @Heckman:02 have shown that, for radiatively cooling gas, the O VI column density at a temperature of $2 \times 10^6$ K is comparable with that at $3 \times 10^5$ K because, though the O$^{+5}$ fraction is smaller at the higher temperature, the cooling times are much longer. The O VI emission from such regions would be too weak for detection by [[*FUSE*]{}]{} because their emission measure would be too low.
Summary\[sec\_summary\]
=======================
We have conducted a survey of diffuse O VI $\lambda 1032$ emission in the Galaxy using archival data from the [*Far Ultraviolet Spectroscopic Explorer (FUSE).*]{} Of our 183 sight lines, 29 show O VI emission at 3[$\sigma$]{} significance. Measured intensities range from 1800 to 9100 LU, with a median of 3300 LU. An additional 35 sight lines provide upper limits of 2000 LU or less. Though the presence of O VI emission along low-latitude, high-extinction sight lines suggests that these emitting regions are nearby (probably within a few hundred parsecs), other emitting regions are more likely to be associated with HVCs in the Galactic halo.
Analysis of 21 low-velocity, low-intensity, 3[$\sigma$]{} O VI emission features reveals that the O VI-emitting regions at high latitudes are intrinsically fainter than those at low latitudes and may represent a distinct population of emitters. Line velocities are generally consistent with a simple model of Galactic rotation. Some of the O VI-emitting regions appear to have angular sizes smaller than the [[*FUSE*]{}]{} LWRS aperture, which places a distance-dependent constraint on their physical size. By combining emission and absorption measurements through the same O VI-bearing regions, we find evidence for relatively narrow, high-density conductive interfaces in the local ISM and more extended, low-density regions in HVCs. Based on the narrow range of O VI intensities in our sample and of O VI column densities in the Galactic disk and halo, we argue that the volume densities of O VI-bearing regions in each environment are unlikely to differ significantly from our derived values.
The authors thank Ricardo Velez for his assistance with the initial data reduction for this project. We acknowledge with gratitude the ongoing efforts of the [[*FUSE*]{}]{} P.I. team to make this mission successful. R. S. thanks J. Cuervo for interesting discussions about statistical methods and for help with their implementation in STATA. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration; the NASA Astrophysics Data System (ADS); and the Multimission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NAG5-7584 and by other grants and contracts. This work is supported by NASA grant NAS5-32985 to the Johns Hopkins University.
Notes on Individual Lines of Sight
==================================
P12011 and P12012 (Jupiter)\[sec\_Jupiter\]
-------------------------------------------
Our sample includes two observations of Jupiter (P12011 and P12012) designed to search for HD fluorescently pumped by solar Lyman $\beta$ emission. In both cases, the MDRS aperture was centered on the planet, and the LWRS aperture was offset by 3.5 arcmin (or 10.3 Jupiter radii) in a direction perpendicular to the plane of the Jovian system. No O VI emission is present in the P12012 spectrum ($\alpha$ = 07:06:57.55, $\delta$ = +22:24:10.1, J2000), but a 3[$\sigma$]{} feature is seen in P12011 ($\alpha$ = 07:06:36.64, $\delta$ = +22:24:32.0). The heliocentric velocity of the P12011 feature is $-0.2 \pm 25$ [km s$^{-1}$]{}, suggesting that the emission may have a local origin. If it were scattered solar O VI emission, we would expect to see emission from C III $\lambda 977$ as well, but none is present in the SiC 2A spectrum from this observation. Another possibility is that the observed emission is due to [H$_2$]{} fluorescence, but comet spectra that show strong [H$_2$]{} fluorescence near 1032 Å also exhibit a strong 1163.7 Å line (P. D. Feldman 2005, private communication), which is not seen in the LiF 1B spectrum from this observation. We conclude that the observed O VI is of interstellar origin.
B12901\[sec\_B12901\]
---------------------
The [[*FUSE*]{}]{} observations of sight line B12901 were originally designed as a shadowing experiment to search for O VI emission arising in the Local Bubble [@Shelton:03]. This sight line intersects a diffuse interstellar filament at a distance of $230 \pm 30$ pc [@Penprase:98], well beyond the $\sim$ 100 pc radius of the Local Bubble [@Snowden:98]. As its mean color excess is $E(B-V) = 0.17 \pm 0.05$ [@Penprase:98], it was assumed that any O VI emission would have to come from material closer than the filament, presumably the Local Bubble itself.
We detect an O VI $\lambda 1032$ emission feature of some 3[$\sigma$]{} significance in this spectrum, but with a velocity $v_{\rm LSR} = 192 \pm 19$ [km s$^{-1}$]{}, the emission is unlikely to originate in either the Local Bubble or the intervening filament. O VI $\lambda 1032$ absorption features in the spectra of nearby white dwarfs exhibit velocities $|v| < 40$ [km s$^{-1}$]{} [relative to the ISM as defined by the C II $\lambda 1036$ line; @Oegerle:05], and neutral gas in the filament moves at $0 < v_{\rm LSR} < 10$ [km s$^{-1}$]{} [@Penprase:98]. @Penprase:98 find that the filament is patchy, with $E(B-V) \leq 0.01$ toward some background stars. Maps of the high-velocity sky published by @Sembach:03 and @Wakker:03 reveal that sight line B12901 probes an HVC with $v_{\rm LSR} \sim 200$ [km s$^{-1}$]{} that is associated with the Magellanic Stream. We conclude that the observed emission is produced by O VI-bearing gas associated with the HVC.
Data set B12901 consists of five observations (Table \[tab\_data\]) along three closely-spaced sight lines. Since the filament is patchy, we searched for O VI emission along each sight line separately, using the technique described in § \[SEC\_MEASUREMENTS\]. Our results are presented in Table \[tab\_b12901\]: we find a strong $\lambda 1032$ feature in the combined I2050501/I2050510 spectrum and can set only upper limits on emission in the I2050601 and combined B1290101/B1290102 spectra. The two I205 sight lines are separated by approximately $30\arcsec$. The strong variation in O VI intensity over such small angular scales is consistent with a patchy distribution of extinction in the filament.
In her original analysis, @Shelton:03 found no evidence for emission in these data and placed a 2[$\sigma$]{} upper limit of 530 LU on the intensity of the $\lambda 1032$ line. Excluding the observed $\lambda 1032$ emission feature, we derive a 2[$\sigma$]{} upper limit of 600 LU for the full B12901 data set. Shelton’s conclusions about the physical conditions within the Local Bubble are therefore unchanged by our result.
[llcccccc]{} P20411 & WD0004$+$330 & 97 &$-3.8 \pm 3.6 $ &$ 21.3 \pm 4.1 $ & $ 12.79 \pm 0.09$ && $-6 \pm 15$\
P10411 & WD0455$-$282 & 102 &$-23.6 \pm 4.6 $ &$ 30.1 \pm 7.4 $ & $ 13.42 \pm 0.07$ && $7 \pm \phn9$\
P10429 & WD1631$+$781& 67 &$-16.4 \pm 5.1 $ & & $ 12.52 \pm\,^{0.12}_{0.17} $ && $-64 \pm 40$\
P20422 & WD2004$-$605 & 58 &$-23.2 \pm 6.4 $ & & $ 13.00 \pm 0.10 $ && $-6 \pm \phn8$
[lccccccc]{} I2050501, I2050510 & 278.58 & $-45.31$ & 46386 & $3.0 \pm 0.6$ & $1032.69 \pm 0.05$ & $100 \pm 20$ & $206 \pm 13$\
I2050601 & 278.59 & $-45.30$ & 11120 & $< 2.2$ & & &\
B1290101, B1290102 & 278.63 & $-45.31$ & 26073 & $< 1.5$ & & &
[*Fig. 2. — Continued.*]{}
[*Fig. 3. — Continued.*]{}
---
abstract: 'We show how to compute a certain group $\H^2_{\ell}(G)$ of equivalence classes of invariant Drinfeld twists on the algebra of a finite group $G$ over a field $k$ of characteristic zero. This group is naturally isomorphic to the second lazy cohomology group $\H_{\ell}^2(\OO_k(G))$ of the Hopf algebra $\OO_k(G)$ of $k$-valued functions on $G$. When $k$ is algebraically closed, the answer involves the group of outer automorphisms of $G$ induced by conjugation in the group algebra as well as the set of all pairs $(A, b)$, where $A$ is an abelian normal subgroup of $G$ and $b : \widehat{A} \times \widehat{A} \to k^{\times}$ is a $k^\times$-valued $G$-invariant non-degenerate alternating bilinear form on the dual $\widehat{A}$. When the ground field $k$ is not algebraically closed, we use algebraic group techniques to reduce the computation of $\H_{\ell}^2(G)$ to a computation over the algebraic closure. As an application of our results, we compute $\H^2_{\ell}(G)$ for a number of groups.'
address: |
Université de Strasbourg & CNRS\
Institut de Recherche Mathématique Avancée\
7 Rue René Descartes\
67084 Strasbourg, France
author:
- Pierre Guillot and Christian Kassel
title: |
Cohomology of invariant Drinfeld twists\
on group algebras
---
Acknowledgements {#acknowledgements .unnumbered}
================
The present work is part of the project ANR BLAN07-3$_-$183390 “Groupes quantiques : techniques galoisiennes et d’intégration" funded by Agence Nationale de la Recherche, France.
We are grateful to Eli Aljadeff and Julien Bichon for several useful discussions.
One of the starting points of this work was a series of computer calculations, giving in particular $\H^2_\ell(A_4) = \ZZ/2$ and $\H^2_\ell(Q_8) = 1$ directly from the definition. Computers have also proved useful on several other occasions. We have relied exclusively on the open-source SAGE Computer Algebra Package, and would like to thank its creators for keeping this wonderful software freely available.
---
abstract: '[ Active galactic nuclei (AGN) are among the most powerful sources, with inherent, pronounced and random variations of brightness. The randomness of their time series is so subtle as to blur the border between aperiodic fluctuations and noisy oscillations. This poses challenges for the analysis of such time series, because neither visual inspection nor pre-existing methods can reliably identify oscillatory signals in them. Thus, there is a need for an objective method for periodicity detection. Here we review our new data-analysis method, which combines two-dimensional (2D) correlation of time series with the powerful methods of Gaussian processes. To demonstrate the utility of this technique, we apply it to two example problems which have not been exploited enough: artificial time series of damped sinusoids corrupted by red noise, mimicking AGN time series, and newly published observed time series of the changing look AGN (CL AGN) NGC 3516. The method successfully detected periodicities in both types of time series. The identified periodicity of $\sim 4$ yr in NGC 3516 allows us to speculate that, if a thermal instability formed in its accretion disc (AD) on a time scale resembling the detected periodicity, the AD radius could be $\sim 0.0024$ pc. ]{}'
author:
- 'Andjelka B. Kova[č]{}evi[ć]{}'
- 'Luka [Č]{}. Popovi[ć]{}'
- 'Dragana Ili[ć]{}'
title: 'Two-dimensional correlation analysis of periodicity in active galactic nuclei time series'
---
Introduction
============
Active galactic nuclei (AGNs) vary on time-scales ranging from minutes and hours to years over the entire electromagnetic spectrum, with no apparent indications of periodicities. Nevertheless, in recent years an increasing number of reports on AGN periodicities have been published [see e.g. @2015MNRAS.453.1562G; @2015Natur.518...74G; @2016ApJ...833....6L; @2016MNRAS.463.2145C; @2018MNRAS.475.2051K; @doi.org/10.3847/1538-4365/ab0ec5; @10.3847/1538-4357/aaf731], suggesting that supermassive binary black holes might be detected through periodicity of their observed time series (i.e. light curves, [see @10.1088/1361-6382/ab0587 and references therein]).
Establishing a method for the detection of recurrent patterns in AGN time series is an important step towards this goal. Many methods have been designed for estimating such periodicity [for an excellent review see @doi.org/10.1093/mnras/stt1206]. These methods share a number of commonalities, as well as differences. Most of them are some variant of Fourier analysis [@doi.org/10.3847/1538-4365/aab766], which carries restrictive assumptions: equally spaced observations, stationarity of the time series, and homoscedastic Gaussian noise with purely periodic (i.e. sinusoidal) signals. Wavelet analysis does not assume stationarity and is therefore able to detect amplitude and period changes over time. However, in all Fourier-based methods the peaks indicating periodicity can overlap. Moreover, the Fourier transform, the wavelet transform and related period-estimation techniques cannot tell whether changes among signals are coordinated or independent, nor what the relative directions of the signal intensity variations are. Our hybrid method based on two-dimensional (2D) correlation analysis was devised to deal with the above issues [@2018MNRAS.475.2051K].
We aim to further illustrate the performance and application of this 2D hybrid method on synthetic data, where the results can be judged carefully, and also on observed data, where new insights can be gained. We present computations of 2D correlation maps of damped, red-noise-corrupted artificial time series and of the newly published long-term monitored time series of the changing look (CL) AGN NGC 3516 [@2019MNRAS.485.4790S]. There is some indication that better-sampled AGN light curves can be modeled as a damped harmonic oscillator perturbed by coloured noise [@doi.org/10.1093/mnras/stx1420]. CL AGNs are objects showing dramatic variability of the emission-line profiles and a change of classification type within a very short time interval (from days to years). Periodic variability has been discussed for some well-known CL AGNs such as NGC 4151 [see @doi.org/10.1088/0004-637X/759/2/118; @2018MNRAS.475.2051K] and NGC 5548 [see @2016ApJS..225...29B; @2018MNRAS.475.2051K] within the context of supermassive binary black hole candidates, and pointed out as a possibility for the typical CL AGN NGC 2617 [see @Okn18]. It would be interesting to know whether CL AGN variability is periodic, since it can be a consequence of tidal disruption events [@doi.org/10.1093/mnras/stw2130] or of a recoiling supermassive black hole [@doi.org/10.3847/1538-4357/aac77d].
Data and Method
===============
To demonstrate the utility of this technique, we apply it to two example problems which have not been exploited enough.
Interestingly, [@doi.org/10.1093/mnras/stx1420] estimated that the availability of better-sampled data in the future will necessitate more sophisticated models for AGN light curves, such as a damped harmonic oscillator perturbed by coloured noise. Thus, we synthesized an artificial damped sinusoidal signal corrupted by red noise. Figure \[fig1\] (left panels) shows both a damped sinusoid and a normal sinusoid of period 125 arbitrary units \[a.u.\]. The normal sinusoid is symmetric with respect to the time axis, but by introducing damping we break this symmetry. Although the symmetry can be broken in ways that make it difficult to recognize or reconstruct the periodicity, it is there nonetheless. And, of course, the more dramatic and complex the nature and magnitude of the damping, the more complex the task of identifying the original periodicity (symmetry). We perturbed both signals with red noise (see right panels) so that the original signal patterns can no longer be recognized. A precise mathematical description of the red-noise corruption of a signal is given in [@2018MNRAS.475.2051K].
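A minimal recipe for generating such test signals is sketched below; the damping time of 500 a.u. and the noise amplitude are our choices, since the text fixes only the 125 a.u. period. Red noise is approximated here by a normalized random walk, whose power spectrum falls as $1/f^2$:

```python
import numpy as np

# Synthesize the two test signals: a pure sinusoid and a damped
# sinusoid of period 125 a.u., each corrupted by red noise. The
# damping time (500 a.u.) and the noise amplitude 0.3 are our
# illustrative choices; only the period comes from the text.
rng = np.random.default_rng(0)
t = np.arange(1000.0)
PERIOD = 125.0

pure = np.sin(2.0 * np.pi * t / PERIOD)
damped = np.exp(-t / 500.0) * pure

def red_noise(n, amp=0.3):
    """Red noise as a normalized random walk (power ~ 1/f^2)."""
    w = np.cumsum(rng.normal(size=n))
    return amp * (w - w.mean()) / w.std()

pure_noisy = pure + red_noise(t.size)
damped_noisy = damped + red_noise(t.size)
```

The damping broadens the spectral line of the clean signal but leaves its peak at the original period, which is why the periodicity remains recoverable in principle even when the symmetry is visibly broken.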
![Synthetic light curves. A damped sinusoid (upper left) and its red-noise-corrupted form (upper right); the same as above but for a pure sinusoid (bottom left and right). Time and flux are given in arbitrary units. \[fig1\]](sinusoid2.pdf){width="0.9\linewidth"}
In a recent study of the CL AGN NGC 3516 [@2019MNRAS.485.4790S], the authors applied the Lomb-Scargle periodogram in order to detect periodicity in the observed light curves of this object. However, the potential periodic signals were of low significance. Thus, we applied our method to the long-term monitored continuum, H$\alpha$ and H$\beta$ fluxes covering 22 yr (from 1996 to 2018). The data are presented and carefully described in [@2019MNRAS.485.4790S] (see their Figure 5), so we will not repeat the details here.
Here we recapitulate the key aspects of our hybrid method, which is discussed in detail in [@2018MNRAS.475.2051K]. The conversion of a set of light curves into 2D correlation maps is relatively easy and provides rich information about the presence of coordinated or independent signals, as well as the relative directions of signal variations. Notable features of our 2D hybrid method are: simplification of complex spectra consisting of many overlapping peaks, enhancement of the apparent spectral resolution by spreading peaks over the second dimension, and determination of the direction of changes in a signal through correlation coefficients. Some generic properties of a correlation map are marked in Figure \[fig2\]. Our hybrid method produces a contour map of correlation intensity on a period plane defined by two independent period axes corresponding to the two light curves. Peaks on the main diagonal represent simultaneous changes in the signals at the same period. Cross peaks located at off-diagonal positions represent simultaneous changes in the signals at two different periods; however, we have never observed such cross peaks in the objects we have analyzed so far. A positive correlation value means that the two periodic signals increase or decrease together in the same direction.
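A minimal sketch of the map construction, in the spirit of generalized 2D correlation analysis, is given below. Here the inputs are assumed to be sets of periodograms computed from segments of the two light curves, sampled on a common period grid; the actual hybrid procedure (including how the periodograms are obtained) is the one described in [@2018MNRAS.475.2051K], so this is only an illustration of the correlation-map step.

```python
import numpy as np

def synchronous_corr_map(spec1, spec2):
    """Synchronous 2D correlation-intensity map.

    spec1, spec2 : (m, n) arrays of m periodograms (one per light-curve
    segment) sampled on n common period bins.  Entry (i, j) measures how
    the power at period i in curve 1 co-varies with the power at period
    j in curve 2 across segments.
    """
    d1 = spec1 - spec1.mean(axis=0)   # mean-center each period bin
    d2 = spec2 - spec2.mean(axis=0)
    m = spec1.shape[0]
    return d1.T @ d2 / (m - 1)        # (n, n) map; diagonal = shared periods

# Toy example: both curves share a coherently varying signal in period
# bin 5, so the map should peak on the main diagonal at (5, 5).
rng = np.random.default_rng(1)
m, n = 60, 20
common = np.sin(np.linspace(0.0, 4.0 * np.pi, m))
spec1 = 0.01 * rng.normal(size=(m, n))
spec2 = 0.01 * rng.normal(size=(m, n))
spec1[:, 5] += common
spec2[:, 5] += common
cmap = synchronous_corr_map(spec1, spec2)
```

Peaks on the main diagonal of `cmap` correspond to periods at which the two curves vary together, while off-diagonal cross peaks would flag correlated changes at two different periods.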
![A schematic example of 2D correlation map. Signals present in light curve 1 and 2 are given on the top and right of the map. Cross peaks off diagonal are not detected in AGN light curves.\[fig2\]](ILUSTRACIJA.pdf){width="0.9\linewidth"}
Results and Discussion
----------------------
Firstly, we present a 2D correlation map (see Figure \[fig3\]) of a pair of synthetic curves (a sinusoid and damped sinusoid of period 125 \[a.u\] corrupted with red noise, see right column of Figure \[fig1\]) to illustrate this approach for establishing the presence of common periodicities.
Because there is no overlap in this map, the peaks can be readily assigned to specific periodic signals. The peak appearing at the lower-left diagonal position of the 2D map indicates a clear positive correlation at the period of $122\pm 26$ \[a.u.\] with a correlation coefficient $>0.8$. However, the peak in the upper-right corner is related to the Nyquist frequency and is not relevant. Clearly, the width of the waves in the damped sinusoid changes over time, and they become wider than the waves of the pure sinusoid. As a consequence, the signals are not in perfect phase. Because of this, very weakly correlated positive ($\sim 250$ \[a.u\]) and negative ($\sim 500$ \[a.u\]) islands appear between the lower-left and upper-right correlation islands.
![A 2D correlation map of the sinusoidal (y1) and damped sinusoidal (y2) signals, with a matching period of 125 \[a.u\], corrupted with random noise. The peak at the lower left of the diagonal indicates a period of $122\pm 26$ \[a.u\]. Due to the change in the width of the waves in the damped sinusoid, the periodic signals are not in perfect phase or antiphase, which is indicated by the presence of the Nyquist period in the upper right corner and two weakly correlated islands in the middle ($\sim 250$ \[a.u\] and $\sim 500$ \[a.u\]).\[fig3\]](perioDnoised.pdf){width="0.9\linewidth"}
Furthermore, we applied our hybrid method to the long-term monitored continuum, H$\alpha$ and H$\beta$ light curves of the CL AGN NGC 3516. Our long-term observations cover 22 years; however, due to a lack of data after 2007, here we used only the 10-year-long part of the light curve up to 2007 (MJD 54500). The resulting 2D correlation maps are shown in Figures \[fig4\] and \[fig5\], allowing effortless identification of links between periodicities in the continuum and H$\alpha$, as well as the continuum and H$\beta$, respectively.
![A 2D correlation map of continuum and H$\alpha$ emission line of NGC 3516, with matching period of $1580\pm 743$ days. Upper right peak corresponds to whole period covered by observations.\[fig4\]](periodHA.pdf){width="90.00000%"}
Unexpectedly, strong relationships are present at periods of $1580\pm743$ days (i.e. $4.32\pm 2.04$ yr [compare to Figure 11 in @2019MNRAS.485.4790S]) and $1385\pm 128$ days (i.e. $3.8\pm 0.4$ yr, see Figure \[fig5\]). The calculated correlation maps provide an even clearer picture of the time-dependent changes of periodicities in the light curves than the Lomb periodogram [compare to Figure 11 in @2019MNRAS.485.4790S].
![The same as Figure \[fig4\] but for continuum and H$\beta$ emission line of NGC 3516. Detected period is $1385\pm 128$ days.\[fig5\]](periodHB.pdf){width="90.00000%"}
These periodicities are similar to each other, indicating that the continuum, H$\alpha$ and H$\beta$ fluctuate with similar periodicity characteristics. It seems that the periodicity in the continuum triggers similar variations in the H$\alpha$ and H$\beta$ emission lines.
A perturbed AGN accretion disc can be used as an explanation of the continuum flux and the corresponding spectral variability. It is believed that such perturbations induce a thermal instability in the disc, whose time scale is [see @2008ApJ...677..884L] $$t_\mathrm{th}=5\,\frac{0.1}{\alpha}\,M_\mathrm{BH}[10^{8} M_{\odot}]\left(\frac{r_\mathrm{d}}{10^{3}\, r_\mathrm{g}}\right)^{3/2} \mathrm{yr},
\label{instab}$$ where $r_\mathrm{g}=GM_\mathrm{BH}/c^2$ is the gravitational radius and $r_\mathrm{d}$ is the accretion disc dimension. If the thermal instability formed in the disc of NGC 3516 on a time scale resembling the detected periodicity ($\sim 4$ yr), we can infer the disc dimension using Eq. \[instab\]. Substituting the standard value $\alpha=0.1$ [see @doi.org/10.3847/1538-4357/aac77d and references therein] and the black hole mass of NGC 3516, $M_\mathrm{BH}=4.73 \times 10^{7} M_{\odot}$ [@2019MNRAS.485.4790S], into Eq. \[instab\], and adopting the detected periodicity as the thermal time scale, we obtain an accretion disc radius of $\sim 0.0024$ pc. This agrees well with the prediction of [@doi.org/10.1051/0004-6361:20020724] for the accretion disc dimension of this object (between 0.004 pc and 0.018 pc) and with indications of dimension variability of the emitting region [see detailed discussion in @2019MNRAS.485.4790S]. In order to test the hypothesis of a recoiling supermassive black hole, the velocity offsets of the broad emission lines should be calculated and modeled [see @doi.org/10.3847/1538-4357/aac77d for a detailed explanation]. To confirm the periodicity and explain the variability of this object, we need more spectral and photometric observations covering at least another ten years. Although this study was carried out with a relatively small time coverage of the NGC 3516 time series, it indicates that NGC 3516 is an interesting object for periodicity detection, and its monitoring should continue.
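As a consistency check, the inversion of Eq. \[instab\] for the disc radius can be done numerically. The constants below are standard values; the result reproduces the quoted scale of a few $\times 10^{-3}$ pc, with the precise number depending on the adopted period and rounding.

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
pc = 3.086e16          # parsec, m

M_bh = 4.73e7 * M_sun              # NGC 3516 black-hole mass
r_g = G * M_bh / c**2              # gravitational radius, m
alpha = 0.1
t_th = 4.0                         # detected periodicity, yr

# Invert t_th = 5 (0.1/alpha) M8 (r_d / 1e3 r_g)^{3/2} yr for r_d.
M8 = M_bh / (1e8 * M_sun)
r_d = 1e3 * r_g * (t_th * (alpha / 0.1) / (5.0 * M8)) ** (2.0 / 3.0)
r_d_pc = r_d / pc                  # a few 1e-3 pc, same order as 0.0024 pc
```
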
Conclusion
==========
In the present study, we show a new application of our hybrid method for periodicity detection in AGN time series. We extended the results obtained in [@2018MNRAS.475.2051K; @10.3847/1538-4357/aaf731] via the analysis of synthetic sinusoidal and damped-sinusoidal signals corrupted with red noise, as well as the observed continuum, H$\alpha$ and H$\beta$ light curves of the CL AGN NGC 3516. Our hybrid method successfully recovered the period in the synthetic time series (i.e. light curves). Further, it detected periods of $1580\pm743$ days in the continuum and H$\alpha$, as well as $1385\pm 128$ days in the continuum and H$\beta$, of NGC 3516. Assuming the thermal instability formed in the accretion disc of NGC 3516 on a time scale resembling the detected periodicity ($\sim 4$ yr), we inferred an accretion disc radius of $\sim 0.0024$ pc. This agrees well with the accretion disc model prediction of [@doi.org/10.1051/0004-6361:20020724] for this object and with the evidence of its dimension variability in [@2019MNRAS.485.4790S].
Both experimental results show the robustness of our method, both for damped oscillations corrupted with red noise and for the complex time series of the CL AGN NGC 3516. However, in order to validate the periodicity detection in the NGC 3516 time series, a further decade of long-term monitoring is needed.
Acknowledgement. This work was presented at the 12th SCSLSA in the special session: [*Broad lines in AGNs: The physics of emission gas in the vicinity of super-massive black hole*]{} (in memory of the life and work of Dr. Alla Ivanovna Shapovalova). This work is supported by the project (176001) Astrophysical Spectroscopy of Extragalactic Objects.
[99]{}
Graham, M. J., Djorgovski, S. G., Stern, D., et al. 2015a, MNRAS, 453, 1562–1576
Graham, M. J., Djorgovski, S. G., Stern, D., et al. 2015b, Nature, 518, 74–76
Liu, T., Gezari, S., Burgett, W., et al. 2016, ApJ, 833, 6, 13pp
Charisi, M., Bartos, I., Haiman, Z., et al. 2016, MNRAS, 463, 2145–2171
Kovačević, A. B., Pérez-Hernández, E., Popović, L. Č., Shapovalova, A. I., Kollatschny, W., Ilić, D. 2018, MNRAS, 475, 2051–2066
Li, Y. R., Wang, J.-M., Zhang, Z.-X, Wang, K. et al. 2019, ApJSS, 241, 14pp
Kovačević, A. B., Popović, L. Č., Simić, S., Ilić, D., 2019, ApJ, 871, id. 32, 1–11
Barack, L., Cardoso, V., Nissanke, S., Sotiriou, T. P., Askar, A., Belczynski et al. 2019, Classical and Quantum Gravity, 36, 1–272
Graham, M. J., Drake, A. J., Djorgovski, S. G., Mahabal, A. A., Donalek, C. 2013, MNRAS, 434, 2629–2635
VanderPlas, J. T. 2018, ApJS, 236, 22pp
Shapovalova, A. I., Popović, L. Č., Afanasiev, V. L., Ilić, D., Kovačević, A. B., Burenkov, A. N., Chavushyan, V. H., Marčeta-Mandić, S. et al. 2019, MNRAS, 485, 4790–4803
Kasliwal, V. P., Vogeley, M. S., Richards, G. T. 2017, MNRAS, 470, 3027–3048
Bon, E., Jovanović, P., Marziani, P., Shapovalova, A. I., Bon, N., Jovanović, V. B., Borka, D., Sulentic, J., Popović, L. Č. 2012, ApJ, 759, 8pp
Bon, E., Zucker, S., Netzer, H., Marziani, P., Bon, N., Jovanović, P., Shapovalova, A. I., Komossa, S., Gaskell, C. M., Popović, L. Č. et al. 2016, ApJS, 225, 15pp
Oknyansky, V. L., Malanchev, K. L., Gaskell, C. M. 2018, in Proceedings of the PoS: Revisiting Narrow-Line Seyfert 1 Galaxies and Their Place in the Universe, Padova, Italy, 9–13 April 2018, 1–5
Xiang-Gruess, M., Ivanov, P. B., Papaloizou, J. C. B. 2016, MNRAS, 463, 2242–2264
Kim, D. C., Yoon, I., Evans, A. S. 2018, ApJ, 861, 1–10
Popović, L. Č., Shapovalova, A. I., Ilić, D., Burenkov, A. N., Chavushyan, V. H., Kollatschny, W., Kovačević, A. et al. 2014, A&A, 572, id.A66, 17pp
Liu, H. T., Bai, J. M., Zhao, X. H., Ma, L. 2008, ApJ, 677, 884–894
Popović, L. Č., Mediavilla, E. G., Kubičela, A., Jovanović, P. 2002, A&A, 390, 473–480
---
abstract: 'Effects of disorder on the electronic transport properties of graphene are strongly affected by the Dirac nature of the charge carriers in graphene. This is particularly pronounced near the Dirac point, where relativistic charge carriers cannot efficiently screen the impurity potential. We have studied time-dependent conductance fluctuations and magnetoresistance in graphene in the close vicinity of the Dirac point. We show that the fluctuations arise from quantum interference effects caused by scattering on impurities, and find an unusually large reduction of the relative noise power in magnetic field, possibly indicating that an additional symmetry plays an important role in this regime.'
author:
- Atikur Rahman
- Janice Wynn Guikema
- Nina Marković
title: Quantum Interference Noise Near the Dirac Point in Graphene
---
In disordered electronic systems, quantum corrections to the conductance arise due to quantum interference between paths of electrons scattered on random impurities. In the absence of a magnetic field, the electron paths that traverse the loops in a clockwise fashion interfere constructively with the counterclockwise paths through the same loops, resulting in a small change in the conductance. Specifically, the backscattered paths (the paths that return to their origin) lead to a correction to the average conductance of the system, known as weak localization (WL) [@1; @2; @3]. Magnetic field adds a different phase factor to the paths that are identical, but traversed in the opposite sense, removing the WL corrections, but one still observes the universal conductance fluctuations (UCF) as a function of magnetic field or chemical potential, which arise from adding the interference contributions from all possible paths [@4; @5]. The quantum interference contribution to the conductance also fluctuates if the impurity configuration changes over [*time*]{}, leading to time-dependent conductance fluctuations that are expected to cause $1/f$ noise [@6; @7; @8].
In graphene, the quantum interference phenomena are affected by the pseudospin and valley degrees of freedom [@9; @10]. Conservation of pseudospin precludes backscattering, suppressing WL and causing weak antilocalization (WAL) [@11], while intervalley scattering restores the WL [@12; @13; @14]. Additional effects, such as defects and corrugations, can completely suppress the quantum corrections [@15]. Depending on the carrier density and the nature of the disorder, all three regimes (WL, WAL and the suppression of quantum corrections) are observed experimentally [@16; @17]. UCF in graphene depend on the carrier density and the nature of the impurity scattering, but can also depend on the details and the geometry of the sample [@Fal'ko; @18; @Horsell]. In particular, strong intervalley scattering is found to suppress UCF [@Fal'ko; @18], in contrast to its effect on WL. However, the majority of the theoretical and experimental work on quantum corrections has focused on the regime away from the Dirac point. In the close vicinity to the Dirac point (at low doping and low temperatures), the relativistic Dirac quasiparticles are unable to screen the long-range Coulomb interactions in the usual way, altering electron-electron interactions [@Kotov].
In this work, we describe measurements of the time-dependent conductance fluctuations in graphene, focusing specifically on the low-carrier density regime near the Dirac point. We find that the $1/f$ noise is reduced in magnetic field, with a characteristic field and temperature dependence that suggests quantum interference as the origin of the noise. However, the observed relative noise reduction is twice as large as what one might expect based on the fundamental symmetry considerations and the current theoretical understanding of quantum transport in graphene.
![[**(a)**]{} False color scanning electron microscope image of a typical single layer device (SL1). The graphene flake is highlighted in green and the top gates are shown in blue. The distance between the voltage leads is typically around one micrometer. The scale bar is 3 $\mu$m long. [**(b)**]{} Raman spectra of one of the samples. The observed peak structure is characteristic for single layer graphene. [**(c)**]{} Schematic of the four probe measurement setup with external voltage probes. [**(d)**]{} Resistance as a function of top gate voltage for zero back gate voltage for sample SL1. The Dirac point, or the charge neutrality point, is located at $V_{Tg}=-0.3$ V.[]{data-label="0"}](Fig1.jpg){width="8cm"}
A scanning electron microscope image of a typical top gated device (SL1) is shown in Fig. 1(a) (see Supporting Information for fabrication details). Raman spectroscopy was used to confirm that all samples were single layer graphene, with typical results shown in Fig. 1(b). Electrical measurements were done in a 4-probe geometry, using external voltage probes [@24], as shown in the schematic in Fig. 1(c). The typical resistance as a function of top gate voltage ($V_{Tg}$) with zero back gate voltage ($V_{Bg}$) is shown in Fig. 1(d). The peak in the resistance occurs at the Dirac point, at a top gate voltage of $-0.3$ V, at which the carrier density in graphene reaches its minimum.
![[**(a)**]{} Noise power as a function of frequency (plotted on a log-log scale) for two different values of the top gate voltage. Straight line shows the 1/f dependence of the noise spectra. [**(b)**]{} Noise power (left) and resistance (right) as a function of top gate voltage in the vicinity of the Dirac point. [**(c)**]{} Normalized noise power as a function of magnetic field for a single layer graphene device. It is evident that the relative noise is reduced by a factor of four from its zero magnetic field value above a certain field. The straight line indicates the reduction of the zero-field value by a factor of four.[]{data-label="2"}](Fig2.jpg){width="8cm"}
Low-frequency noise measurements were done using the ac noise measurement technique [@25]. The measured noise power ($S_V$) showed $1/f^{\alpha}$ dependence for either gate (top or back) voltage with values of $\alpha$ close to 1 (Fig. 2(a)). We found that the noise data were highly reproducible over time at any temperature and did not depend on the direction or scan step of the gate voltage. The normalized noise power density ($= fS_V/V^2$ or $fS_I/I^2$) was found to be independent of the bias current or voltage (see Supporting information, Fig. S1), ruling out any issues due to heating by the bias current.
The normalized noise power as a function of the top gate voltage in the vicinity of the Dirac point is shown in Fig. 2(b). We find that the noise decreases upon approaching the Dirac point from both directions, reaching a minimum close to the Dirac point.
When a magnetic field is applied perpendicular to the substrate, the noise power decreases rapidly, as shown in Fig. 2(c). After reaching a characteristic value of the field, the relative noise power saturates at a value that is a factor of four smaller than the zero-field value. Assuming that the characteristic field corresponds to threading one flux quantum through a phase-coherent area of the sample, we find the phase coherence length ($L_{\phi}$) to be in the range of $200$–$300$ nm for our samples.
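The estimate follows from equating the phase-coherent area to the area threaded by one flux quantum at the characteristic field. A short numerical check, assuming a characteristic field of about 50 mT (the value quoted later in the text), is sketched below; whether one takes $h/e$ or $h/2e$ as the relevant flux quantum, the result falls within the quoted 200–300 nm range.

```python
import math

h = 6.626e-34   # Planck constant, J s
e = 1.602e-19   # elementary charge, C
B_c = 0.05      # characteristic field, T (assumed value)

# One flux quantum through a phase-coherent area L_phi^2 at B_c:
L_phi_he = math.sqrt((h / e) / B_c)         # h/e convention, ~2.9e-7 m
L_phi_h2e = math.sqrt((h / (2 * e)) / B_c)  # h/2e convention, ~2.0e-7 m
```
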
![[**(a)**]{} Schematic of pairs of electron trajectories that form closed loops. The conductance fluctuations are caused by interference between paths from A to B and those from C to D that intersect somewhere in the interior of the sample. The Diffuson contribution is shown in the upper panel: all the paths in the loop are traversed in the same sense, and the magnetic field does not introduce a relative phase factor. In the Cooperon channel, shown in the bottom panel, the magnetic field introduces a relative phase when the loop is traversed in the opposite sense. In this case, contributions from various loops no longer add in a coherent way and the quantum corrections to conductivity disappear. [**(b)**]{} Magnetoresistance as a function of magnetic field for various top gate voltages with zero back gate voltage (sample SL1). A negative magnetoresistance is observed in the vicinity of the Dirac point, with the maximum of magnetoresistance observed close to zero top gate. As the gate voltage is increased in both directions, the magnetoresistance decreases. [**(c)**]{} Relative noise power for different top gate voltages as a function of a magnetic field (sample SL1). The reduction of the relative noise power by about a factor of four is observed at zero top gate, but this factor decreases for larger values of top gate voltage. []{data-label="6"}](Fig3.jpg){width="8cm"}
The low-frequency noise was studied experimentally in single-layer graphene transistors and was suggested to be related to fluctuating charges in the vicinity of graphene [@21; @22; @23]. However, the reduction of the noise power upon application of a small magnetic field observed here is a strong indication that quantum interference effects dominate the low-frequency noise. In the case of disordered metals, the conductance fluctuations can be calculated by considering all possible trajectories that an electron can take while scattering off random impurities. Particularly important are the combinations of paths that connect two different points in a sample, as shown in Fig. 3(a). There are two contributions to the interference between paths from A to B and those from C to D: the diffuson and the cooperon. The diffuson contribution is insensitive to the magnetic field, as no relative phase is introduced between the various paths. In contrast, the magnetic field removes the cooperon contribution, reducing the number of conduction channels by a factor of two and reducing the relative noise by precisely a factor of two [@8].
In the same regime, we find negative magnetoresistance as a function of magnetic field at different gate voltages, as shown in Fig. 3(b). Negative magnetoresistance may be due to WL, but we observe it only in a narrow range of gate voltages in the vicinity of the Dirac point - the negative magnetoresistance decreases as the gate voltage is increased in both directions. The effect of the magnetic field on the noise also depends on the gate voltage, as shown in Fig. 3(c). Away from the Dirac point, the noise becomes less sensitive to the magnetic field for both positive and negative gate voltages. The noise characteristics are found to be symmetric with respect to the magnetic field, and similar results were found for both back gate and top gate. The four-fold reduction in the noise is not always observed precisely at the Dirac point, but it coincides exactly with the gate voltage at which the maximum negative magnetoresistance is observed (at small positive gate voltages relative to the Dirac point). Similar behavior is observed as a function of back gate voltage (see Supporting information, Fig. S2). The overall change in resistance is small compared to the change in the noise (see Supporting information, Fig. S3).
According to the present understanding, WL can be observed in graphene in the presence of strong intervalley scattering, which can arise due to atomically sharp potentials (such as edges or atomic defects). In the case of strong intervalley scattering, many aspects of quantum transport in graphene are expected to be identical to those in disordered metals [@10; @14; @Fal'ko; @18]. In particular, the conductance fluctuations should exhibit universal properties, as they depend only on the symmetries of the random ensembles that describe the disordered system, and not on their detailed configuration. The variance of the interference-induced conductance fluctuations in graphene will generally have a prefactor that depends on the interplay of inelastic and elastic scattering lengths and the shape of the sample [@Fal'ko; @18], but graphene with broken valley symmetry should belong to the orthogonal Wigner-Dyson symmetry class in the absence of a magnetic field [@10]. Application of a magnetic field will put it in the unitary symmetry class, and a two-fold reduction in the relative noise power will be expected on general symmetry grounds.
Additional two-fold reduction in the relative noise power is expected when the Zeeman energy exceeds $hD/{L_\phi}^2$ [@8]. The two-fold reduction in the noise power has been observed in metals [@19; @20], as well as the additional two-fold reduction at a larger magnetic field due to Zeeman splitting [@26; @27]. In our samples, the Zeeman splitting cannot explain the four-fold reduction, which is observed for small characteristic fields (50 mT), where the Zeeman splitting (0.006 meV) is smaller than both the thermal energy (0.02 meV) and $hD/{L_\phi}^2$ (0.08 meV).
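The energy scales quoted above can be checked directly; the short sketch below assumes a g-factor of 2 and the 250 mK base temperature mentioned later in the text.

```python
mu_B = 5.788e-5      # Bohr magneton, eV/T
k_B = 8.617e-5       # Boltzmann constant, eV/K

B = 0.05             # characteristic field, T
T = 0.25             # base temperature, K
g = 2.0              # assumed g-factor

E_zeeman = g * mu_B * B * 1e3    # Zeeman splitting in meV, ~0.006 meV
E_thermal = k_B * T * 1e3        # thermal energy in meV, ~0.02 meV

# The Zeeman splitting at the characteristic field is well below both
# the thermal energy and the quoted h D / L_phi^2 ~ 0.08 meV, so it
# cannot account for the observed four-fold noise reduction.
```
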
![[**(a)**]{} Normalized noise power is shown as a function of inverse temperature on a log-linear scale. The line represents a linear fit, and the normalized noise power shows a $\exp(1/T)$ dependence on temperature. [**(b)**]{} Relative noise power is plotted as a function of magnetic field for different temperatures. The reduction of the relative noise power by a factor of four is observed at 250mK, but the reduction is smaller at higher temperatures. [**(c)**]{} Resistance as a function of temperature is shown for $V_{Tg}, V_{Bg}=0$, in the regime highlighted in Fig. 2 (b). The slight increase of the resistance with decreasing temperature indicates insulating behavior. [**(d)**]{} Magnetoresistance as a function of magnetic field is shown for two different temperatures. It is evident that a much smaller magnetoresistance is observed at higher temperatures.[]{data-label="3"}](Fig4.jpg){width="8cm"}
The temperature dependence of the noise is also unusual in this regime. For normal metals in the phase coherent regime, the noise due to fluctuating scatterers depends on temperature as $T^{-1}$, as observed in several systems [@19; @20]. We found that the noise decreases with increasing temperature and the normalized noise power shows a $\exp(1/T)$ dependence, as shown in Fig. 4(a) (similar dependence was also found in other work [@28]). As the temperature increases, the relative noise power is still reduced in magnetic field, but by a smaller factor, as shown in Fig. 4(b). The sample resistance increases slightly with decreasing temperature, as shown in Fig. 4(c), so the temperature dependence of the noise cannot be explained by the resistance change. The slowly increasing resistance with decreasing temperature is consistent with WL, as is the fact that the negative magnetoresistance also decreases with increasing temperature, as shown in Fig. 4(d).
It is well known that charge-inhomogeneous regions (puddles) tend to form in the vicinity of the Dirac point [@Martin]. The presence of the top gate also locally dopes the graphene, forming pn junctions at the edges. A random network of puddles or pn junctions could be expected to show conductance fluctuations and magnetoresistance that reflect the fluctuations in the electrostatic environment [@Cheianov1; @Cheianov2]. However, one might expect such fluctuations to increase with temperature, leading to the increase of the noise power with increasing temperature, which is in contradiction to our observations. The temperature dependence of the resistance, magnetoresistance and the relative noise power reduction in magnetic field are all consistent with a decrease of the phase coherence length as the temperature is increased. In addition, the observation of the Aharonov-Bohm oscillations in similar samples confirms that both the electron and the hole transport is phase-coherent across pn junctions and any electron-hole puddles in the vicinity of the Dirac point [@Rahman].
The decrease of the quantum interference noise by a factor of four in magnetic field is not presently understood, but the unusual nature of quantum corrections near the Dirac point may offer insight into phenomena observed in other experiments, such as the four-fold decrease of mobility depending on the nature of impurity scattering [@29], or the anomalous backscattering [@30; @31] near the Dirac point. A better understanding of the quantum interference noise may also provide useful clues about the nature of the impurity scattering in this regime.
0.2in
The authors would like to thank A. Morpurgo, E. McCann, F. Guinea, V. Fal’ko and I. Aleiner for useful comments and suggestions. N. M. would like to thank the Aspen Center for Physics and the NSF Grant 1066293. J. W. G. was supported in part by the M. Hildred Blewett Fellowship of the American Physical Society.
[99]{} B. L. Altshuler, D. Khmelńitzkii, A. I. Larkin, and P. A. Lee, Phys. Rev. B [**22**]{}, 5142 (1980).
S. Hikami, A. I. Larkin, and Y. Nagaoka, Prog. Theor. Phys. [**63**]{}, 707 (1980).
G. Bergmann, Phys. Rep. [**107**]{}, 1 (1984).
P. A. Lee, A. D. Stone, and H. Fukuyama, Phys. Rev. B [**35**]{}, 1039 (1987).
B. L. Altshuler, JETP Lett. [**41**]{}, 648 (1985).
B. L. Altshuler, and B. Z. Spivak, JETP Lett. [**42**]{}, 447 (1985).
S. Feng, P. A. Lee, and A. D.Stone, Phys. Rev. Lett. [**56**]{}, 1960 (1986).
A. D. Stone, Phys. Rev. B [**39**]{}, 10736 (1989).
A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. [**81**]{}, 109 (2009).
S. Das Sarma, S. Adam, E. H. Hwang, and E. Rossi, Rev. Mod. Phys. [**83**]{}, 407 (2010).
H. Suzuura, and T. Ando, Phys. Rev. Lett. [**89**]{}, 266603 (2002).
D. V. Khveshchenko, Phys. Rev. Lett. [**97**]{}, 036802 (2006).
P. M. Ostrovsky, I. V. Gornyi, and A. D. Mirlin, Phys. Rev. B [**74**]{}, 235443 (2006).
E. McCann, K. Kechedzhi, V. I. Fal’ko, H. Suzuura, T. Ando, and B. L. Altshuler, Phys. Rev. Lett. [**97**]{}, 146805 (2006).
A. F. Morpurgo, and F. Guinea, Phys. Rev. Lett. [**97**]{}, 196804 (2006).
F. V. Tikhonenko, A. A. Kozikov, A. K. Savchenko, and R. V. Gorbachev, Phys. Rev. Lett. [**103**]{}, 226801 (2009).
S. V. Morozov, K. S. Novoselov, M. I. Katsnelson, F. Schedin, L. A. Ponomarenko, D. Jiang, and A. K. Geim, Phys. Rev. Lett. [**97**]{}, 016801 (2006).
K. Kechedzhi, O. Kashuba, and V. I. Fal’ko, Phys. Rev. B [**77**]{}, 193403 (2008).
M. Y. Kharitonov, and K. B. Efetov, Phys. Rev. B [**78**]{}, 033404 (2008).
D. W. Horsell, A. K. Savchenko, F. V. Tikhonenko, K. Kechedzhi, I. V. Lerner, and V. I. Fal'ko, Solid State Commun. [**149**]{}, 1041 (2009).
V. N. Kotov, B. Uchoa, V. M. Pereira, F. Guinea and A. H. Castro-Neto, Rev. Mod. Phys. [**84**]{}, 1067 (2012).
B. Huard, N. Stander, J. A. Sulpizio, and D. Goldhaber-Gordon, Phys. Rev. B [**78**]{}, 121402(R) (2008).
J. H. Scofield, Rev. Sci. Instrum. [**58**]{}, 985 (1987).
I. Heller, S. Chatoor, J. Männik, M. A. G. Zevenbergen, B. Jeroen, J. B. Oostinga, A. F. Morpurgo, C. Dekker, and S. G. Lemay, Nano Lett. [**10**]{}, 1563 (2010).
Y. Zhang, E. E. Mendez, and X. Du, ACS Nano [**5**]{}, 8124 (2011).
G. Xu, C. M. Torres, Jr., Y. Zhang, F. Liu, E. B. Song, M. Wang, Y. Zhou, C. Zeng, and K. L. Wang Nano Lett. [**10**]{}, 3312 (2010).
N. O. Birge, B. Golding, and W. H. Haemmerle, Phys. Rev. Lett. [**62**]{}, 195 (1989).
A. Trionfi, S. Lee, and D. Natelson, Phys. Rev. B [**70**]{}, 041304(R) (2004).
P. Debray, J.-L.Pichard, J. Vicente and P. N. Tung, Phys. Rev. Lett. [**63**]{}, 2264 (1989).
J. S. Moon, N. O. Birge, and B. Golding, Phys. Rev. B [**53**]{}, R4193 (1996).
V. Skákalová, A. B. Kaiser, J. S. Yoo, D. Obergfell, and S. Roth, Phys. Rev. B [**80**]{}, 153404 (2009).
J. Martin, [*et al.*]{} Nature Physics [**4**]{}, 144 (2008).
V. V. Cheianov and V. I. Fal'ko, Phys. Rev. B [**74**]{}, 041403(R) (2006).
V. V. Cheianov, V. I. Fal'ko, B. L. Altshuler and I. L. Aleiner, Phys. Rev. Lett. [**99**]{}, 176801 (2007).
A. Rahman, J. W. Guikema, S. H. Lee and N. Marković, Phys. Rev. B [**87**]{}, 081401(R) (2013).
J.-H. Chen, W. G. Cullen, C. Jang, M. S. Fuhrer, and E. D. Williams, Phys. Rev. Lett. [**102**]{}, 236805 (2009).
Y. Zhang, V. W. Brar, C. Girit, A. Zettl, and M. F. Crommie, Nat. Phys. [**5**]{}, 722 (2009).
Suyong Jung [*et al.*]{}, Nat. Phys. [**7**]{}, 245 (2011).
---
author:
- |
Jing Zhang$^{\dag}$, Jie Tang$^{\dag\sharp}$, Cong Ma$^{\dag}$, Hanghang Tong$^{\ddag}$, Yu Jing$^{\dag}$, and Juanzi Li$^{\dag}$\
\
$^{\sharp}$Tsinghua National Laboratory for Information Science and Technology (TNList)\
\
bibliography:
- 'references.bib'
title: 'Panther: Fast Top-k Similarity Search in Large Networks'
---
---
abstract: 'Alnico is a prime example of a finely tuned nanostructure whose magnetic properties are intimately connected to magnetic annealing (MA) during spinodal transformation and subsequent lower temperature annealing (draw) cycles. Using a combination of transmission electron microscopy and atom probe tomography, we show how these critical processing steps affect the local composition and nanostructure evolution with impact on magnetic properties. The nearly 2-fold increase of intrinsic coercivity (${H_\text{ci}}$) during the draw cycle is not adequately explained by chemical refinement of the spinodal phases. Instead, increased Fe-Co phase (${\alpha_1}$) isolation, development of Cu-rich spheres/rods/blades and additional ${\alpha_1}$ rod precipitation that occurs during the MA and draw, likely play a key role in ${H_\text{ci}}$ enhancement. Chemical ordering of the Al-Ni-phase (${\alpha_2}$) and formation of Ni-rich ($\alpha_3$) may also contribute. Unraveling of the subtle effect of these nano-scaled features is crucial to understanding on how to improve shape anisotropy in alnico magnets.'
address:
- 'Ames Laboratory, U.S. Department of Energy, Ames, Iowa 50011, USA'
- 'Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA'
author:
- Lin Zhou
- Wei Guo
- 'Jonathan D. Poplawsky'
- Liqin Ke
- Wei Tang
- 'Iver E. Anderson'
- 'Matthew J. Kramer'
bibliography:
- '../../../../refs/bib/references\_smpl.bib'
title: 'On spinodal decomposition in alnico—a transmission electron microscopy and atom probe tomography study'
---
Magnetic, Microstructure, Spinodal decomposition, Atom-probe tomography, TEM, STEM HAADF
Introduction
============
Using an external magnetic field during heat-treatment, magnetic annealing (MA), to control materials’ microstructure has been widely studied since the 1950s [@mccurrie1982coll-c3sp]. MA promotes formation of texture, biases spinodally decomposed phase morphologies, promotes martensitic transformation in ferrous alloys, changes phase transformation temperatures, etc. [@watanabe2006sm; @watanabe2006jms]. Cahn showed that MA is most effective in alnico magnets at a temperature near the onset of spinodal decomposition (SD) and below the Curie temperature [@cahn1963jap]. Alnico has recently re-attracted considerable interest as a near-term, non-rare-earth permanent magnetic alloy for wind power generators and electric vehicle motors [@kramer2012jjmm; @zhou2014am; @mccallum2014armr]. Its lower cost and stable performance over a wide temperature range make it still irreplaceable after eighty years of development.
Unlike rare-earth based permanent magnets, coercivity in alnico is provided by shape anisotropy, instead of intrinsic magneto-crystalline anisotropy. As a result, magnetic properties of alnico strongly depend on the details of its unique microstructure: a periodically arrayed elongated Fe-Co rich (${\alpha_1}$) hard magnetic phase embedded in a continuous non-magnetic Ni-Al-rich (${\alpha_2}$) matrix, formed via SD. Achieving the optimum properties in alnico requires a well controlled and lengthy heat-treatment process, including solutionization of the alloy above , isothermal MA near its Curie temperature and subsequent lower temperature annealing (draw cycles) [@mccurrie1982coll-c3sp; @stanek2010amm; @sergeyev1970mito; @takeuchi1976tjim; @iwama1974tjim; @iwama1970tjim]. The application of MA marks the most important cornerstone in alnico magnets’ development history. MA biases the ${\alpha_1}$ phase’s morphology during SD and makes it grow along the $\langle100\rangle$ crystallographic direction closest to the external field [@zhou2014mmte]. The resulting anisotropic spinodal nano-structure has significantly improved coercivity (${H_\text{ci}}$) and remanence (${B_\text{r}}$) of alnico. The biased growth is optimal only when it is performed within a narrow temperature range for a limited time. For example, the ideal morphology (in transverse cross-section) that brings optimum ${H_\text{ci}}$ in the higher grade alnico 8 and 9 series is a mosaic structure consisting of periodically arrayed $\sim$ diameter ${\alpha_1}$ phases embedded in a continuous ${\alpha_2}$ matrix, obtained by MA at $\sim$ [@zhou2017am].
On the other hand, the microstructure and chemistry changes during the lower temperature draw process are much more subtle, although draw cycles play an equally important role in increasing ${H_\text{ci}}$ in alnico. For example, our recent study showed that for alnico 8H, the coercivity was nearly doubled after drawing, compared to the MA alone [@zhou2017am]. More interestingly, the absolute ${H_\text{ci}}$ increase produced by the draw was almost independent of the preceding MA temperature. However, the mechanism behind ${H_\text{ci}}$ enhancement during the draw remains surprisingly elusive and is not well understood, despite the fact that drawing has been practiced for decades. Most previous understanding of draw enhancement of ${H_\text{ci}}$ was based on a simplistic assumption: lower temperature annealing results in a larger composition separation and a larger magnetization difference between the ${\alpha_1}$ and ${\alpha_2}$ phases, which increases ${H_\text{ci}}$ [@mccurrie1982coll-c3sp]. This assumption appears oversimplified, especially considering the recent findings which reveal an insignificant compositional variation before and after the draw [@zhou2017am]. This lack of understanding is mainly due to our inability to access and evaluate the very subtle nature of the chemical and microstructural evolution during the draw. Revealing these subtle effects appears crucial to improving shape anisotropy in permanent magnets or reducing coercivity in soft magnetic alloys.
This study was designed to elucidate the structural and chemical evolution in alnico at different stages during heat treatment, especially during the draw process, as well as their relationship with magnetic properties. An isotropic (alnico 8H) 32.4Fe-38.1Co-12.9Ni-7.3Al-6.4Ti-3.0Cu (wt) alloy was chosen for this investigation. A combination of electron backscatter diffraction (EBSD), atom probe tomography (APT), and transmission electron microscopy (TEM) techniques was used to characterize the morphology and chemistry of the SD phases more precisely. Based on our comprehensive characterization results, we discuss ${H_\text{ci}}$ enhancement mechanisms beyond the conventional explanation.
Experimental details
====================
Sample preparation and property measurement
-------------------------------------------
A batch of pre-alloyed powder was made by gas-atomization at Ames Laboratory. Details on magnet alloy consolidation to full density by hot isostatic pressing have been reported elsewhere [@tang2015itom]. The resultant alloy was polycrystalline with randomly oriented grains. Center sections of the alloy were cut into cylinders. The cylindrical samples were solutionized at for in vacuum and quenched in an oil bath (sample 1). Some samples were then annealed at with an external applied field of for , with the corresponding samples labeled 2 through 5, respectively. This MA temperature was determined from our previous study, which gives optimum alloy magnetic properties [@tang2015itom]. Some samples also underwent an additional low temperature drawing process at for (labeled 6 through 8), respectively. After MA or low temperature draw, the samples were water quenched to room temperature. Details of the heat-treatment conditions of samples 1–8 are listed in [Table \[tbl:ht-condition\]]{}. Their magnetic properties were measured using a Laboratorio Elettrofisico Engineering Walker LDJ Scientific AMH-5 Hysteresis graph in a closed-loop setup.
  Samples     1     2     3     4    5    6    7    8
  ---------- ----- ----- ----- ---- ---- ---- ---- ----
  MA ()       0     0.5   1.5   5    10   10   10   10
  Draw ()     0     0     0     0    0    1    3    5

  : Heat-treatment conditions of samples 1–8.[]{data-label="tbl:ht-condition"}
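For scripting analyses across the eight samples, the schedule above can be captured as a small lookup. This is an illustrative encoding, not part of the original study; the MA durations are read as minutes (sample 2's value of 0.5 matching the 30 s MA mentioned later in the text) and the draw durations as hours, both units inferred rather than stated.

```python
# Heat-treatment schedule per sample as (ma_minutes, draw_hours).
# Units are inferred, not stated in the table: 0.5 for sample 2 matches
# the "30 s" MA mentioned in the text, and draw times read as hours.
SCHEDULE = {
    1: (0.0, 0.0),   # solutionized and oil-quenched only
    2: (0.5, 0.0),
    3: (1.5, 0.0),
    4: (5.0, 0.0),
    5: (10.0, 0.0),
    6: (10.0, 1.0),
    7: (10.0, 3.0),
    8: (10.0, 5.0),
}

def was_drawn(sample: int) -> bool:
    """True if the sample received the low-temperature draw step."""
    return SCHEDULE[sample][1] > 0

drawn = [s for s in SCHEDULE if was_drawn(s)]
print(drawn)  # -> [6, 7, 8]
```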
Characterization
----------------
![EBSD inverse pole figure of sample 5. Insets show a TEM (top) and APT (bottom) sample lifted from the same grain.[]{data-label="fig:01"}](fig01.pdf){width="41.00000%"}
APT and TEM are sensitive methods for investigating details of the structure and composition of materials down to atomic-scale resolution [@miller2007sia; @williams1996book]. In this study, APT analysis of samples 1 to 8 was performed to better understand the SD phase morphology in three dimensions and to determine the ${\alpha_1}$ and ${\alpha_2}$ composition and morphology more accurately at different heat-treatment stages. TEM was used to reveal the crystallographic relationship between the different phases. In each sample, grains with their $\langle001\rangle$ crystal orientation parallel to the external magnetic field direction were first identified on the polished longitudinal sections (electron beam perpendicular to the external field direction) using EBSD, on an Amray 1845 field emission SEM. This excludes the effect of an orientation difference between the crystallographic $\langle001\rangle$ axis and the external field on the SD microstructure [@zhou2014mmte]. Both APT and TEM samples were then lifted out from the same selected grain for more detailed structural analysis, as shown in [Fig. \[fig:01\]]{}. An FEI Nova 200 dual-beam focused ion beam (FIB) instrument was used to perform lift-outs and annular milling of targeted grains to fabricate needle-shaped APT specimens. A wedge lift-out geometry was used to mount multiple samples on a Si microtip array to enable the fabrication of several needles from one wedge lift-out [@thompson2007u]. APT was performed with a local electrode atom probe (LEAP) 4000X HR manufactured by CAMECA Instruments. Samples were run in voltage mode with a base temperature of and pulse fraction at a repetition rate of . The datasets were reconstructed and analyzed using the IVAS 3.6.12 software (CAMECA Instruments). An FEI Helios dual-beam FIB was used to perform lift-out of TEM samples with the sample surface normal parallel to the external field direction (transverse direction).
An FEI probe aberration corrected Titan Themis TEM with a Super-X energy dispersive X-ray spectrometer (EDS) was used for structural characterization.
Experimental results
====================
Magnetic properties
-------------------
![(a) Intrinsic coercivity (${H_\text{ci}}$), (b) remanence (${B_\text{r}}$), and saturation magnetization (${M_\text{s}}$) of samples 1 through 8.[]{data-label="fig:02"}](fig02.pdf){width="40.00000%"}
![Sample 1 (a) HAADF STEM image showing internal SD. (b) Three-dimensional atom probe data of the as-solutionized sample (Sample 1) with isoconcentration surfaces clearly depicting the actual nanostructure of the alloy. Isoconcentration values are Ni (green), Cu (orange) and Fe (pink), which border the Ni, Cu, and Fe rich regions, respectively. (c) HRSTEM image shows a coherent lattice between ${\alpha_1}$, ${\alpha_2}$ and Cu-enriched clusters. Yellow arrows indicate locations of Cu-enriched clusters. (d) and (e) are corresponding EDS Cu mapping and FFT of (c), respectively. (f) spatial distribution of Cu clusters, which are displayed by Cu isosurfaces. Fe and Ni atoms are also displayed.[]{data-label="fig:03"}](fig03.pdf){width="49.00000%"}
Magnetic properties of samples 1-8 are summarized in [Fig. \[fig:02\]]{}. Compared with the as-solutionized sample (sample 1), short time MA (30 s, sample 2) already results in a dramatic improvement of ${H_\text{ci}}$ from to , and ${B_\text{r}}$ from to . Increasing the MA time to further increases ${H_\text{ci}}$ ($\sim$), but ${B_\text{r}}$ plateaus between and of MA. The saturation magnetization (${M_\text{s}}$) of sample 1 and the MA samples (samples 2 to 5) is similar, indicating a similar volume fraction of the ${\alpha_1}$ phase in those samples [@zhou2014am]. Drawing can triple ${H_\text{ci}}$ to (sample 8), with the most obvious improvement occurring after the initial drawing (sample 6); however, both ${M_\text{s}}$ and ${B_\text{r}}$ drop slightly after the draw.
Microstructure after solutionization
------------------------------------
![Nanostructures of sample 2 (a,b), sample 3 (c,d) and sample 5 (e,f) revealed by HAADF STEM images and APT. Examples of small ${\alpha_1}$ phases between two adjacent large ${\alpha_1}$ rods, and regions with ${\alpha_1}$ clusters are indicated by blue and red arrows, respectively. White arrows indicate regions with a structure similar to sample 1. For clarity, the APT reconstruction only shows the Ni (green) and Fe (pink) atoms. Isoconcentration surfaces shown here are Ni (green), Cu (orange) and Fe (pink). []{data-label="fig:04"}](fig04.pdf){width="\wlfig"}
Chemical segregation has already begun after of solutionization, although the sample has almost no coercivity. High-angle annular-dark-field (HAADF) scanning transmission electron microscopy (STEM) imaging was used to minimize strain contrast and differentiate phase morphology more clearly than can be achieved with traditional diffraction-contrast TEM. The ${\alpha_1}$ phase shows brighter contrast in the HAADF STEM image due to the higher average atomic number of its elements. A mixture of $\sim$ ${\alpha_1}$ disks, and $\sim$ long rods with a diameter of $\sim$ was observed in sample 1 ([Fig. \[fig:03\]]{}a). The disks and rods are sometimes connected to each other. The ${\alpha_1}$/${\alpha_2}$ interface is slightly blurry, which may be due to incomplete phase separation or sample thickness with respect to the 3D microstructure. Isoconcentration surfaces within the APT data of sample 1 reveal an interpenetrating nature of the ${\alpha_1}$ and ${\alpha_2}$ phases, as shown in [Fig. \[fig:03\]]{}b. The ${\alpha_1}$ and ${\alpha_2}$ phases are continuous with meandering boundaries within the entire analyzed volume. Both the ${\alpha_1}$ and ${\alpha_2}$ phases have $\sim$ diameters. The appearance of disks and rods within the STEM images is a projected view of the ${\alpha_1}$ phase along different crystallographic directions.
A high density of Cu-enriched clusters was detected inside ${\alpha_2}$, as shown in [Fig. \[fig:03\]]{} b and f. The Cu-enriched clusters have an average diameter of $\sim$ and occupy $\sim$ of the alloy’s volume. The Cu concentration at the cluster center was measured to be $\sim$ for sample 1 by APT. The high-resolution HAADF STEM image (HRSTEM, [Fig. \[fig:03\]]{}c) and the corresponding fast-Fourier-transform (FFT) ([Fig. \[fig:03\]]{}e) show coherent ${\alpha_1}$/${\alpha_2}$ and ${\alpha_2}$/Cu interfaces. This result implies that the Cu-clusters and ${\alpha_2}$ have the same lattice structure. Moreover, due to the small size of the Cu-clusters, their positions (as indicated by arrows in [Fig. \[fig:03\]]{}c) in the HAADF STEM image can only be identified by overlaying the matching EDS elemental mapping ([Fig. \[fig:03\]]{}d).
Microstructure after magnetic field annealing
---------------------------------------------
![Sample 5 (a) APT reconstructed volumes show the distribution of Cu clusters shown by isoconcentration surfaces, (b) HRSTEM image and corresponding (c), (e) and (f) EDS Cu, Ni and Fe map, respectively (scale bar is ). (d) FFT of (b). Orange arrows indicate location of Cu-enriched clusters. []{data-label="fig:05"}](fig05.pdf){width="\wlfig"}
![Nanostructures of sample 6 (a,b) and sample 8 (c,d) revealed by HAADF STEM images and APT. Examples of small ${\alpha_1}$ rods, ${\alpha_1}$ clusters and Cu-enriched rods are indicated by blue, red and yellow arrows respectively. For clarity, the APT reconstruction only shows the Ni (green) and Fe (pink) atoms. Isoconcentration surfaces shown here are Ni (green), Cu (orange) and Fe (pink). []{data-label="fig:06"}](fig06.pdf){width="\wlfig"}
A faceted rod-shaped ${\alpha_1}$ phase ($\sim$) develops during the MA process. As shown in [Fig. \[fig:04\]]{}a, a well-defined mosaic structure composed of a $\{110\}$-faceted ${\alpha_1}$ phase with a $\sim$ diameter was quickly developed in a large volume fraction of sample 2 after of MA. Some areas of sample 2 show blurred imaging contrast, as indicated by white arrows, which implies regions with morphological differences. Increasing the MA time to (sample 3, [Fig. \[fig:04\]]{}c) caused a slight increase in the ${\alpha_1}$ diameter to $\sim$. $\{100\}$ facets started to appear on some ${\alpha_1}$ phases. Small ${\alpha_1}$ particles ($\sim$) located between $\{100\}$ facets of two adjacent large ${\alpha_1}$ rods were also observed, as indicated by blue arrows in [Fig. \[fig:04\]]{}c. Moreover, clusters of ${\alpha_1}$ particles ($\sim$), as pointed out by red arrows, were formed, possibly from the white-arrow regions indicated in [Fig. \[fig:04\]]{}a. A further MA time increase to (sample 5, [Fig. \[fig:04\]]{}e) modified the ${\alpha_1}$ phase diameter size distribution into a bimodal distribution with large ($\sim$) and small ($\sim$) ${\alpha_1}$ phases. Isoconcentration surfaces within the APT data clearly show ${\alpha_1}$ phase elongation in all MA-treated samples, as demonstrated in [Fig. \[fig:04\]]{}b, d, and f. Regions with a morphology similar to sample 1 are also visible, as indicated by the black arrow in [Fig. \[fig:04\]]{}b, which may correspond to the regions indicated by the white arrows in [Fig. \[fig:04\]]{}a. These regions are most likely areas from the solutionization process that have not yet been modified by the short-time MA. Transformation of large isolated ${\alpha_1}$ blocks from a parallelogram shape into an octagonal shape with a cross-sectional diameter of $\sim$ after MA is also obvious. For all samples, the ${\alpha_2}$ phase is continuous.
Isoconcentration surfaces within the APT data reveal that the Cu-enriched clusters tend to follow the edge between two adjacent $\{110\}$ facets in sample 2 ([Fig. \[fig:04\]]{}b). With increasing MA time, the region between two $\{100\}$ facets of ${\alpha_1}$ shows a much higher Cu-enriched cluster density, as shown in [Fig. \[fig:05\]]{}a. This is also the area where most of the small ${\alpha_1}$ phases are located. [Figure \[fig:05\]]{}b shows a HRSTEM image of sample 5. Small ${\alpha_1}$ phases with a $\sim$ size are clearly visible between two $\{100\}$ facets of large ${\alpha_1}$ rods. Locations of the Cu-clusters (indicated by orange arrows) were identified by comparing the matching EDS Cu elemental mapping ([Fig. \[fig:05\]]{}c). FFT analysis ([Fig. \[fig:05\]]{}d) shows that the Cu/${\alpha_2}$ interface is coherent, which implies that the Cu-clusters still have the same lattice structure as ${\alpha_2}$ after of MA. APT data show no obvious change in Cu-cluster diameter ($\sim$), volume fraction () or composition () from samples 2 through 5.
Microstructure after low temperature drawing
---------------------------------------------
![Sample 7 (a) Cu atom map in the reconstructed APT volume, (b) HAADF STEM image and corresponding (c) EDS Cu mapping (scale bar is ). (d) HRSTEM image shows lattice distortion in the Cu-enriched phase. (e) FFT of (d) with red arrows indicating splitting of (110) diffraction spots.[]{data-label="fig:07"}](fig07.pdf){width="\wlfig"}
![The elemental composition of the ${\alpha_1}$ (a) and ${\alpha_2}$ (b) phases measured by atom probe tomography for samples 1 through 8.[]{data-label="fig:08"}](fig08.pdf){width="\wlfig"}
A slight increase of the large ${\alpha_1}$ rod diameters to $\sim$ after the low temperature draw was observed by STEM imaging and APT, as shown in [Fig. \[fig:06\]]{}. Small ${\alpha_1}$ phases in between two large ${\alpha_1}$ phases agglomerate and form smaller ${\alpha_1}$ rods with a $\sim$ diameter, as indicated by blue arrows in [Fig. \[fig:06\]]{}a and c. Regions with ${\alpha_1}$ clusters, as indicated by red arrows in [Fig. \[fig:06\]]{}a, tend to disappear after longer drawing times ([Fig. \[fig:06\]]{}c). Moreover, additional ${\alpha_1}$ phase precipitates from the ${\alpha_2}$ phase after longer drawing times, as shown by the pink isosurfaces within the ${\alpha_2}$ phases in [Fig. \[fig:06\]]{}b and d. These ${\alpha_1}$ clusters are slightly larger in sample 8 than in sample 6.
A distinctive morphological change was observed for the Cu-enriched phase after drawing. A transformation from clusters to rod shapes occurs, as shown by the APT Cu elemental map in [Fig. \[fig:07\]]{}a. From samples 6 to 8, the Cu-rods show an average diameter of $\sim$, which is 2-3 times larger than those found in sample 5. Moreover, the central Cu composition tends to increase substantially from in sample 5 to in sample 6, and finally to in sample 8. Although APT aberrations could distort the particle concentrations, with a greater distortion for smaller particles, structural differences revealed by STEM indicate a higher Cu concentration (brighter contrast) for the larger particles in sample 8, consistent with the APT results. [Figure \[fig:07\]]{}b shows a HAADF STEM image of sample 8, in which the bright-contrast Cu-enriched phase is clearly visible, as confirmed by the matching EDS Cu elemental mapping in [Fig. \[fig:07\]]{}c. The HRSTEM image shows lattice distortion in the bright Cu-enriched phase region ([Fig. \[fig:07\]]{}d), also manifested as a streaking/satellite peak in the (110) spots of the image FFT ([Fig. \[fig:07\]]{}e); this is further evidence that the Cu content is higher in the drawn samples.
Chemistry evolution in ${\alpha_1}$ and ${\alpha_2}$ phases
-----------------------------------------------------------
The ${\alpha_1}$ and ${\alpha_2}$ phase compositions in samples 1 through 8 are summarized in [Fig. \[fig:08\]]{}. Results were extracted from cropped volumes of the APT data that were completely contained within the phases and away from the interface, so that the chemical variation caused by the interfacial profile could be excluded. The ${\alpha_1}$ phase chemistry was relatively stable during the whole heat-treatment process, except for a slight increase in the Fe and Co content after drawing. More obvious chemistry changes were detected in the ${\alpha_2}$ phase. After the first of MA, there was an increase of the Co and Ti content and a decrease of the Al content, while the chemical composition tended to be stable during the subsequent MA. Drawing at gradually increased the Al and Ni concentration, while that of Co and Fe decreased. The Ti concentration plateaued during the MA and drawing steps. This indicates that diffusion of all elements is fast enough at to approach the thermodynamically stable concentration within ; however, there is an obvious decrease in diffusion speed at , and it therefore takes a much longer time to reach the equilibrium concentration.
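The qualitative claim about diffusion slowing at the draw temperature can be made concrete with the standard Arrhenius form $D(T)=D_0\exp(-Q/RT)$. The numbers below are assumptions for illustration only: a representative activation energy for substitutional diffusion in Fe-based alloys, and nominal MA and draw temperatures of 840 and 650 degrees Celsius; none of these values are taken from this study.

```python
import math

# Arrhenius diffusivity D(T) = D0 * exp(-Q / (R * T)).
# Q and the two temperatures are assumed, representative values,
# not numbers reported in this study.
R = 8.314              # gas constant, J/(mol K)
Q = 250e3              # J/mol, typical for substitutional diffusion in Fe alloys
T_MA = 840 + 273.15    # assumed MA temperature, K
T_draw = 650 + 273.15  # assumed draw temperature, K

def d_rel(T, T_ref):
    """Diffusivity at T relative to diffusivity at T_ref (D0 cancels)."""
    return math.exp(-Q / R * (1.0 / T - 1.0 / T_ref))

# How much slower diffusion is during the draw than during MA.
ratio = 1.0 / d_rel(T_draw, T_MA)
print(f"D(MA)/D(draw) ~ {ratio:.0f}x")
```

With these assumed numbers, diffusion at the draw temperature is slower by roughly two orders of magnitude, consistent with short MA times sufficing while much longer draw times are needed to approach equilibrium.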
Discussion
==========
The APT, combined with detailed TEM and magnetization measurements, provides a clear picture of the relationships between the phase evolution and magnetic properties. Observation of phase separation in sample 1 indicates that oil quenching cannot provide a fast enough quench to bypass initiation of the SD. After only of MA, the optimal coercivity imparted by this step is observed, even though APT and TEM show further morphological changes up to of MA. The small changes in chemistry from samples 2 to 5 imply that the chemical diffusion is rapid and that the observed gradual morphology change is more likely driven by minimization of interfacial and magnetic energy. After MA, particles are not only elongated, but also become less interconnected, which helps increase ${H_\text{ci}}$.
The draw has a remarkable effect on both ${H_\text{ci}}$ and refinement of the chemistry between the ${\alpha_1}$ and ${\alpha_2}$ phases. The conventional explanation is that drawing further increases the chemical separation of the two phases [@mccurrie1980itom]. However, our results suggest that the draw effect on ${H_\text{ci}}$ enhancement likely involves several mechanisms, including chemistry and ordering, as well as subtle structural features, such as evolution of the Cu-enriched phase. For the ${\alpha_1}$ phase, the Fe and Co concentration slightly increases after the first draw step (comparing samples 5 and 6), then is nearly constant. On the other hand, in the ${\alpha_2}$ phase, the Fe and Co contents continue to decrease with increasing draw time, which is likely due to the coarsening/growth of ${\alpha_1}$ precipitates in ${\alpha_2}$. Overall, the chemical variation is small and may not change the magnetization of the ${\alpha_2}$ phase significantly. On the other hand, site ordering in the ${\alpha_2}$ phase may play an important role. Our previous study showed that the formation energy is lower and ${T_\text{C}}$ decreases with increasing ${\alpha_2}$ site ordering (from BCC to DO$_3$ and L2$_1$) [@zhou2014am]. Thus, draw annealing may promote site ordering in the ${\alpha_2}$ region, possibly due to a decrease in the Fe and Co content in the ${\alpha_2}$ phase. Considering that ${T_\text{C}}$ of ${\alpha_2}$ is near room temperature, both the chemistry changes and the site ordering decrease the magnetization of the ${\alpha_2}$ phase at room temperature, which increases ${H_\text{ci}}$.
The most dramatic change is in the growth of the Cu-enriched regions. Transformation of small Cu-enriched clusters into larger and longer Cu-enriched rods may be driven by minimization of interfacial energy as the cluster centers become increasingly Cu-rich. Similar elongated precipitates along the elastically soft $[100]$ direction have been reported in the Cu-2 at.% Co system [@heinrich2007sia]. These elongated large Cu-rods may provide better pinning against magnetic domain wall movement. The Cu-enriched phase not only becomes bigger and longer but also less magnetic, which can further separate ${\alpha_1}$ rods. The Cu lattice shearing from the *bcc* structure may occur because the *fcc* structure of Cu is thermodynamically more stable. Moreover, since some branching types may be very detrimental to ${H_\text{ci}}$, a larger Cu cluster can isolate two originally connected ${\alpha_1}$ rods and increase ${H_\text{ci}}$ [@ke2017apl]. Finally, formation of small ${\alpha_1}$ rods or even chains of spheres, along with the previously reported formation of a Ni-rich ($\alpha_3$) separation phase at the ${\alpha_1}$/${\alpha_2}$ interface, can also help to increase ${H_\text{ci}}$ [@nguyen2017pra].
Conclusions
===========
Coercivity enhancement is a complex interplay between the intrinsic properties of the alnico alloy and its nanostructure. With MA, the kinetics of the SD are rapid and the near-optimum geometric spacing is quickly reached due to the higher annealing temperature. MA sets the template for the spinodal and locks in remanence, while the draw process is responsible for the finer microstructural and chemical tuning, which controls the coercivity. The profound effect of the draw on improving ${H_\text{ci}}$ is likely due to a combination of several mechanisms, including chemical changes, site ordering, and subtle microstructural variations. The draw process does not introduce dramatic microstructural changes in the ${\alpha_1}$ and ${\alpha_2}$ phases, but it does affect the size, shape, and distribution of the intervening Cu-rich phase forming in between these phases. This new understanding provides possible directions for further property enhancement of alnico.
Acknowledgment {#acknowledgment .unnumbered}
==============
Research was supported by the U.S. DOE, Office of Energy Efficiency and Renewable Energy (EERE), under its Vehicle Technologies Office, Electric Drive Technology Program, through the Ames Laboratory, Iowa State University, under contract DE-AC02-07CH11358. APT was conducted at ORNL’s Center for Nanophase Materials Sciences (CNMS), which is a DOE Office of Science User Facility.
References {#references .unnumbered}
==========
---
abstract: 'One of Aesop’s (La Fontaine’s) famous fables, ‘The Ant and the Grasshopper’, is widely known to give a moral lesson through comparison between the hard-working ant and the party-loving grasshopper. Here we show a slightly different version of this fable, namely, “The Ant and the Metrohopper," which describes human mobility patterns in modern urban life. Numerous real transportation networks and trajectory data sets have been studied in order to understand mobility patterns. We study trajectories of commuters on the public transportation of Metropolitan Seoul, Korea. Smart cards (Integrated Circuit Cards; ICCs) are used in the public transportation system, which allows collection of transit transaction data, including departure and arrival stations and times. This empirical analysis reveals human mobility patterns, which impact traffic forecasting and transportation optimization, as well as urban planning.'
author:
- Keumsook
- Jong Soo
- Hannah
- 'M. Y.'
- 'Woo-Sung'
title: 'Sleepless in Seoul: ‘The Ant and the Metrohopper’'
---
Human trajectories are usually analyzed through the use of real-world databases [@1; @2; @3; @kslee; @4; @5; @6; @7; @8; @9; @10; @11] such as transportation networks, travel diaries and bank note dispersion, as well as mobile phone movement, for use in traffic forecasting and transportation optimization [@12], as well as urban planning [@13]. The Metropolitan Seoul Subway network operates a smart card system which keeps track of travel information for every passenger. Here, we analyze the passenger flows on a single day, based on the transaction data of the smart card on 24 June 2005. The transit transaction database, which holds data on over 10,000,000 transactions per day, contains time/position information on each passenger’s travel. Therefore, it is possible to track the movements of individuals, because each smart card has its own ID. The transaction data cover 2,746,517 passengers who took the subway on the analyzed day. We find that modern urban life can be characterized well by the commute pattern on the public transportation system.
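As a sketch of the bookkeeping involved, grouping transactions by card ID and time-ordering them recovers each passenger's daily trajectory. The record layout (card ID, time, station, event) and all names below are hypothetical, chosen for illustration only; the actual smart-card schema is not described here.

```python
from collections import defaultdict

# Hypothetical transaction records: (card_id, time, station, event).
# The field layout is an assumption for illustration, not the real schema.
transactions = [
    ("card42", "07:50", "Suburb-A", "departure"),
    ("card42", "08:35", "Office-B", "arrival"),
    ("card42", "18:10", "Office-B", "departure"),
    ("card42", "18:40", "Gangnam", "arrival"),
    ("card07", "09:00", "Shinchon", "departure"),
    ("card07", "09:20", "City-Hall", "arrival"),
]

# Each card has its own ID, so sorting one card's records by time
# reconstructs that passenger's trajectory for the day.
trajectories = defaultdict(list)
for card_id, t, station, event in transactions:
    trajectories[card_id].append((t, station, event))
for traj in trajectories.values():
    traj.sort()  # zero-padded HH:MM strings sort chronologically

print(len(trajectories["card42"]))  # -> 4
```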
The Metropolitan Seoul Subway system, consisting of nearly 400 stations, carries approximately three million passengers a day. Among them, we analyze those trajectories that have more than two transaction data in one smart card in order to investigate the commute pattern. In this study, ‘morning’ corresponds to ‘prior to 11 AM,’ while ‘daytime’ represents ‘between 11 AM and 5 PM’ and ‘evening’ is ‘after 5 PM’. Figure \[commutetime\] represents the commute time distribution in the morning and evening, while Figure \[fig1\] exhibits the spatial distribution of the departures in the morning, obtained using geographic information systems. A large portion of people live in the suburbs, so they must return home after work. Surprisingly, many of them stop by somewhere else on their way home, so the spatial distribution of arrivals in the evening (Fig. \[fig2\]) shows features different from those of the departure distribution in the morning. In the evening, a large number of people visit such popular entertainment areas as Gangnam, Shinchon, Hongdae and Hyehwa, which display dense arrival distributions in Fig. \[fig2\].
We classify the commute patterns into three categories, according to the pattern of movement among three places: home, office, and other locales. Category 1 portrays ‘the Ant,’ who has the endless cycle of getting up, going to work and returning home, while Category 2 represents ‘the Metrohopper,’ an urban version of ‘the Grasshopper,’ who stops by other locales after work and does not take the subway to return home. The purpose of visiting the other locales by a Metrohopper can be either entertainment (as reflected by the areas in Fig. \[fig2\]) or a secondary job. It is important to note that the Metropolitan Seoul Subway system operates daily from 5 AM to 1 AM. It is likely that most of the Metrohoppers return home very late, by taxi or on foot, after the service hours of the subway. Category 3 corresponds to ‘the Hybrid,’ a cross between ‘the Ant’ and ‘the Metrohopper,’ who stops in other locales after work but eventually returns home by subway. While 45% of the passengers behave as Ants, the Metrohoppers and the Hybrids, 55% of the total, are not confined only to work and home and characterize Seoul as a “sleepless" metropolis (Table 1).
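A minimal sketch of a rule that reproduces the three categories is given below. This is a plausible reconstruction, not the authors' actual criterion: a day is modeled as a time-ordered list of subway trips (departure, arrival), and the first departure station of the day is taken as home.

```python
# Illustrative three-way classification; an assumed rule, since the exact
# criterion used in the study is not spelled out here.
def classify(trips):
    """trips: time-ordered list of (departure_station, arrival_station)."""
    home = trips[0][0]
    office = trips[0][1]
    if trips[-1][1] != home:
        # Does not return home by subway after work: the Metrohopper.
        return "Metrohopper"
    visited = {s for dep, arr in trips for s in (dep, arr)}
    if visited - {home, office}:
        # Stops elsewhere but eventually comes home by subway: the Hybrid.
        return "Hybrid"
    # Pure home-office-home cycle: the Ant.
    return "Ant"

print(classify([("Suburb-A", "Office-B"), ("Office-B", "Suburb-A")]))  # -> Ant
```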
We also investigated the departure- and arrival-time distributions. Figure \[fig3\] displays the distributions of (a) the departure times of the first trips and (b) the arrival times of the last trips for the three categories: the Ant, the Metrohopper, and the Hybrid. All three categories show similar departure-time distributions, which indicates that they leave home at similar times in the morning. However, the arrival-time distributions for the Metrohoppers and for the Hybrids show two peaks, while that for the Ants has only one peak with a shoulder. The second peak in the former corresponds to the after-work activities. Interestingly, the second peak for the Hybrids, who return home by subway, is more conspicuous than that for the Metrohoppers, who take transportation other than the subway.
Unlike the grasshopper in rural life in Aesop’s fable, the metrohopper may not be a loser in urban life. Important business decisions and social gatherings, as well as self-development, take place at after-work sessions. In particular, after-work sessions in Korea often continue until midnight, as a sequel to the day’s work. We thus conclude that the celebrated fable ‘The Ant and the Grasshopper’ has a different outlook in modern urban life. Which of the two, the ant or the metrohopper, will be more effective in urban life? This is left as an open question.
![(Color online) Commute time distribution in the morning and evening. []{data-label="commutetime"}](commutetime.png){width="100.00000%"}
![(Color online) Spatial distribution of the departures in the morning. Black dots correspond to subway stations, and blue lines are ward boundaries in Seoul. []{data-label="fig1"}](fig1.png){width="100.00000%"}
![(Color online) Spatial distribution of the arrivals in the evening. []{data-label="fig2"}](fig2.png){width="100.00000%"}
![(Color online) (a) Departure time distribution of the first trip and (b) arrival time distribution of the last trip of the day. []{data-label="fig3"}](fig3.png){width="100.00000%"}
Category Passengers Percent of total
----------------- ------------ ------------------
1 (Ant) 771,935 45.03%
2 (Metrohopper) 653,046 38.09%
3 (Hybrid) 289,349 16.88%
Total 1,714,330 100.0%
: Numbers of passengers in each category.[]{data-label="table1"}
|
---
author:
- 'M. Junge[^1], N.J. Nielsen[^2], and T. Oikhberg[^3]'
title: Rosenthal operator spaces
---
[^1]: Supported by NSF grant DMS–0301116 and DMS 05-56120
[^2]: Supported by the Danish Natural Science Research Council, grant 21020436.
[^3]: Supported by NSF grant DMS–0500957
|
---
abstract: 'The construction of a computer code to calculate the cross sections for the spin-polarized processes $e^-\gamma\to e^-\gamma,e^-\gamma\gamma,e^-e^+e^-$ to order-$\alpha^3$ is described. The code calculates cross sections for circularly-polarized initial-state photons and arbitrarily polarized initial-state electrons. The application of the code to the SLD Compton polarimeter indicates that the order-$\alpha^3$ corrections produce a fractional shift in the SLC polarization scale of $-$0.1% which is too small and of the wrong sign to account for the discrepancy in the Z-pole asymmetries measured by the SLD Collaboration and the LEP Collaborations.'
address: |
Stanford Linear Accelerator Center\
Stanford University, Stanford, Ca. 94309\
author:
- 'Morris L. Swartz'
title: 'A Complete Order-$\alpha^3$ Calculation of the Cross Section for Polarized Compton Scattering$^\dagger$'
---
0.3in
[Submitted to [*Physical Review D*]{}]{}
1.00in
Introduction {#sec:intro}
============
For some years, polarized Compton scattering, the scattering of circularly-polarized photons by spin-polarized electrons, has been used to measure the degree of polarization of one particle or the other. Circularly-polarized gamma-ray photons from nuclear decays have been polarization-analyzed by measuring the asymmetry in the rates of backscattering from magnetized iron foils (as the foil magnetization is reversed). Similarly, the polarization of electrons in high-energy storage rings and accelerators has been determined from scattering asymmetries of the accelerated beam with beams of optical laser photons. Until quite recently, all such measurements have made use of tree-level (order-$\alpha^2$) expressions for the polarized Compton scattering cross section.
Part of the reason for this has been the unavailability of a next-to-leading-order calculation that is packaged in an easily usable form. The first calculation of the order-$\alpha^3$ virtual and real-soft-photon corrections to unpolarized Compton scattering was published by Brown and Feynman in 1952 [@ref:bf]. This calculation was not confirmed until 1972 when Tsai, DeRaad, and Milton (TDM) published the same corrections for the polarized case [@ref:tdrm]. The TDM calculation, by itself, is sufficient to interpret the results of measurements involving longitudinally polarized electrons for which the presence of additional energetic photons in the final state can be excluded. This is often the case for measurements of gamma-rays that have been scattered from magnetized iron targets. However, accelerator-based polarimeters are often designed to measure transverse electron polarization and generally cannot distinguish between single-photon and multiple-photon final states. These shortcomings were addressed in 1987 by Góngora and Stuart (GS) who published the matrix elements for the hard-photon corrections in a spinor-product form that is suitable for numerical evaluation [@ref:gs]. Their publication also includes spinor-product expressions for the matrix elements of six gauge-invariant tensors used by TDM to calculate their result. These expressions permit the application of the TDM virtual corrections to the case of general initial- and final-state electron spin directions. Finally, in 1989, the complete set of virtual, soft-photon, and hard-photon radiative corrections to polarized Compton scattering was calculated independently by Veltman [@ref:veltman]. Veltman’s paper describes her calculation qualitatively and presents a result in numerical form for two specific cases of an accelerator-based longitudinal polarimeter. Unfortunately, it does not include detailed expressions for the final result nor is the result checked against the TDM or GS calculations.
One of the specific cases discussed by Veltman, the case of a 50 GeV longitudinally-polarized electron colliding with a 2.34 eV photon, is quite close to that of the SLD Compton polarimeter (a 45.65 GeV longitudinally-polarized electron colliding with a 2.33 eV photon). This polarimeter is a key component in the measurement of the left-right $Z$-boson production asymmetry $A_{LR}^0$ which has been performed over several years by the SLD Collaboration [@ref:alr]. At the current time, the measured value of $A_{LR}^0$ is approximately 8% larger than the value of the comparable quantity extracted from measurements of six different $Z$-pole asymmetries by the four LEP Collaborations [@ref:LEP]. Since the SLD and LEP measurements differ by approximately three standard deviations, the discrepancy is more likely to be due to systematic effects than to statistical fluctuations. One possible systematic effect is the absence of radiative corrections from the interpretation of the SLD polarimeter data. Veltman’s calculation implies that radiative corrections would shift the SLD polarization measurements by $-$0.1% of themselves which is far too small and of the wrong sign to account for the 8% discrepancy.
This paper describes a complete order-$\alpha^3$ calculation of polarized Compton scattering. It was undertaken primarily to check the calculation of Veltman and to determine if radiative corrections to Compton scattering could be responsible for the discrepancy between the LEP and SLD measurements of $Z$-pole asymmetries. A second goal was to develop a computer code which could be applied to a variety of present and future experimental situations. The main ingredients of this code, the TDM and GS calculations, are sufficient for all present-day experimental situations. However, it is likely that a very high energy linear electron-positron collider will be constructed somewhere in the world in the coming decade. Polarized beams are planned for all of the designs now under discussion. All of these projects incorporate Compton-scattering polarimeters into the optical designs of their final focusing systems. If these polarimeters use optical lasers, the $e^-\gamma$ center-of-mass energies will be above threshold for the production of final-state $e^+e^-$ pairs. Since the process $e^-\gamma\to e^-e^+e^-$ occurs at order-$\alpha^3$, it must also be included in the computer code. A calculation of the matrix element for this process, based upon the techniques of Ref. [@ref:gs], is described in Section \[subsec:threee\].
The following sections of this paper describe the construction and operation of the Fortran code COMRAD, which calculates the order-$\alpha^3$ cross section for polarized Compton scattering. Section \[sec:ingred\] describes the ingredients of the calculations on which the code is based. Section \[sec:implement\] describes the actual implementation of the various calculations and several cross checks that were performed. Section \[sec:results\] describes the application of the code to several cases of interest. And finally, Section \[sec:summary\] summarizes the preceding sections.
Ingredients {#sec:ingred}
===========
This section describes the ingredients used to construct the code COMRAD. The hard photon corrections, virtual photon corrections, soft photon corrections, and $e^-e^+e^-$ cross sections are discussed in the following sections. Since the $e^-e^+e^-$ cross section calculation makes use of the techniques used to calculate the hard photon corrections, some technical details are presented in Section \[subsec:HPC\] that facilitate the description of the original work presented in Section \[subsec:threee\].
Hard-Photon Corrections {#subsec:HPC}
-----------------------
The calculation of the cross section for the process $e^-\gamma\to
e^-\gamma\gamma$ is based upon the matrix element calculation of Góngora and Stuart [@ref:gs]. Their calculation is the first application of numerical spinor product techniques [@ref:ks] to a case involving massive spinors. These techniques allow one to express any amplitude as a function of the scalar products of two massless spinors $u_\pm(p)$ and their conjugates $\bar{u}_\pm(p)$. The subscripts refer to positive and negative helicity states of a massless fermion of momentum $p$. The only two non-vanishing scalar products, $$\begin{aligned}
{s_+}(p_1,p_2) & = & \bar{u}_+(p_1)u_-(p_2) = -{s_+}(p_2,p_1) \\
{s_-}(p_1,p_2) & = & \bar{u}_-(p_1)u_+(p_2) = -{s_+}(p_1,p_2)^*,\end{aligned}$$ are easy to evaluate numerically. Góngora and Stuart define the photon polarization vector in terms of these quantities so that it is free of axial-vector components and can be used with massive currents, $$\epsilon^\mu_\pm(q,\hat{q}) =
\pm\frac{1}{\sqrt{2}s_\pm(\hat{q},q)}\bar{u}_\pm(\hat{q}) \gamma^\mu u_\pm(q),
\label{eq:epsdef}$$ where: the $\pm$ subscript refers to the helicity of initial-state photons (final-state photons have opposite helicities), $q$ is the photon momentum, and $\hat{q}$ is an arbitrary massless vector. Massive spinors of arbitrary spin direction are defined in terms of massless spinors as follows, $$\begin{aligned}
u(p,s) & = & \frac{s_+(p_1,p_2)}{m} u_+(p_1) + u_-(p_2) \\
\bar{u}(p,s) & = & -\frac{s_-(p_1,p_2)}{m} \bar{u}_+(p_1) + \bar{u}_-(p_2)
\label{eq:upsdef}\end{aligned}$$ where $m$ is the electron mass and the massless vectors, $p_1$ and $p_2$, are defined in terms of the momentum and spin vectors, $p$ and $s$, as follows, $$\begin{aligned}
p_1 & = & \frac{1}{2}\left(p+ms\right) \\
p_2 & = & \frac{1}{2}\left(p-ms\right).
\label{eq:p1p2def}\end{aligned}$$
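The spinor products and the massless decomposition above can be checked numerically. The component formula below is one standard Kleiss–Stirling-type representation and not necessarily the exact phase convention of Góngora and Stuart; the properties being verified — antisymmetry, $|s_+(p_1,p_2)|^2 = 2\,p_1\cdot p_2$, and the masslessness of $p_{1,2}=(p\pm ms)/2$ — are convention independent:

```python
import math

def dot(p, q):
    """Minkowski product with signature (+,-,-,-)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def s_plus(p, q):
    """Spinor product s_+(p,q) for massless p, q = (E, px, py, pz) with E+pz > 0.
    Phases depend on the representation, but |s_+(p,q)|^2 = 2 p.q always holds."""
    pp, qp = p[0] + p[3], q[0] + q[3]                   # light-cone components
    pt, qt = complex(p[1], p[2]), complex(q[1], q[2])   # transverse components
    return math.sqrt(pp / qp) * qt - math.sqrt(qp / pp) * pt

def s_minus(p, q):
    return -s_plus(p, q).conjugate()                    # relation quoted in the text

# two massless momenta (E = |p|)
k1, k2 = (5.0, 3.0, 0.0, 4.0), (5.0, 0.0, 4.0, 3.0)
assert abs(abs(s_plus(k1, k2))**2 - 2.0*dot(k1, k2)) < 1e-12
assert abs(s_plus(k1, k2) + s_plus(k2, k1)) < 1e-12     # antisymmetry

# massive electron: p = p_1 + p_2 with p_1 = (p+ms)/2, p_2 = (p-ms)/2 massless
m = 1.0
E = 5.0; P = math.sqrt(E**2 - m**2)
p = (E, 0.0, 0.0, P)
s = (P/m, 0.0, 0.0, E/m)                                # longitudinal spin vector
p1 = tuple((p[i] + m*s[i])/2.0 for i in range(4))
p2 = tuple((p[i] - m*s[i])/2.0 for i in range(4))
assert abs(dot(s, s) + 1.0) < 1e-12 and abs(dot(p, s)) < 1e-12
assert abs(dot(p1, p1)) < 1e-12 and abs(dot(p2, p2)) < 1e-12
```

Note that $p_2$ here points along $-z$, so $p_2^0+p_2^z=0$ and the particular component formula for `s_plus` cannot be applied to it directly; only its masslessness is checked.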
The actual calculation involves the evaluation of a single Feynman amplitude $D_{\lambda\lambda^\prime\lambda^{\prime\prime}}(q,\hat{q};
q^\prime,\hat{q}^\prime; q^{\prime\prime},\hat{q}^{\prime\prime})$ for the process $e^-(s)\to
e^-(s^\prime)\gamma(\lambda)\gamma(\lambda^\prime)
\gamma(\lambda^{\prime\prime})$ (shown in Fig. \[fg:one\]) where: $\lambda$, $\lambda^\prime$, and $\lambda^{\prime\prime}$ label the helicities of the three photons ($+$ or $-$); and $s$ and $s^\prime$ are the spin-vectors of the initial- and final-state electrons, respectively. The matrix element ${\cal
M}_{\lambda;\lambda^\prime\lambda^{\prime\prime}}(s,s^\prime)$ for the process $e^-(s)\gamma(\lambda)\to
e^-(s^\prime)\gamma(\lambda^\prime)\gamma(\lambda^{\prime\prime})$ can then be constructed from the function $D_{\lambda\lambda^\prime\lambda^{\prime\prime}}$ by reversing the momenta of single photons and by interchanging the momenta and helicities of the remaining identical photons, $$\begin{aligned}
{\cal M}_{\lambda;\lambda^\prime\lambda^{\prime\prime}}(s,s^\prime) & = &
D_{\lambda\lambda^\prime\lambda^{\prime\prime}}(-q,\hat{q};
q^\prime,\hat{q}^\prime; q^{\prime\prime},\hat{q}^{\prime\prime}) +
D_{\lambda\lambda^{\prime\prime}\lambda^\prime}(-q,\hat{q};
q^{\prime\prime},\hat{q}^{\prime\prime}; q^\prime,\hat{q}^\prime) \nonumber \\
& + &
D_{\lambda^\prime\lambda\lambda^{\prime\prime}}(q^\prime,\hat{q}^\prime;
-q,\hat{q}; q^{\prime\prime},\hat{q}^{\prime\prime}) +
D_{\lambda^{\prime\prime}\lambda\lambda^\prime}(q^{\prime\prime},
\hat{q}^{\prime\prime}; -q,\hat{q}; q^\prime,\hat{q}^\prime) \nonumber \\
& + &
D_{\lambda^{\prime\prime}\lambda^\prime\lambda}(q^{\prime\prime},
\hat{q}^{\prime\prime}; q^\prime,\hat{q}^\prime; -q,\hat{q}) +
D_{\lambda^\prime\lambda^{\prime\prime}\lambda}(q^\prime,\hat{q}^\prime;
q^{\prime\prime},\hat{q}^{\prime\prime}; -q,\hat{q}),
\label{eq:dlll}\end{aligned}$$ where $q$ is the momentum of the incident photon and $q^\prime$ and $q^{\prime\prime}$ are the momenta of the final-state photons. Note that each of the six terms in Eq. \[eq:dlll\] corresponds to an ordinary Feynman diagram.
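The structure of Eq. \[eq:dlll\] — one ordered amplitude summed over the $3!$ orderings of the photon legs, with the incident photon crossed — can be expressed compactly. A schematic sketch with a toy order-sensitive function standing in for $D$ (in the real calculation the helicity labels and auxiliary momenta travel with each leg):

```python
from itertools import permutations

def crossed_bose_sum(D, incident, finals):
    """Sum an ordered three-photon amplitude D over all 3! leg orderings,
    with the incident photon's momentum negated (crossing), as in Eq. (dlll)."""
    q, qhat = incident
    legs = [(tuple(-c for c in q), qhat)] + list(finals)
    return sum(D(*order) for order in permutations(legs))

# toy ordered "amplitude": its value depends on which leg sits in which slot
def D(a, b, c):
    return a[0][0] + 10.0*b[0][0] + 100.0*c[0][0]

q  = ((2.0, 0.0, 0.0, 2.0), "qhat")
f1 = ((1.0, 1.0, 0.0, 0.0), "qhat1")
f2 = ((3.0, 0.0, 3.0, 0.0), "qhat2")

# the symmetrized sum is automatically Bose symmetric in the final photons
assert crossed_bose_sum(D, q, [f1, f2]) == crossed_bose_sum(D, q, [f2, f1])
```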
Two technical issues are relevant to the present discussion and to the presentation of Section \[subsec:threee\]. The first issue concerns the choice of the arbitrary massless momenta: $\hat{q}$, $\hat{q}^\prime$, and $\hat{q}^{\prime\prime}$. Góngora and Stuart point out that one can substantially simplify some of the expressions by a judicious choice of these auxiliary momenta. They present results for two equivalently-simple sets of momenta. This approach provides an important cross check (one must find identical results for both sets of auxiliary momenta) and greatly facilitated the debugging of the GS manuscript and the computer code. A number of typographical errors were discovered in Ref. [@ref:gs] and are listed in Appendix \[sec:errata\]. One should note that the first set of auxiliary momenta (used to calculate GS Eqs. 3.3.1-3.10.2) always produces singularities when the initial state electron is longitudinally polarized whereas the second set (used to calculate GS Eqs. C.1.1-C.8.2) never develops singularities so long as both photons have non-zero energy.
The second issue concerns the evaluation of spinors of negative momenta. In order to preserve the following (very useful) relationship, $$\frac{1}{2}\left(1\pm\gamma_5\right)\not\!p = u_\pm(p)\bar{u}_\pm(p),$$ it is necessary to define negative momentum spinors in the following manner, $$\bar{u}_\pm(-p) = i\bar{u}_\pm(p),\quad\quad u_\pm(-p) = iu_\pm(p).
\label{eq:negp}$$ This, in turn, implies that spinor products of negative arguments behave as follows, $$\begin{aligned}
s_\pm(-q_1,q_2) & = & s_\pm(q_1,-q_2) = is_\pm(q_1,q_2) \\
s_\pm(-q_1,-q_2) & = & -s_\pm(q_1,q_2),\end{aligned}$$ and that external photon polarization vectors are invariant under the transformation $q\to-q$ (see Eq. \[eq:epsdef\]), $$\epsilon^\mu(-q,{\hat{q}}) = \epsilon^\mu(q,{\hat{q}}).
\label{eq:epsneg}$$
The actual cross section for the process $e^-(s)\gamma(\lambda)\to
e^-\gamma\gamma$ is calculated in the center-of-mass (cm) frame from the matrix element given in Eq. \[eq:dlll\] using the following expression, $$\frac{d^5\sigma^{(1)}_{e\gamma\gamma}}{dE_e^\prime d\Omega_e^\prime
dE_\gamma^\prime d\phi_\gamma^\prime}(s,\lambda) =
\frac{1}{64(2\pi)^5E_\gamma(E_e+P_e)}
\sum_{\lambda^\prime,\lambda^{\prime\prime},s^\prime} \left| {\cal
M}_{\lambda;\lambda^\prime\lambda^{\prime\prime}}(s,s^\prime) \right|^2,
\label{eq:eggxs}$$ where: $E_e$ and $P_e$ are the energy and 3-momentum of the incident electron, $E_\gamma$ is the energy of the incident photon, $E_e^\prime$ and $\Omega_e^\prime$ are the energy and direction of the final state electron, $E_\gamma^\prime$ is the energy of one of the final state photons, and $\phi_\gamma^\prime$ is the azimuth of the final state photon with respect to the final-state electron direction [@ref:phinote]. Note that Eq. \[eq:eggxs\] includes a factor of $1/2$ to account for the identical photons in the final state.
Virtual Corrections {#subsec:VC}
-------------------
The matrix element for the process $e^-(s)\gamma(\lambda)\to
e^-(s^\prime)\gamma(\lambda^\prime)$ is expressed by Tsai, DeRaad, and Milton in the following form [@ref:normnote], $${\cal M}^{(j)}_{\lambda;\lambda^\prime}(s,s^\prime) = \frac{1}{2m^4}\sum_{i=1}^6
\bar{u}(p^\prime,s^\prime)\epsilon_{\lambda^\prime}(q^\prime)\cdot{\cal L}_i\cdot\epsilon_{\lambda}(q)u(p,s)\ M^{(j)}_i
\label{eq:mattwo}$$ where $j=0,1$ labels the order of the matrix element and the six gauge-invariant, singularity-free, kinematic-zero-free, Dirac tensors ${\cal
L}_i$ are defined by Bardeen and Tung [@ref:tb]. The authors calculate the six invariant matrix elements $M^{(j)}_i$ within the framework of Schwinger source theory to order-$\alpha$ ($M_i^{(0)}$) and to order-$\alpha^2$ ($M_i^{(1)}$). They explicitly consider the case that the initial- and final-state electrons are longitudinally polarized and express the matrix element as a set of six helicity amplitudes (due to the charge-conjugation and time-reversal symmetries, only six of the eight matrix elements defined in Eq. \[eq:mattwo\] are independent) which are linear combinations of the six invariant matrix elements. The helicity amplitudes are then used to derive an order-$\alpha^3$ expression for the unpolarized cross section which is found to agree with the calculation of Brown and Feynman. This cross check was found to be useful in locating three typographical sign errors in the rendering of the helicity amplitudes (which, given the complexity of the expressions, is a remarkably small number). The errors are listed in Appendix \[sec:errata\].
As was mentioned in the introduction, Góngora and Stuart supply spinor-product expressions for the six tensors, $$T^i_{\lambda\lambda^\prime}(s,s^\prime)=\bar{u}(p^\prime,s^\prime)
\epsilon_{\lambda^\prime}(q^\prime)\cdot{\cal
L}_i\cdot\epsilon_{\lambda}(q)u(p,s),$$ which permit the application of the virtual corrections contained in the invariant matrix elements $M_i^{(1)}$ to the case of general initial-state and final-state electron spin directions. To make use of these, the system of six equations which define the helicity amplitudes was inverted to extract the $M_i$.
The order-$\alpha^2$ and order-$\alpha^3$ cross sections for the process $e^-(s)\gamma(\lambda)\to e^-\gamma$ are then calculated (in the cm-frame) from the order-$\alpha$ and order-$\alpha^2$ matrix elements as follows, $$\begin{aligned}
\frac{d^2\sigma^{(0)}_{e\gamma}}{d\Omega_e^\prime}(s,\lambda) & = &
\frac{1}{64\pi^2\left[m^2+2E_\gamma(E_e+P_e)\right]}
\sum_{\lambda^\prime,s^\prime} \left|{\cal
M}_{\lambda;\lambda^\prime}^{(0)}(s,s^\prime)\right|^2 \label{eq:xsegz}\\
\frac{d^2\sigma^{(1V)}_{e\gamma}}{d\Omega_e^\prime}(s,\lambda) & = &
\frac{1}{64\pi^2\left[m^2+2E_\gamma(E_e+P_e)\right]}
\sum_{\lambda^\prime,s^\prime} 2\Re\left[{\cal
M}_{\lambda;\lambda^\prime}^{(0)}(s,s^\prime){\cal
M}_{\lambda;\lambda^\prime}^{(1)*}(s,s^\prime)\right],
\label{eq:xsegov}\end{aligned}$$ where the order-$\alpha^3$ cross section has been labeled as $\sigma^{(1V)}_{e\gamma}$ to explicitly indicate that Eq. \[eq:xsegov\] describes virtual corrections only.
Soft-Photon Corrections {#subsec:SPC}
-----------------------
The order-$\alpha^3$ cross section defined in Eq. \[eq:xsegov\] contains a term that depends logarithmically upon a small, but nonzero, fictitious photon mass ($m_\gamma$) used to regulate singularities in the virtual corrections. This unphysical term is cancelled by a similar term which arises in the cross section for $e^-\gamma\to e^-\gamma\gamma$ for slightly massive photons. Brown and Feynman discuss this point at some length in Ref. [@ref:bf]. Since events with additional photons of energy less than some small value $k_\gamma^{min}$ are experimentally indistinguishable from the two-body final state, they explicitly integrate the three-body cross section over the extra photon momenta $q^{\prime\prime}$ in the region $m_\gamma<E_\gamma^{\prime\prime}<k_\gamma^{min}$. The resulting soft-photon cross section is approximately equal to the product of a function $J$ and the order-$\alpha^2$ cross section. The order-$\alpha^3$ 2-body cross section can now be defined as a function of $k_\gamma^{min}$, $$\begin{aligned}
\frac{d^2\sigma^{(1)}_{e\gamma}}{d\Omega_e^\prime}(s,\lambda;k_\gamma^{min}) &
= & \frac{d^2\sigma^{(1V)}_{e\gamma}}{d\Omega_e^\prime}(s,\lambda;m_\gamma) +
\int_{m_\gamma}^{k_\gamma^{min}} d^3q^{\prime\prime}
\frac{d^5\sigma^{(1)}_{e\gamma\gamma}}{d\Omega_e^\prime
d^3q^{\prime\prime}}(s,\lambda) \nonumber \\
& \simeq & \frac{d^2\sigma^{(1V)}_{e\gamma}}{d\Omega_e^\prime}(s,\lambda;m_\gamma)
+ J(m_\gamma,k_\gamma^{min},\Omega_e^\prime)
\frac{d^2\sigma^{(0)}_{e\gamma}}{d\Omega_e^\prime}(s,\lambda),
\label{eq:jdef}\end{aligned}$$ where $\sigma^{(1)}_{e\gamma}$ is independent of $m_\gamma$. One should note that although Eq. \[eq:jdef\] is independent of reference frame, the actual integration [@ref:bf; @ref:tdrm] was performed in the rest frame of the initial-state electron. The use of the resulting expression for $J$ implies that the quantity $k_\gamma^{min}$ is defined in that frame.
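The cancellation of the fictitious photon mass in Eq. \[eq:jdef\] can be illustrated with a toy model: the virtual cross section carries a $+b\ln m_\gamma$ term and the soft factor $J$ carries a $b\ln(k_\gamma^{min}/m_\gamma)$ term, so their sum depends on $k_\gamma^{min}$ only. The coefficients below are arbitrary stand-ins, not the actual Compton ones:

```python
import math

SIGMA0 = 1.3      # toy tree-level cross section (mb)
B = 0.7           # toy coefficient of the infrared logarithm

def sigma_1v(m_gamma):
    """Toy virtual correction: finite piece plus the m_gamma-dependent IR log."""
    return 0.25 + B * SIGMA0 * math.log(m_gamma)

def soft_J(m_gamma, k_min):
    """Toy soft-photon factor J: cancels the m_gamma dependence."""
    return B * math.log(k_min / m_gamma)

k_min = 1.0e-4
totals = [sigma_1v(mg) + soft_J(mg, k_min) * SIGMA0
          for mg in (1.0e-12, 1.0e-9, 1.0e-6)]
# the individual pieces swing by tens of units, but the sum is flat in m_gamma
assert max(totals) - min(totals) < 1e-9
```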
The $e^-e^+e^-$ Final State {#subsec:threee}
---------------------------
The cross section for the process $e^-(s)\gamma(\lambda)\to
e^-(s^\prime)e^+(\bar{s})e^-(s^{\prime\prime})$ is calculated using the massive spinor-product techniques given in Ref. [@ref:gs]. Since the work described there does not involve positrons, its authors did not define massive positron spinors. It is straightforward to do this from Eqs. \[eq:upsdef\] and \[eq:p1p2def\] by making the replacement $m\to -m$. This interchanges the massless momentum vectors, $p_1\leftrightarrow p_2$, and yields the following massive positron spinors, $$\begin{aligned}
v(p,s) & = & \frac{s_+(p_1,p_2)}{m} u_+(p_2) + u_-(p_1) \\
\bar{v}(p,s) & = & -\frac{s_-(p_1,p_2)}{m} \bar{u}_+(p_2) + \bar{u}_-(p_1).
\label{eq:vpsdef}\end{aligned}$$ These spinors have the correct normalization and orthogonality properties, $$\begin{aligned}
\bar{u}(p,s)u(p,s) & = & 2m, \quad\bar{v}(p,s)v(p,s) = -2m, \\
\bar{u}(p,s)v(p,s) & = & \bar{v}(p,s)u(p,s) = 0.\end{aligned}$$ Similarly, the evaluation of a massive spinor with a negative momentum leads to the replacements $\{p_1,p_2\}\to\{-p_2,-p_1\}$ and using Eq. \[eq:negp\] one finds the correct behavior to within extra phases, $$\begin{aligned}
v(-p,s) & = & iu(p,s),\quad\bar{v}(-p,s) = i\bar{u}(p,s), \\
u(-p,s) & = & iv(p,s),\quad\bar{u}(-p,s) = i\bar{v}(p,s).\end{aligned}$$ These extra phases do not occur in the case of external photons (see Eq. \[eq:epsneg\]) and must be treated with some care. To avoid disturbing the phase relationships between diagrams, a calculation must be formulated so that all diagrams contain the same number of momentum-reversed massive spinors.
The actual calculation was carried out by calculating spinor-product expressions for two Feynman amplitudes, $D_{1\lambda}$ and $D_{2\lambda}$, for the process $\gamma\to e^-e^+e^-e^+$ as shown in Fig. \[fg:two\]. These expressions are listed in Appendix \[sec:mateee\]. The matrix element is the sum of the eight diagrams generated by reversing one of the positron momenta and by interchanging final state electron momenta (according to Fermi-Dirac statistics), $$\begin{aligned}
{\cal M}_{\lambda}(s,\bar{s},s^\prime,s^{\prime\prime}) & = &
D_{1\lambda}(-p,s;\bar{p},\bar{s};p^\prime,s^\prime;p^{\prime\prime},
s^{\prime\prime}) - D_{1\lambda}(-p,s;\bar{p},\bar{s};p^{\prime\prime},
s^{\prime\prime};p^\prime,s^\prime) \nonumber \\ & - &
D_{1\lambda}(\bar{p},\bar{s};-p,s;p^\prime,s^\prime;p^{\prime\prime},
s^{\prime\prime}) +
D_{1\lambda}(\bar{p},\bar{s};-p,s;p^{\prime\prime},s^{\prime\prime};
p^\prime,s^\prime) \nonumber \\ & + &
D_{2\lambda}(-p,s;\bar{p},\bar{s};p^\prime,s^\prime;p^{\prime\prime},
s^{\prime\prime}) - D_{2\lambda}(-p,s;\bar{p},\bar{s};p^{\prime\prime},
s^{\prime\prime};p^\prime,s^\prime) \nonumber \\ & - &
D_{2\lambda}(\bar{p},\bar{s};-p,s;p^\prime,s^\prime;p^{\prime\prime},
s^{\prime\prime}) +
D_{2\lambda}(\bar{p},\bar{s};-p,s;p^{\prime\prime},s^{\prime\prime};
p^\prime,s^\prime), \label{eq:meee}\end{aligned}$$ where: $p$ and $s$ are the momentum and spin of the initial-state electron, $\bar{p}$ and $\bar{s}$ are the momentum and spin of the final-state positron, $p^\prime$ and $s^\prime$ are the momentum and spin of one final-state electron, and $p^{\prime\prime}$ and $s^{\prime\prime}$ are the momentum and spin of the other final-state electron. This particular formulation does not benefit from algebraic simplifications due to a clever choice of the photon auxiliary momentum $\hat{q}$. It is possible to reduce the total number of terms in the matrix element from 112 to 96 by defining four amplitudes instead of two and by choosing $\hat{q}$ appropriately. As formulated, the choice of $\hat{q}$ is truly arbitrary.
The matrix element given in Eq. \[eq:meee\] is converted into a cross section with an expression that is very similar to Eq. \[eq:eggxs\], $$\frac{d^5\sigma^{(1)}_{eee}}{dE_e^\prime d\Omega_e^\prime dE_e^{\prime\prime}
d\phi_e^{\prime\prime}}(s,\lambda) = \frac{1}{64(2\pi)^5E_\gamma(E_e+P_e)}
\sum_{\bar{s},s^\prime,s^{\prime\prime}} \left| {\cal
M}_{\lambda}(s,\bar{s},s^\prime,s^{\prime\prime}) \right|^2,
\label{eq:eeexs}$$ where $E_e^{\prime\prime}$ is the energy of the second electron and $\phi_e^{\prime\prime}$ is the azimuth of the second electron with respect to the first electron direction [@ref:phinote].
Implementation {#sec:implement}
==============
The Fortran code COMRAD consists of three weighted Monte Carlo generators: COMTN2, COMEGG, and COMEEE. These perform integrations of the cross sections for the $e^-\gamma$, $e^-\gamma\gamma$, and $e^-e^+e^-$ final states, respectively. Operational details are given in Appendix \[sec:opdet\].
Weighting Scheme
----------------
Each of the generators produces events that consist of momentum four-vectors of the final-state particles in the laboratory frame. Each event is also accompanied by a vector of four event weights $W_j$. The event weights are defined as follows: $$\begin{aligned}
W_1 & = & \frac{1}{2\rho^{(n)}(x)}\left[\frac{d^n\sigma^{(0)}}
{dx^n}(s,-)+\frac{d^n\sigma^{(0)}}{dx^n}(s,+) \right] \label{eq:wgti} \\
W_2 & = & \frac{1}{2\rho^{(n)}(x)}\left[\frac{d^n\sigma^{(0)}}
{dx^n}(s,-)-\frac{d^n\sigma^{(0)}}{dx^n}(s,+) \right] \label{eq:wgtii} \\
W_3 & = & \frac{1}{2\rho^{(n)}(x)}\left[\frac{d^n\sigma^{(1)}}
{dx^n}(s,-)+\frac{d^n\sigma^{(1)}}{dx^n}(s,+) \right] \label{eq:wgtiii} \\
W_4 & = & \frac{1}{2\rho^{(n)}(x)}\left[\frac{d^n\sigma^{(1)}}
{dx^n}(s,-)-\frac{d^n\sigma^{(1)}}{dx^n}(s,+) \right], \label{eq:wgtiv} \end{aligned}$$ where $n$ is the dimensionality of the integrated space ($n=2$ for the $e^-\gamma$ final state, and $n=5$ for the three-body final states) and $\rho^{(n)}$ is the density of trials in that space.
The sums of the weights yield partly or fully integrated cross sections. It is convenient to define the following notation for these sums, $$\begin{aligned}
&&\sigma^{(0)}_u(x^\prime) = \sum_i W_1^i \\
&&\sigma^{(0)}_p(s;x^\prime) = \sum_i W_2^i \\
&&\sigma^{(1)}_u(x^\prime) = \sum_i W_3^i \\
&&\sigma^{(1)}_p(s;x^\prime) = \sum_i W_4^i, \end{aligned}$$ where the variables $x^\prime$ define the kinematical binning chosen for a particular problem. The sums of the $W_1$ and $W_3$ weights yield the order-$\alpha^2$ and order-$\alpha^3$ unpolarized cross sections, respectively. The sums of the $W_2$ and $W_4$ weights yield the order-$\alpha^2$ and order-$\alpha^3$ polarized cross sections and depend upon the initial spin direction $s$ (all cross sections are given in millibarns). It is also convenient to define notation for the fully corrected cross sections and for the asymmetry functions, $$\begin{aligned}
&&\sigma_u(x^\prime) = \sigma^{(0)}_u(x^\prime)+\sigma^{(1)}_u(x^\prime) \\
&&\sigma_p(s;x^\prime) = \sigma^{(0)}_p(s;x^\prime)+\sigma^{(1)}_p(s;x^\prime) \\
&&A^{(0)}(s;x^\prime) = \frac{\sigma^{(0)}_p(s;x^\prime)}
{\sigma^{(0)}_u(x^\prime)} \\
&&A(s;x^\prime) = \frac{\sigma_p(s;x^\prime)} {\sigma_u(x^\prime)}. \end{aligned}$$
Note that the polarized cross sections are chosen to be the differences of the negative-helicity photon cross sections and the positive-helicity photon cross sections. In particle physics terminology, these are called left-handed-helicity and right-handed-helicity photons, respectively. One should note that in optics terminology, a negative-helicity photon is called Right-Circularly-Polarized (RCP) and a positive-helicity photon is called Left-Circularly-Polarized (LCP).
The generation of multiple weights per event trial allows the user to significantly improve the statistical power of a given set of Monte Carlo trials. The uncertainty on any function of the four quantities $\sigma_j$ = {$\sigma_u^{(0)}$, $\sigma_p^{(0)}$, $\sigma_u^{(1)}$, $\sigma_p^{(1)}$} is always smaller when generated with correlated weights than separate, uncorrelated calculations would yield. The correct estimate of the statistical uncertainty on any such function requires that the user accumulate the full 4$\times$4 error matrix $E_{jk}$, $$E_{jk}=\sum_{i=1}^{\rm ntrial} W^i_jW^i_k,$$ where the sum is over all event trials. This matrix must then be propagated correctly to the final result. As an example, consider the calculation of the uncertainty on the quantity, $\Delta A$, which is the difference of the full, order-$\alpha^3$-corrected polarized asymmetry and the order-$\alpha^2$ asymmetry, $$\Delta A(s;x^\prime) = A(s;x^\prime) - A^{(0)}(s;x^\prime) =
\frac{\sigma_p^{(0)}+\sigma_p^{(1)}}{\sigma_u^{(0)}+\sigma_u^{(1)}} -
\frac{\sigma_p^{(0)}}{\sigma_u^{(0)}} \label{eq:dadef}$$ The squared uncertainty on this quantity is given by the expression, $$\left[\delta(\Delta A)\right]^2 = \sum_{j,k=1}^4 \frac{\partial\Delta A}{\partial\sigma_j}
E_{jk} \frac{\partial\Delta A}{\partial\sigma_k}$$ where $j$ and $k$ label the four cross sections.
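The accumulation of $E_{jk}$ and the propagation to $\delta(\Delta A)$ can be sketched as follows, with toy per-trial weights standing in for the COMRAD outputs (the propagated quantity is the variance; the quoted uncertainty is its square root):

```python
import numpy as np

rng = np.random.default_rng(7)
ntrial = 10_000
# toy per-trial weight vectors (W1, W2, W3, W4); stand-ins for generator output
W = rng.normal(loc=(1.0, 0.15, 0.01, 0.002), scale=0.3, size=(ntrial, 4))

sig = W.sum(axis=0)        # (sigma_u^(0), sigma_p^(0), sigma_u^(1), sigma_p^(1))
E = W.T @ W                # 4x4 error matrix E_jk = sum_i W_j^i W_k^i

u0, p0, u1, p1 = sig
delta_A = (p0 + p1) / (u0 + u1) - p0 / u0

# gradient of Delta A with respect to the four cross sections
grad = np.array([
    -(p0 + p1) / (u0 + u1)**2 + p0 / u0**2,   # d(Delta A)/d sigma_u^(0)
    1.0 / (u0 + u1) - 1.0 / u0,               # d(Delta A)/d sigma_p^(0)
    -(p0 + p1) / (u0 + u1)**2,                # d(Delta A)/d sigma_u^(1)
    1.0 / (u0 + u1),                          # d(Delta A)/d sigma_p^(1)
])
variance = grad @ E @ grad                    # quadratic form with the error matrix
uncertainty = float(np.sqrt(variance))
```

Because the four sums come from the same trials, the off-diagonal entries of $E_{jk}$ are essential; dropping them would overestimate the uncertainty on the strongly correlated ratio difference.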
Cross Checks
------------
The code COMRAD has been checked in a number of ways. The order-$\alpha^3$ unpolarized cross section $\sigma^{(1)}_{e\gamma}$ calculated from the unpolarized initial state by COMTN2 is numerically identical to the one calculated from the diagnostic expression given by TDM in Ref. [@ref:tdrm] and to the one given by Brown and Feynman in Ref. [@ref:bf]. It is verified that this cross section is rigorously independent of the value chosen for the photon mass $m_\gamma$. It is also verified that the order-$\alpha^3$ polarized cross section is invariant under helicity-flips of both incident particles (as required by parity invariance).
The hard-photon cross section calculated by COMEGG is verified to be independent of the choice of photon auxiliary momenta. The polarized hard-photon cross section is found to be invariant under helicity-flips of both incident particles. The dependence of the cross sections $\sigma^{(1)}_{e\gamma}$ and $\sigma^{(1)}_{e\gamma\gamma}$ on $k_\gamma^{min}$ is shown in part (a) of Fig. \[fg:three\] for the case of a 50 GeV electron colliding with a 2.34 eV photon (one of the cases considered by Veltman in Ref. [@ref:veltman]). Note that the cross sections vary by approximately 1.4 mb as $k_\gamma^{min}$ is varied from 30 eV to 10 keV. The sum of the cross sections, $\sigma^{(1)}_u$, is shown in part (b) of the figure and is constant at the 0.002 mb level until $k_\gamma^{min}$ reaches several percent of the maximum photon energy and the two-body approximation for the soft-photon cross section begins to fail. Even then, the 10 keV point differs by only 0.012 mb from the 30 eV point.
The $e^-e^+e^-$ cross section calculated by COMEEE is found to be independent of the choice of photon auxiliary momentum. The polarized $e^-e^+e^-$ cross section is found to be invariant under helicity-flips of both incident particles.
The sum of the virtual, soft-photon, and hard-photon cross sections calculated by COMRAD is compared with the numerical result presented in Ref. [@ref:veltman] for the case of a 50 GeV electron colliding with a 2.34 eV photon. The ratio of the unpolarized cross sections $\sigma^{(1)}_u/\sigma^{(0)}_u$ is presented as a function of the laboratory energy of the scattered electron $E^\prime_{lab}$ in part (a) of Fig. \[fg:four\]. The COMRAD calculation predicts that $\sigma^{(1)}_u/\sigma^{(0)}_u$ increases from $-$0.14% near the kinematical edge at 17.90 GeV to $+$0.2% near the beam energy. The Veltman calculation predicts that the ratio decreases from $+$0.3% near the edge to $+$0.2% near the beam energy. The physically correct behavior follows from a simple kinematical analysis. In the center-of-mass frame, the emission of an additional photon reduces the energy and momentum available to the scattered electron. Given the large mass of the electron, the fractional change in the momentum $P^{\prime}_e$ is larger than the fractional change in the energy $E^\prime_e$. The laboratory energy of a backscattered electron is given by the following expression, $$E^\prime_{lab} = \gamma\left(E^\prime_e-P^{\prime}_e\right),$$ where $\gamma$ is the Lorentz factor for the highly-boosted cm-frame (the velocity is assumed to be one). It is straightforward to show that although $E^\prime_e$ and $P^\prime_e$ are decreased by the emission of an additional photon, the difference $E^\prime_e-P^\prime_e$ increases. The laboratory energy of the backscattered electron is therefore [*increased*]{} by the emission of an additional photon. It is clear that photon emission depopulates the Compton kinematical edge region and that $\sigma^{(1)}_u$ should be negative near the endpoint.
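The kinematical argument can be checked numerically. Since $E^\prime_e-P^\prime_e=m^2/(E^\prime_e+P^\prime_e)$, any reduction of the cm-frame energy $E^\prime_e$ at fixed mass increases the difference, and hence the laboratory energy. A minimal Python sketch (the boost factor in any example is illustrative, not a value from the text):

```python
import math

M_E = 0.000511  # electron mass in GeV (approximate)

def backscatter_lab_energy(e_cm, gamma):
    """Lab energy of a backscattered electron, E'_lab = gamma*(E' - P'),
    for a cm-frame electron of energy e_cm boosted by Lorentz factor gamma."""
    p_cm = math.sqrt(e_cm**2 - M_E**2)
    return gamma * (e_cm - p_cm)
```

Evaluating this at two cm-frame energies shows directly that the electron with *less* cm-frame energy (because a photon was emitted) lands at *higher* laboratory energy, which is why radiation depopulates the Compton edge.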
The order-$\alpha$ correction to the longitudinal polarization asymmetry is shown in part (b) of Fig. \[fg:four\]. The quantity $\Delta
A(s_z;{E^\prime_{\rm lab}})$ (defined in Eq. \[eq:dadef\]) predicted by COMRAD is compared with the similar quantity given in Ref. [@ref:veltman]. Good agreement is observed.
Results {#sec:results}
=======
This section describes the application of the COMRAD code to several accelerator-based polarimetry cases. The first case (the SLD polarimeter) deals with the detection of the scattered electrons to measure longitudinal polarization. The second case (the HERA polarimeters) involves the detection of scattered photons to measure longitudinal and transverse electron polarization. The final case (a Linear Collider polarimeter) illustrates the detection of final state electrons when there is sufficient energy to produce the $e^-e^+e^-$ final state.
The SLD Polarimeter
-------------------
The SLD Polarimeter [@ref:alr] is located 33 m downstream of the SLC interaction point (IP). After the 45.65 GeV longitudinally-polarized electron beam passes through the IP and before it is deflected by dipole magnets, it collides with a 2.33 eV circularly-polarized photon beam produced by a pulsed frequency-doubled Nd:YAG laser. The scattered and unscattered components of the electron beam are separated by a dipole-quadrupole spectrometer. The scattered electrons are dispersed horizontally and exit the vacuum system through a thin window. A multichannel Cherenkov detector observes the scattered electrons in the interval from 17 to 27 GeV/c.
The helicities of the electron and photon beams are changed on each beam pulse according to pseudo-random sequences. Each channel of the Cherenkov detector measures the asymmetry in the signals $S_j$ observed when the electron and photon spins are parallel ($|J_Z|=3/2$) and anti-parallel ($|J_Z|=1/2$), $$A^C_j=\frac{S_j(3/2) - S_j(1/2)}{S_j(3/2) + S_j(1/2)}={{\cal P}_e^z}{{\cal P}_\gamma}{{\cal A}}_j$$ where: $j$ labels the channels of the detector, ${{\cal P}_e^z}$ is the electron beam polarization, ${{\cal P}_\gamma}$ is the photon polarization, and ${{\cal A}}_j$ is the analyzing power of the $j^{th}$ channel. The analyzing powers are defined in terms of the Compton scattering cross section and the response function of each channel $R_j$, $${{\cal A}}_j = \frac{\int d{E^\prime_{\rm lab}}\sigma_u({E^\prime_{\rm lab}})R_j({E^\prime_{\rm lab}})A(s_z;{E^\prime_{\rm lab}})}
{\int d{E^\prime_{\rm lab}}\sigma_u({E^\prime_{\rm lab}})R_j({E^\prime_{\rm lab}})} ,$$ where ${E^\prime_{\rm lab}}$ is the laboratory energy of the scattered electron. The unpolarized cross section $\sigma_u({E^\prime_{\rm lab}})$ and longitudinal polarization asymmetry $A(s_z;{E^\prime_{\rm lab}})$ are shown as functions of scattered electron energy in Fig. \[fg:five\]. The order-$\alpha^2$ quantities are shown as dashed curves in parts (a) and (b) of the figure. The fractional correction to the unpolarized cross section $\sigma^{(1)}/\sigma^{(0)}$ is shown as the solid curve in part (a) of the figure. It increases from $-$0.2% near the endpoint at 17.36 GeV to $+$0.2% at the beam energy. The correction to the asymmetry function $\Delta A(s_z;{E^\prime_{\rm lab}})$ is shown as the solid curve in part (b) of the figure. Near the endpoint, $\Delta A$ is almost exactly one thousand times smaller than the order-$\alpha^2$ asymmetry. It becomes fractionally larger near the zero of $A^{(0)}$ at 25.15 GeV.
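The analyzing-power integral can be sketched as a discrete weighted average. The Python fragment below assumes a uniform grid in ${E^\prime_{\rm lab}}$, so the common bin width cancels in the ratio; any cross-section and response shapes fed to it are placeholders, not the SLD ones:

```python
import numpy as np

def analyzing_power(E, sigma_u, asym, response):
    """Channel analyzing power: the asymmetry A(E) averaged with
    weight sigma_u(E)*R(E) on a uniform grid (bin width cancels)."""
    w = np.asarray(sigma_u) * np.asarray(response)
    return float(np.sum(w * asym) / np.sum(w))
```

A quick sanity check of the construction: if $A$ is constant across the channel acceptance, the weighted average returns that constant regardless of the cross-section and response shapes.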
The effects of the order-$\alpha^3$ corrections upon the analyzing powers of the seven active channels of the Cherenkov detector are listed in Table \[tab:one\]. The nominal acceptance in scattered energy, the order-$\alpha^2$ analyzing power ${{\cal A}}_j^{(0)}$, and the order-$\alpha^3$ fractional correction to the analyzing power are listed for each channel. The SLC beam polarization is determined from the channels near the endpoint (5-7). It is clear that proper inclusion of the radiative corrections increases the analyzing powers by 0.1% of themselves. This decreases the beam polarization by the same fractional amount. Since the left-right asymmetry is the ratio of the measured $Z$-event asymmetry $A_Z$ and the beam polarization, $${A_{LR}}= \frac{A_Z}{{{\cal P}_e^z}},$$ the application of the order-$\alpha^3$ corrections [*increases*]{} the measured value of ${A_{LR}}$ by 0.1% of itself. The corrections are much too small and have the wrong sign to account for the SLD/LEP discrepancy.
The HERA Polarimeters
---------------------
The $e^\pm$ ring of the HERA $e^\pm$-$p$ collider is the first $e^\pm$ storage ring to operate routinely with polarized beams and the first to operate with a longitudinally polarized beam [@ref:HERAbeams]. The ring is instrumented with transverse and longitudinal Compton polarimeters.
### The Transverse Polarimeter
The HERA transverse polarimeter [@ref:HERAtrans] collides 2.41 eV photons from a continuous-wave argon-ion laser with the 27.5 GeV HERA positron beam. The scattered photons are separated from the positron beam by the dipole magnets of the accelerator lattice and are detected by a segmented tungsten-scintillator calorimeter located about 65 m from the $e^+$-$\gamma$ collision point. The scattering rate is sufficiently small that the calorimeter measures the energy and vertical position of individual photons.
When the positron beam is transversely polarized, the differential cross section depends upon the azimuthal directions of the scattered particles. The average direction of the scattered photons changes when the laser helicity is reversed. The polarimeter measures the projected vertical direction $\theta_y$ and energy ${k^\prime_{\rm lab}}$ of each scattered photon. The shift in the centroid of the $\theta_y$ distribution that occurs with helicity reversal $\delta\theta^{meas}_y({k^\prime_{\rm lab}})$ is proportional to the product of the photon polarization and the vertical positron polarization ${{\cal P}_e^y}$, $$\delta\theta^{meas}_y({k^\prime_{\rm lab}}) = \langle\theta_y\rangle_- -
\langle\theta_y\rangle_+ = {{\cal P}_e^y}\cdot{{\cal P}_\gamma}\cdot\delta\theta_y({k^\prime_{\rm lab}}),$$ where $\delta\theta_y({k^\prime_{\rm lab}})$ is the shift for 100% positron and photon polarizations. This quantity is given by the following expression, $$\delta\theta_y({k^\prime_{\rm lab}}) = 2\frac{\int d\phi_\gamma^\prime
\sigma_p(s_y;{k^\prime_{\rm lab}},\phi_\gamma^\prime)\sin\theta_\gamma^\prime
\sin\phi_\gamma^\prime} {\sigma_u({k^\prime_{\rm lab}})},$$ where $\theta_\gamma^\prime$ and $\phi_\gamma^\prime$ are the polar angle and azimuth of the scattered photon in the laboratory frame. Note that $\theta_\gamma^\prime$ is a constant for fixed ${k^\prime_{\rm lab}}$.
The order-$\alpha^3$ corrections modify the function $\delta\theta_y({k^\prime_{\rm lab}})$. The exact modification depends upon the details of how the polarimeter reacts to the two-photon final state. It is assumed that the segmented calorimeter of the HERA transverse polarimeter cannot distinguish between one-photon and two-photon final states. The energy measured by the calorimeter for two-photon final states is then the sum of the individual photon energies ${k^\prime_{\rm lab}}={k^\prime_{\rm lab}}(1)+{k^\prime_{\rm lab}}(2)$. The measured vertical angle is assumed to be the energy-weighted mean of the individual photon angles, $\theta_y =
[\theta_y(1){k^\prime_{\rm lab}}(1)+\theta_y(2){k^\prime_{\rm lab}}(2)]/{k^\prime_{\rm lab}}$. The order-$\alpha^2$ function $\delta\theta_y^{(0)}({k^\prime_{\rm lab}})$ is plotted as a function of laboratory photon energy in Fig. \[fg:six\]. The maximum angular separation of 5.6 $\mu$rad occurs near 8 GeV. The fractional change caused by the order-$\alpha^3$ corrections $\Delta\delta\theta_y/\delta\theta_y^{(0)}$ is also shown as a function of ${k^\prime_{\rm lab}}$. Note that the correction is typically $+$0.08% near the maximum separation, which would lower the measured transverse polarization by the same fractional amount.
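Under the stated assumption, the calorimeter's response to an unresolved photon pair reduces to two lines of arithmetic. This Python sketch is purely illustrative of that combination rule (the function name is not from any published polarimeter code):

```python
def combined_photon_measurement(k1, th1, k2, th2):
    """Assumed calorimeter response to an unresolved two-photon final
    state: summed energy and energy-weighted vertical angle."""
    k = k1 + k2
    return k, (th1 * k1 + th2 * k2) / k
```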
### The Longitudinal Polarimeter
A longitudinal polarimeter at HERA has been built by the HERMES Collaboration [@ref:HERAlong]. The 27.5 GeV HERA positron beam is brought into collision with a 2.33 eV photon beam produced by a pulsed frequency-doubled Nd:YAG laser. The scattered photons are separated from the positron beam by the dipole magnets of the accelerator lattice and are detected by an array of NaBi(WO$_4$)$_2$ crystals. Since several thousand scattered photons are produced on each pulse, it is not possible to measure the cross section asymmetry as a function of photon energy. Instead, the calorimeter measures the asymmetry in deposited energy $A_E$ as the photon helicity is reversed, $$A_E = \frac{E_-^{dep}-E_+^{dep}}{E_-^{dep}+E_+^{dep}} = {{\cal P}_e^z}{{\cal P}_\gamma}{{\cal A}}_E,$$ where $E^{dep}_\pm$ is the energy deposited by all accepted photons in the crystal calorimeter. The analyzing power ${{\cal A}}_E$ is given by the following expression, $${{\cal A}}_E = \frac{\int d{k^\prime_{\rm lab}}{k^\prime_{\rm lab}}R({k^\prime_{\rm lab}})\sigma_p(s_z;{k^\prime_{\rm lab}})} {\int d{k^\prime_{\rm lab}}{k^\prime_{\rm lab}}R({k^\prime_{\rm lab}})\sigma_u({k^\prime_{\rm lab}})}$$ where $R({k^\prime_{\rm lab}})$ describes the response of the detector. For this estimate, it is assumed that the calorimeter has uniform response in energy from the minimum accepted energy of 56 MeV (lower energy photons miss the calorimeter) to the maximum energy of 13.62 GeV. The order-$\alpha^2$ analyzing power and the full order-$\alpha^3$ correction are $$\begin{aligned}
{{\cal A}}_E^{(0)} &=& 0.1838 \\
\frac{{{\cal A}}_E-{{\cal A}}_E^{(0)}} {{{\cal A}}_E^{(0)}} &=& +0.20\%.\end{aligned}$$ The fractional correction to the longitudinal polarization scale is therefore $-$0.20%.
A Linear Collider Polarimeter
-----------------------------
Longitudinally polarized beams are likely to be important features of a future Linear Collider. It is assumed that any such machine will include SLC-like polarimetry which detects and momentum-analyzes scattered electrons. The unpolarized cross section $\sigma_u({E^\prime_{\rm lab}})$ and longitudinal polarization asymmetry $A(s_z;{E^\prime_{\rm lab}})$ are shown as functions of scattered electron energy in Fig. \[fg:seven\] for the case of a 500 GeV electron beam colliding with a 2.33 eV photon beam. The order-$\alpha^2$ quantities are shown as dashed curves in parts (a) and (b) of the figure. The cross section is largest near the backscattering edge at 26.42 GeV. The longitudinal asymmetry function is 0.9944 at the kinematic endpoint. It decreases rapidly with increasing energy and passes through zero at 50.19 GeV. The fractional correction to the unpolarized cross section $\sigma^{(1)}/\sigma^{(0)}$ is shown as the solid curve in part (a) of the figure. It increases from $-$1.6% near the endpoint to $+$1.2% at the beam energy. Superimposed upon this is the contribution of the $e^-e^+e^-$ final state which is kinematically constrained to the region $34.36~{\rm GeV}<{E^\prime_{\rm lab}}<386.1~{\rm GeV}$. The effect of this final state is to increase the correction to the 1.0-1.7% level in the kinematically allowed region. The correction to the asymmetry function $\Delta
A(s_z;{E^\prime_{\rm lab}})$ is shown as the solid curve in part (b) of the figure. Near the endpoint, $\Delta A$ is $-$4$\times$10$^{-4}$ and represents a negligible correction. Due to the influence of the $e^-e^+e^-$ final state, it decreases to $-$2.2$\times$10$^{-3}$ near 49 GeV and then increases to $+$5.3$\times$10$^{-3}$ near 306 GeV, where it is a 1% correction to the asymmetry function.
Summary {#sec:summary}
=======
The construction of a computer code, COMRAD, to calculate the cross sections for the spin-polarized processes $e^-\gamma\to
e^-\gamma,e^-\gamma\gamma,e^-e^+e^-$ to order-$\alpha^3$ has been described. The code is based upon the work of Tsai, DeRaad, and Milton [@ref:tdrm] for the virtual and soft-photon corrections. The hard-photon corrections and the application of the virtual corrections to arbitrary electron spin direction are based upon the work of Góngora and Stuart [@ref:gs]. The calculation of the cross section for the $e^-e^+e^-$ final state was performed by the author. As implemented, the code calculates cross sections for circularly-polarized initial-state photons and arbitrarily polarized initial-state electrons. Final-state polarization information is not presented to a user of the code but is present at the matrix element level. The modification of the code to extract this information would not be difficult.
The order-$\alpha^3$ corrections to the longitudinal polarization asymmetry calculated by COMRAD agree well with those of Veltman [@ref:veltman]. However, the order-$\alpha^3$ corrections to the unpolarized cross section calculated by COMRAD do not agree with those of Veltman.
The application of the code to the SLD Compton polarimeter indicates that the order-$\alpha^3$ corrections produce a fractional shift in the SLC polarization scale of $-$0.1%. This shift is much too small and of the wrong sign to account for the discrepancy in the Z-pole asymmetries measured by the SLD Collaboration and the LEP Collaborations.
The application of the code to the photon-based polarimeters at the HERA storage ring indicates that the order-$\alpha^3$ corrections also have a small effect on the measurements of the HERA positron polarization. The effects on the transverse polarization measurements are typically less than 0.1%. The effect upon the calibration of the HERMES longitudinal polarimeter is a somewhat larger 0.2%.
The application of the code to a polarimeter at a future Linear Collider indicates that the order-$\alpha^3$ corrections are very small near the Compton edge but increase to the 1% level elsewhere. The $e^-e^+e^-$ final state contributes significantly to the net corrections.
This work was supported by Department of Energy contract DE-AC03-76SF00515. The author would like to thank Robin Stuart and B.F.L. Ward for their helpful comments on this manuscript. Bennie also pointed out an additional discrepancy between the preprint and published versions of Ref. [@ref:gs].
Errata {#sec:errata}
======
The following typographical errors were found in Ref. [@ref:gs]:
1. All of the spinor products of the form $\bar{u}_\pm(q_1)\!{\not\!p}u_\mp(q_2)$ given in Eqs. 3.3-3.10 and C.1-C.8 formally vanish (they are the traces of an odd number of gamma matrices) and should be replaced by $\bar{u}_\pm(q_1)\!{\not\!p}u_\pm(q_2)$ (the helicities of the $\bar{u}$-spinors are correct in all cases and the helicities of the $u$-spinors are wrong in all cases). The right-hand-sides of the spinor product definitions are nearly all correct (see items 4 and 5 below).
2. The sign of the second term in square brackets on the right-hand-side of Eq. 3.5.1 in the preprint is incorrect \[$-{s_+}({p_2},q^{\prime\prime})\cdots$ should be $+{s_+}({p_2},q^{\prime\prime})\cdots$\]. Unfortunately, a serious typesetting error in the published version significantly altered the equation. The correct equation should read as follows, $$\begin{aligned}
D_{++-}(q,{p_2};q^\prime,{p_2};q^{\prime\prime},{p_1}) & = &
-\frac{{s_+}({p_1},{p_2}){s_-}({p_1},q)}
{{s_+}({p_2},q){s_+}({p_2},q^\prime){s_-}({p_1},q^{\prime\prime})}\ \ \ \ \ \ \ \ \ \
\ \ \ \nonumber \\
&\times&\biggl[\frac{{s_+}({p_1^\prime},q^{\prime\prime}){s_-}({p_1^\prime},{p_2^\prime})}{m^2}
\bar{u}_-({p_1})\!{\not\!p}_bu_-({p_2})\bar{u}_-(q^\prime)\!{\not\!p}_au_-({p_2})
\nonumber \\
&&+{s_+}({p_2},q^{\prime\prime}){s_-}({p_1},{p_2^\prime})
\bar{u}_-(q^\prime)\!{\not\!p}_au_-({p_2}) \biggr]. \nonumber\end{aligned}$$
3. The sign of the fourth term in square brackets on the right-hand-side of Eq. 3.6.1 is incorrect \[$-{s_+}({p_2},q^\prime)\cdots$ should be $+{s_+}({p_2},q^\prime)\cdots$\].
4. The right-hand-side of the first of Eqs. 3.6.2 is incorrect. The quantities $p_1$ and $p_2$ should be replaced by ${p_1^\prime}$ and ${p_2^\prime}$, respectively.
5. The left-hand-side of the second of Eqs. 3.7.2 is incorrect. The quantity ${p_1^\prime}$ should be replaced by ${p_2}$. The right-hand-side is also incorrect. The sign of the second term should be flipped \[$-{s_-}(q^{\prime\prime},{p_2^\prime})\cdots$ should be $+{s_-}(q^{\prime\prime},{p_2^\prime})\cdots$\].
6. The second factor in the second term in square brackets on the right-hand-side of Eq. 3.8.1 should be ${s_-}({p_1},q^{\prime\prime})$ instead of ${s_-}({p_1^\prime},q^{\prime\prime})$.
7. The fourth factor in the fourth term in square brackets on the right-hand-side of Eq. C.7.1 should be ${s_-}({p_2^\prime},q^\prime)$ instead of ${s_-}({p_2^\prime},q)$.
8. The heading of Eqs. 4.10 which states that they define quantities of the form $\epsilon_-^\prime{\cal L}_i\epsilon_+$ is correct and all of the left-hand-sides which state the reverse helicity configuration are wrong.
9. The heading of Eqs. 4.11 which states that they define quantities of the form $\epsilon_+^\prime{\cal L}_i\epsilon_-$ is correct and all of the left-hand-sides which state the reverse helicity configuration are wrong.
The following typographical errors were found in Ref. [@ref:tdrm]:
1. The signs of two of the tree-level helicity amplitudes given in Eqs. 5 are incorrect. The signs of the amplitudes $f^{(2)}(-+;+-)$ \[the third amplitude\] and $f^{(2)}(-+;++)$ \[the fifth amplitude\] should be reversed.
2. The sign of the order-$\alpha^2$ amplitude $f^{(4)}(-+;+-)$ defined in Eq. 8 should also be reversed.
The Matrix Element for $e^-\gamma\to e^-e^+e^-$ {#sec:mateee}
===============================================
The matrix element for the process $e^-(s)\gamma(\lambda)\to
e^-(s^\prime)e^+(\bar{s})e^-(s^{\prime\prime})$ is calculated from the two amplitudes for the process $\gamma(\lambda)\to e^+(s)
e^-(s^\prime)e^+(\bar{s})e^-(s^{\prime\prime})$ shown in Fig. \[fg:two\]. This formulation is chosen so that each term in Eq. \[eq:meee\] has exactly one negative momentum. The internal momenta shown in Fig. \[fg:two\] are defined as follows, $$\begin{aligned}
Q & = & p^{\prime\prime} + \bar{p} \\
p_a & = & -\left(p+Q\right) = p^\prime - q \\
p_b & = & q - p = p^\prime + Q.\end{aligned}$$
Using the techniques described in Ref. [@ref:gs] and the Chisholm identity [@ref:ks], $$\gamma^\mu\bar{u}_\pm(q_1)\gamma_\mu u_\pm(q_2) = 2 \left[
u_\pm(q_2)\bar{u}_\pm(q_1) + u_\mp(q_1)\bar{u}_\mp(q_2) \right],$$ it is straightforward to evaluate the amplitudes $D_{1\lambda}$ and $D_{2\lambda}$. Unfortunately, the exact form for each of these functions depends upon the initial-state photon helicity $\lambda$. They are listed below: $$\begin{aligned}
\lefteqn{D_{1+}(p,s;\bar{p},\bar{s};p^\prime,s^\prime;p^{\prime\prime},
s^{\prime\prime}) = \frac{2\sqrt{2}e^3}{-2p^\prime\cdot q\
Q^2{s_+}({\hat{q}},q)}\Biggl\{ } \nonumber \\ &\ \ &
{s_-}({p_2^\prime},q){s_+}({p_1},{p_2}){s_+}({\hat{q}},{\bar{p}_1}){s_-}({p_2^{\prime\prime}},{p_2}) +
{s_+}({\hat{q}},{p_1^\prime}){s_-}({p_1^\prime},{p_2^\prime}){s_-}(q,{p_2^{\prime\prime}}){s_+}({\bar{p}_1},{p_1}) \nonumber \\
&\ \ & + {s_-}({p_2^\prime},q){s_+}({\bar{p}_1},{p_1})\Bigl[{s_+}({\hat{q}},{p_1^\prime}){s_-}({p_1^\prime},{p_2^{\prime\prime}}) +
{s_+}({\hat{q}},{p_2^\prime}){s_-}({p_2^\prime},{p_2^{\prime\prime}}) - {s_+}({\hat{q}},q){s_-}(q,{p_2^{\prime\prime}}) \Bigr]
\nonumber \\ &\ \ &
+\frac{{s_+}({\hat{q}},{p_1^\prime}){s_-}({p_1^\prime},{p_2^\prime}){s_+}({p_1},{p_2}){s_-}({p_2^{\prime\prime}},{p_2})}{m^2}\Bigl[
{s_-}(q,{p_1^\prime}){s_+}({p_1^\prime},{\bar{p}_1}) + {s_-}(q,{p_2^\prime}){s_+}({p_2^\prime},{\bar{p}_1}) \Bigr] \nonumber \\
&\ \ &
+ \frac{{s_-}({p_1^{\prime\prime}},{p_2^{\prime\prime}}){s_+}({\bar{p}_2},{\bar{p}_1})}{m^2} \biggl( \nonumber \\ &\ \ &
{s_-}({p_2^\prime},q){s_+}({p_1},{p_2}){s_+}({\hat{q}},{p_1^{\prime\prime}}){s_-}({\bar{p}_2},{p_2}) +
{s_+}({\hat{q}},{p_1^\prime}){s_-}({p_1^\prime},{p_2^\prime}){s_-}(q,{\bar{p}_2}){s_+}({p_1^{\prime\prime}},{p_1}) \nonumber \\
&\ \ & + {s_-}({p_2^\prime},q){s_+}({p_1^{\prime\prime}},{p_1})\Bigl[{s_+}({\hat{q}},{p_1^\prime}){s_-}({p_1^\prime},{\bar{p}_2}) +
{s_+}({\hat{q}},{p_2^\prime}){s_-}({p_2^\prime},{\bar{p}_2}) - {s_+}({\hat{q}},q){s_-}(q,{\bar{p}_2}) \Bigr] \nonumber
\\ &\ \ &
+\frac{{s_+}({\hat{q}},{p_1^\prime}){s_-}({p_1^\prime},{p_2^\prime}){s_+}({p_1},{p_2}){s_-}({\bar{p}_2},{p_2})}{m^2}\Bigl[
{s_-}(q,{p_1^\prime}){s_+}({p_1^\prime},{p_1^{\prime\prime}}) + {s_-}(q,{p_2^\prime}){s_+}({p_2^\prime},{p_1^{\prime\prime}}) \Bigr] \biggr)
\Biggr\} \label{eq:dop} \\
& & \nonumber \\
\lefteqn{D_{1-}(p,s;\bar{p},\bar{s};p^\prime,s^\prime;p^{\prime\prime},
s^{\prime\prime}) = \frac{2\sqrt{2}e^3}{-2p^\prime\cdot q\
Q^2{s_-}(q,{\hat{q}})}\Biggl\{ } \nonumber \\ &\ \ &
{s_-}({p_2^\prime},{\hat{q}}){s_+}({p_1},{p_2}){s_+}(q,{\bar{p}_1}){s_-}({p_2^{\prime\prime}},{p_2}) +
{s_+}(q,{p_1^\prime}){s_-}({p_1^\prime},{p_2^\prime}){s_-}({\hat{q}},{p_2^{\prime\prime}}){s_+}({\bar{p}_1},{p_1}) \nonumber \\
&\ \ & + {s_-}({p_2^\prime},{\hat{q}}){s_+}({\bar{p}_1},{p_1})\Bigl[{s_+}(q,{p_1^\prime}){s_-}({p_1^\prime},{p_2^{\prime\prime}}) +
{s_+}(q,{p_2^\prime}){s_-}({p_2^\prime},{p_2^{\prime\prime}}) \Bigr] \nonumber \\
&\ \ & + \frac{{s_+}(q,{p_1^\prime}){s_-}({p_1^\prime},{p_2^\prime}){s_+}({p_1},{p_2}){s_-}({p_2^{\prime\prime}},{p_2})}{m^2}\Bigl[
{s_-}({\hat{q}},{p_1^\prime}){s_+}({p_1^\prime},{\bar{p}_1}) + {s_-}({\hat{q}},{p_2^\prime}){s_+}({p_2^\prime},{\bar{p}_1}) \nonumber \\
&\ \ & - {s_-}({\hat{q}},q){s_+}(q,{\bar{p}_1}) \Bigr]
+ \frac{{s_-}({p_1^{\prime\prime}},{p_2^{\prime\prime}}){s_+}({\bar{p}_2},{\bar{p}_1})}{m^2} \biggl( \nonumber \\ &\ \ &
{s_-}({p_2^\prime},{\hat{q}}){s_+}({p_1},{p_2}){s_+}(q,{p_1^{\prime\prime}}){s_-}({\bar{p}_2},{p_2}) +
{s_+}(q,{p_1^\prime}){s_-}({p_1^\prime},{p_2^\prime}){s_-}({\hat{q}},{\bar{p}_2}){s_+}({p_1^{\prime\prime}},{p_1}) \nonumber \\
&\ \ & + {s_-}({p_2^\prime},{\hat{q}}){s_+}({p_1^{\prime\prime}},{p_1})\Bigl[{s_+}(q,{p_1^\prime}){s_-}({p_1^\prime},{\bar{p}_2}) +
{s_+}(q,{p_2^\prime}){s_-}({p_2^\prime},{\bar{p}_2}) \Bigr] \nonumber \\
&\ \ & +\frac{{s_+}(q,{p_1^\prime}){s_-}({p_1^\prime},{p_2^\prime}){s_+}({p_1},{p_2}){s_-}({\bar{p}_2},{p_2})}{m^2}
\Bigl[ {s_-}({\hat{q}},{p_1^\prime}){s_+}({p_1^\prime},{p_1^{\prime\prime}}) \nonumber \\
&\ \ & + {s_-}({\hat{q}},{p_2^\prime}){s_+}({p_2^\prime},{p_1^{\prime\prime}}) - {s_-}({\hat{q}},q){s_+}(q,{p_1^{\prime\prime}}) \Bigr]
\biggr) \Biggr\} \label{eq:dom} \\
& & \nonumber \\
\lefteqn{D_{2+}(p,s;\bar{p},\bar{s};p^\prime,s^\prime;p^{\prime\prime},
s^{\prime\prime}) = \frac{2\sqrt{2}e^3}{-2p\cdot q\ Q^2{s_+}({\hat{q}},q)}
\Biggl\{ } \nonumber \\ &\ \ &
- {s_-}({p_1^\prime},{p_2^\prime}){s_+}({\hat{q}},{p_1}){s_+}({p_1^\prime},{\bar{p}_1}){s_-}({p_2^{\prime\prime}},q) +
{s_+}({p_1},{p_2}){s_-}(q,{p_2}){s_-}({p_2^\prime},{p_2^{\prime\prime}}){s_+}({\bar{p}_1},{\hat{q}}) \nonumber \\
&\ \ & - {s_+}({\hat{q}},{p_1}){s_-}({p_2^\prime},{p_2^{\prime\prime}})\Bigl[{s_+}({\bar{p}_1},{p_1}){s_-}({p_1},q) +
{s_+}({\bar{p}_1},{p_2}){s_-}({p_2},q) \Bigr] \nonumber \\
&\ \ & - \frac{{s_+}({p_1^\prime},{\bar{p}_1}){s_-}({p_1^\prime},{p_2^\prime}){s_+}({p_1},{p_2}){s_-}(q,{p_2})}{m^2}\Bigl[
{s_-}({p_2^{\prime\prime}},q){s_+}(q,{\hat{q}}) - {s_-}({p_2^{\prime\prime}},{p_1}){s_+}({p_1},{\hat{q}}) \nonumber \\
&\ \ & - {s_-}({p_2^{\prime\prime}},{p_2}){s_+}({p_2},{\hat{q}}) \Bigr]
+ \frac{{s_-}({p_1^{\prime\prime}},{p_2^{\prime\prime}}){s_+}({\bar{p}_2},{\bar{p}_1})}{m^2} \biggl( \nonumber \\
&\ \ & - {s_-}({p_1^\prime},{p_2^\prime}){s_+}({\hat{q}},{p_1}){s_+}({p_1^\prime},{p_1^{\prime\prime}}){s_-}({\bar{p}_2},q) +
{s_+}({p_1},{p_2}){s_-}(q,{p_2}){s_-}({p_2^\prime},{\bar{p}_2}){s_+}({p_1^{\prime\prime}},{\hat{q}}) \nonumber \\
&\ \ & - {s_-}({p_2^\prime},{\bar{p}_2}){s_+}({\hat{q}},{p_1})\Bigl[{s_+}({p_1^{\prime\prime}},{p_1}){s_-}({p_1},q) +
{s_+}({p_1^{\prime\prime}},{p_2}){s_-}({p_2},q) \Bigr] \nonumber \\
&\ \ & -\frac{{s_+}({p_1^\prime},{p_1^{\prime\prime}}){s_-}({p_1^\prime},{p_2^\prime}){s_+}({p_1},{p_2}){s_-}(q,{p_2})}{m^2}
\Bigl[ {s_-}({\bar{p}_2},q){s_+}(q,{\hat{q}}) \nonumber \\
&\ \ & - {s_-}({\bar{p}_2},{p_1}){s_+}({p_1},{\hat{q}}) - {s_-}({\bar{p}_2},{p_2}){s_+}({p_2},{\hat{q}}) \Bigr]
\biggr) \Biggr\} \label{eq:dtp} \\
& & \nonumber \\
\lefteqn{D_{2-}(p,s;\bar{p},\bar{s};p^\prime,s^\prime;p^{\prime\prime},
s^{\prime\prime}) = \frac{2\sqrt{2}e^3}{-2p\cdot q\ Q^2{s_-}(q,{\hat{q}})}
\Biggl\{ } \nonumber \\ &\ \ &
- {s_-}({p_1^\prime},{p_2^\prime}){s_+}(q,{p_1}){s_+}({p_1^\prime},{\bar{p}_1}){s_-}({p_2^{\prime\prime}},{\hat{q}}) +
{s_+}({p_1},{p_2}){s_-}({\hat{q}},{p_2}){s_-}({p_2^\prime},{p_2^{\prime\prime}}){s_+}({\bar{p}_1},q) \nonumber \\
&\ \ & + {s_+}({\hat{q}},{p_1}){s_-}({p_2^\prime},{p_2^{\prime\prime}})\Bigl[{s_+}({\bar{p}_1},q){s_-}(q,{\hat{q}}) -
{s_+}({\bar{p}_1},{p_1}){s_-}({p_1},{\hat{q}}) - {s_+}({\bar{p}_1},{p_2}){s_-}({p_2},{\hat{q}})\Bigr] \nonumber
\\ &\ \ &
+ \frac{{s_+}({p_1^\prime},{\bar{p}_1}){s_-}({p_1^\prime},{p_2^\prime}){s_+}({p_1},{p_2}){s_-}({\hat{q}},{p_2})}{m^2}\Bigl[
{s_-}({p_2^{\prime\prime}},{p_1}){s_+}({p_1},q) + {s_-}({p_2^{\prime\prime}},{p_2}){s_+}({p_2},q) \Bigr]
\nonumber \\ &\ \ &
+ \frac{{s_-}({p_1^{\prime\prime}},{p_2^{\prime\prime}}){s_+}({\bar{p}_2},{\bar{p}_1})}{m^2} \biggl( \nonumber \\
&\ \ & - {s_-}({p_1^\prime},{p_2^\prime}){s_+}(q,{p_1}){s_+}({p_1^\prime},{p_1^{\prime\prime}}){s_-}({\bar{p}_2},{\hat{q}}) +
{s_+}({p_1},{p_2}){s_-}({\hat{q}},{p_2}){s_-}({p_2^\prime},{\bar{p}_2}){s_+}({p_1^{\prime\prime}},q) \nonumber \\
&\ \ & + {s_-}({p_2^\prime},{\bar{p}_2}){s_+}(q,{p_1})\Bigl[{s_+}({p_1^{\prime\prime}},q){s_-}(q,{\hat{q}}) -
{s_+}({p_1^{\prime\prime}},{p_1}){s_-}({p_1},{\hat{q}})- {s_+}({p_1^{\prime\prime}},{p_2}){s_-}({p_2},{\hat{q}}) \Bigr]
\nonumber \\
&\ \ & +\frac{{s_+}({p_1^\prime},{p_1^{\prime\prime}}){s_-}({p_1^\prime},{p_2^\prime}){s_+}({p_1},{p_2}){s_-}({\hat{q}},{p_2})}
{m^2} \Bigl[ {s_-}({\bar{p}_2},{p_1}){s_+}({p_1},q) + {s_-}({\bar{p}_2},{p_2}){s_+}({p_2},q) \Bigr]
\biggr) \Biggr\}
\label{eq:dtm}\end{aligned}$$
Operational Details {#sec:opdet}
===================
The Fortran code COMRAD [@ref:fortran] consists of a main program COMRAD that controls the three weighted Monte Carlo generators: COMTN2, COMEGG, and COMEEE. These perform integrations of the cross sections for the $e^-\gamma$, $e^-\gamma\gamma$, and $e^-e^+e^-$ final states, respectively. All three generators use a common set of conventions and input parameters, and a common interface routine called WGTHST. The routine WGTHST permits the user to accumulate event weights in a manner appropriate to his/her needs. Note that all quantities discussed in this section are assumed to be of type REAL\*8 unless otherwise specified.
The Program COMRAD
------------------
The program COMRAD initializes all quantities and sequentially calls each of the event generators. Communication with the generators occurs through the /CONTROL/ common block which is specified within COMRAD. This common block contains the variables: EB, EPHOT, XME, XMG, KGMIN, ALPHA, PI, ROOT2, BARN, SPIN(3), LDIAG, LBF, and NTRY.
The variables EB and EPHOT specify the laboratory-frame beam energies to be used in the calculation. It is assumed that the incident electron is moving in the $+$z-direction with an energy EB GeV (${\rm EB}\geq m$). The incident photon is assumed to be moving in the $-$z-direction with an energy EPHOT GeV. The spin of the initial-state electron is specified in its rest frame by the three-vector SPIN(3). The maximum energy of additional soft photons, which is also the minimum energy of hard photons, is specified in the electron rest frame ($k_\gamma^{min}$) by the variable KGMIN. The integer variable NTRY sets the number of trials for each of the event generators (COMTN2 and COMEGG generate NTRY trials whereas COMEEE produces smaller event weights and generates NTRY/20 trials). The logical flags LDIAG and LBF activate the calculation of the Tsai-DeRaad-Milton and Brown-Feynman expressions for the corrections to the unpolarized cross section in COMTN2 (for diagnostic purposes). The common block /CONTROL/ also contains several constants used by the generators.
The Generator COMTN2
--------------------
The subroutine COMTN2 simulates the two-body $e^-\gamma$ final states. The calculation is carried out in the center-of-mass frame using Eq. \[eq:jdef\] and Eqs. \[eq:wgti\]-\[eq:wgtiv\] to calculate the event weights $W_1$-$W_4$. The density function $\rho^{(2)}(\Omega_e^\prime)$ is chosen to be uniform in the polar variables $\cos\theta_e^\prime$ and $\phi_e^\prime$, $$\rho^{(2)}(\Omega_e^\prime)=\frac{N_{trial}}{4\pi},$$ where $N_{trial}$ is the number of event trials.
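The corresponding weighted Monte Carlo integral is simple to sketch. The Python fragment below (illustrative only; COMRAD itself is Fortran) draws trials uniformly in $\cos\theta_e^\prime$ and $\phi_e^\prime$, so each trial carries a phase-space weight of $4\pi/N_{trial}$ times the integrand:

```python
import math, random

def mc_solid_angle_integral(dsigma, n_trials, seed=1):
    """Weighted-MC estimate of the integral of dsigma/dOmega over the
    sphere, with trials uniform in cos(theta) and phi (density N/4pi,
    so each weight is dsigma * 4pi / N)."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_trials):
        cos_t = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        total += dsigma(cos_t, phi) * 4.0 * math.pi / n_trials
    return total
```

With a constant integrand the weights are exact and the estimator returns the integral with no Monte Carlo variance, which is a convenient check of the normalization.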
The Generator COMEGG
--------------------
The subroutine COMEGG simulates the three-body $e^-\gamma\gamma$ final states. The calculation is carried out in the center-of-mass frame using Eq. \[eq:eggxs\] and Eqs. \[eq:wgtiii\] and \[eq:wgtiv\] to calculate the event weights $W_3$ and $W_4$ ($W_1$ and $W_2$ are always returned as 0). The five quantities $E_e^\prime$, $\cos\theta_e^\prime$, $\phi_e^\prime$, $E_\gamma^\prime$, and $\phi_\gamma^\prime$ are chosen according to the density function $\rho^{(5)}$ as follows, $$\rho^{(5)}(E_e^\prime,\Omega_e^\prime,E_\gamma^\prime,\phi_\gamma^\prime) =
\frac{N_{trial}}{4\pi\cdot2\pi}\cdot\frac{C_e}{E_e^{max}+E_\gamma^{min}
-E_e^\prime}\cdot C_\gamma\left[\frac{1}{E_\gamma^\prime}
+\frac{1}{E_\gamma^{max}+E_\gamma^{min}-E_\gamma^\prime}\right],$$ where: $N_{trial}$ is the number of generated trials (some are later discarded); $E_e^{max}$ and $E_\gamma^{max}$ are the maximum electron and photon energies in the cm-frame; $E_\gamma^{min}$ is the minimum photon energy in the cm-frame; and $C_e$ and $C_\gamma$ are normalization constants given as follows, $$\begin{aligned}
C_e & = & \frac{1}{\ln\left[(E_e^{max}+E_\gamma^{min}-m)
/E_\gamma^{min}\right]} \\
C_\gamma & = & \frac{1}{2\ln\left[E_\gamma^{max}/E_\gamma^{min}\right]}. \end{aligned}$$ The minimum energy in the cm-frame is related to the minimum photon energy in the initial-state electron rest frame as follows, $$E_\gamma^{min} = \frac{m}{E_{cm}}k_\gamma^{min},$$ where $E_{cm}$ is the total center-of-mass energy. Note that a photon emitted in the cm-frame in the $-z$ direction with energy $E_\gamma^{min}$ has an energy $k_\gamma^{min}$ in the electron rest frame. If emitted in any other direction, it has a smaller energy in the electron rest frame.
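The photon-energy factor of this density can be sampled exactly by inverse transform if it is viewed as an equal mixture of its two branches: each of $1/E_\gamma^\prime$ and $1/(E_\gamma^{max}+E_\gamma^{min}-E_\gamma^\prime)$ integrates to $\ln(E_\gamma^{max}/E_\gamma^{min})$, and the second branch is the mirror of the first under $E\to E_\gamma^{max}+E_\gamma^{min}-E$. A Python sketch of this sampling step (not the COMRAD implementation, which is Fortran):

```python
import random

def sample_photon_energy(e_min, e_max, rng=random.random):
    """Draw E_gamma' from a density proportional to
    1/E + 1/(e_max + e_min - E) on [e_min, e_max], as an equal
    mixture of two log-uniform branches (inverse CDF on each)."""
    s = e_max + e_min
    u = rng()
    x = e_min * (e_max / e_min) ** rng()   # log-uniform on [e_min, e_max]
    return x if u < 0.5 else s - x
```

By symmetry the mixture has mean exactly $(E_\gamma^{max}+E_\gamma^{min})/2$, which gives a quick statistical check of the sampler.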
After all five variables have been chosen, the electron and photon energies are checked for consistency with three-body kinematics (the angle $\theta_{e\gamma}$ between the electron and photon directions must satisfy the condition $|\cos\theta_{e\gamma}|\leq 1$). If this condition is not satisfied, the trial is discarded. If it is satisfied, the four-vectors $p^\prime$, $q^\prime$, and $q^{\prime\prime}$ are generated. The photon energies in the initial-state electron rest frame are then calculated and if either is found to be less than $k_\gamma^{min}$, the trial is discarded. The kinematical boundary of the integration is therefore exactly the same as the one that defines the upper limit of the soft-photon integration (and the function $J$). The integrated region is thus the complement of the soft-photon region and the sum of the cross sections returned by COMTN2 and COMEGG is independent of $k_\gamma^{min}$. The event generation procedure retains approximately 60% of the generated trials over a wide range of incident electron and photon energies.
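The consistency condition follows from momentum conservation in the cm-frame: with $q''=P-p'-q'$ and vanishing total 3-momentum, squaring gives $E_\gamma^{\prime\prime 2}=|\vec p_e^{\,\prime}|^2+E_\gamma^{\prime 2}+2|\vec p_e^{\,\prime}|E_\gamma^\prime\cos\theta_{e\gamma}$, which fixes $\cos\theta_{e\gamma}$ from the sampled energies. A small Python sketch of the resulting acceptance test (illustrative, not the COMRAD source):

```python
import math

def cos_theta_e_gamma(e_cm, e_e, e_gamma, m):
    """cos of the e-gamma opening angle forced by 3-body kinematics:
    the second photon takes e_g2 = e_cm - e_e - e_gamma, and
    e_g2^2 = |p_e|^2 + e_gamma^2 + 2*|p_e|*e_gamma*cos(theta)."""
    e_g2 = e_cm - e_e - e_gamma
    p_e = math.sqrt(e_e * e_e - m * m)   # electron cm momentum
    return (e_g2 * e_g2 - p_e * p_e - e_gamma * e_gamma) / (2.0 * p_e * e_gamma)

def trial_is_physical(e_cm, e_e, e_gamma, m):
    """Keep the trial only if the required angle actually exists."""
    return abs(cos_theta_e_gamma(e_cm, e_e, e_gamma, m)) <= 1.0
```

One can verify the formula against an explicitly constructed momentum configuration, where the angle is also available from the dot product of the two momenta.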
The Generator COMEEE
--------------------
The subroutine COMEEE simulates the three-body $e^-e^+e^-$ final states. The calculation is carried out in the center-of-mass frame using Eq. \[eq:eeexs\] and Eqs. \[eq:wgtiii\] and \[eq:wgtiv\] to calculate the event weights $W_3$ and $W_4$ ($W_1$ and $W_2$ are always returned as 0). The five quantities $E_e^\prime$, $\cos\theta_e^\prime$, $\phi_e^\prime$, $E_e^{\prime\prime}$, and $\phi_e^{\prime\prime}$ are chosen according to the density function $\rho^{(5)}$ as follows, $$\rho^{(5)}(E_e^\prime,\Omega_e^\prime,E_e^{\prime\prime},\phi_e^{\prime\prime})
= \frac{N_{trial}}{4\pi\cdot2\pi\cdot (E_e^{max}-m)^2},$$ where $N_{trial}$ is the number of generated trials (some are later discarded) and $E_e^{max}$ is the maximum electron energy in the cm-frame. Note that the use of uniform phase space works well near threshold (the polarimetry case) but is inadequate at very high energies.
After all five variables have been chosen, the electron energies are checked for consistency with three-body kinematics (the angle $\theta_{e^\prime e^{\prime\prime}}$ between the electron directions must satisfy the condition $|\cos\theta_{e^\prime e^{\prime\prime}}|\leq 1$). If this condition is not satisfied, the trial is discarded. This procedure retains approximately 70% of the generated trials near the $e^-e^+e^-$ threshold.
The Interface Routine WGTHST
----------------------------
The routine WGTHST allows the user to accumulate whatever information is needed for a particular analysis. The routine is called by the main program once, before any event generation, to permit initialization. It is called by each of the event generators (COMTN2, COMEGG, and COMEEE) at the end of each event trial. Finally, it is called by the main program after return from the last generator to permit the accumulated information to be output.
All communication with the routine occurs through the argument list,
SUBROUTINE WGTHST(IFLAG,NEM,PP,NGAM,QP,NEP,PB,WGT),
where: IFLAG is an integer flag which indicates the initialization call (0), an accumulation call (1), or the output call (2); NEM is an integer which indicates the number of electrons in the final state (1 or 2); PP(4,2) contains the four-vectors of the NEM electrons in the laboratory frame; NGAM is an integer which indicates the number of photons in the final state (0-2); QP(4,2) contains the four-vectors of the NGAM photons in the laboratory frame; NEP is an integer which indicates the number of positrons in the final state (0 or 1); PB(4) is the four-vector of the positron; and WGT(4) contains the four weights $W_1$-$W_4$ defined in Eqs. \[eq:wgti\]-\[eq:wgtiv\]. Note that the event weights have been defined such that correctly normalized total cross sections are obtained by summing the weights exactly once per call to WGTHST. The calculation of final-state particle yields therefore requires that the weights be accumulated each time the given type of particle is encountered.
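The distinction between "once per call" (cross sections) and "once per particle" (yields) is easy to get wrong. A Python sketch of a WGTHST-style accumulator illustrating the two bookkeeping rules (class and method names are ours, not part of COMRAD):

```python
class WeightAccumulator:
    """Mimics the IFLAG=0 (init), IFLAG=1 (accumulate), IFLAG=2 (report)
    call sequence of a WGTHST-style routine."""

    def __init__(self):                      # IFLAG = 0
        self.sigma = [0.0, 0.0, 0.0, 0.0]    # running totals of W1..W4
        self.photon_yield = 0.0              # W1-weighted photon count

    def accumulate(self, electrons, photons, positrons, wgt):  # IFLAG = 1
        # Cross sections: each weight is summed exactly once per call.
        for k in range(4):
            self.sigma[k] += wgt[k]
        # Yields: the weight is added once per particle of the given type.
        self.photon_yield += wgt[0] * len(photons)

    def report(self):                        # IFLAG = 2
        return self.sigma, self.photon_yield
```

Two events with weights $W_1=2$ (one photon) and $W_1=3$ (two photons) thus give a cross-section total of 5 but a photon yield of $2\cdot1+3\cdot2=8$.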
L.M. Brown and R.P. Feynman, [*Phys. Rev.*]{} [**85**]{}, 231 (1952).

W.-Y. Tsai, L.L. DeRaad, and K.A. Milton, [*Phys. Rev.*]{} [**D6**]{}, 1428 (1972); K.A. Milton, W.-Y. Tsai, and L.L. DeRaad, [*Phys. Rev.*]{} [**D6**]{}, 1411 (1972).

A. Góngora-T and R.G. Stuart, [*Z. Phys.*]{} [**C42**]{}, 617 (1989).

H. Veltman, [*Phys. Rev.*]{} [**D40**]{}, 2810 (1989); Erratum: [*Phys. Rev.*]{} [**D42**]{}, 1856 (1990).

K. Abe [[*et al.*]{}]{}, [*Phys. Rev. Lett.*]{} [**70**]{}, 2515 (1993); K. Abe [[*et al.*]{}]{}, [*Phys. Rev. Lett.*]{} [**73**]{}, 25 (1994); K. Abe [[*et al.*]{}]{}, [*Phys. Rev. Lett.*]{} [**78**]{}, 2075 (1997); R.E. Frey, OREXP-97-03, October 1997, hep-ex/9710016.

J. Timmermans, [*Proceedings of the 18$^{th}$ International Symposium on Lepton-Photon Interactions*]{}, 28 July - 1 August 1997, Hamburg, Germany.

R. Kleiss and W.J. Stirling, [*Nucl. Phys.*]{} [**B262**]{}, 235 (1985); R. Kleiss and W.J. Stirling, [*Phys. Lett.*]{} [**B179**]{}, 159 (1986); R. Kleiss, [*Z. Phys.*]{} [**C33**]{}, 433 (1987).

This expression differs by a factor of $2m$ from Eq. 1 of Ref. [@ref:tdrm] because the spinors in that work are normalized to unity.

The azimuth $\phi^\prime$ is defined in any frame which has been rotated such that the electron direction defines the $+z$-axis.

W.A. Bardeen and W.-K. Tung, [*Phys. Rev.*]{} [**173**]{}, 1423 (1968); Erratum: [*Phys. Rev.*]{} [**D4**]{}, 3229 (1971).

D.P. Barber [[*et al.*]{}]{}, [*Nucl. Inst. Meth.*]{} [**A338**]{}, 166 (1994); D.P. Barber [[*et al.*]{}]{}, [*Phys. Lett.*]{} [**B343**]{}, 436 (1995).

D.P. Barber [[*et al.*]{}]{}, [*Nucl. Inst. Meth.*]{} [**A329**]{}, 79 (1993).

A. Most, [*Proceedings of the 12$^{th}$ International Symposium on High-Energy Spin Physics*]{}, Amsterdam, The Netherlands (1996), p. 800.

The code COMRAD generally conforms to the Fortran-77 specification, except that it performs COMPLEX\*16 operations, which are implemented in many compilers.
The code has been successfully compiled and run with the IBM-AIX, SUN-Solaris, and DEC-OpenVMS operating systems.
Channel $E_e$ Acceptance ${{\cal A}}^{(0)}$ $({{\cal A}}-{{\cal A}}^{(0)})/{{\cal A}}^{(0)}$ (%)
--------- ------------------ -------------------- ------------------------------------------------------
7 17.14-18.02 GeV 0.7133 0.096
6 18.02-19.00 GeV 0.6483 0.097
5 19.00-20.11 GeV 0.5520 0.103
4 20.11-21.38 GeV 0.4309 0.118
3 21.38-22.83 GeV 0.2851 0.153
2 22.83-24.53 GeV 0.1228 0.285
1         24.53-26.51 GeV   $-$0.0396            $-$0.673
: The effect of order$-\alpha^3$ radiative corrections upon the analyzing powers of the SLD Compton polarimeter.[]{data-label="tab:one"}
---
abstract: 'We introduce the concept of a $\mu$-scale invariant operator with respect to a unitary transformation in a separable complex Hilbert space. We show that if a nonnegative densely defined symmetric operator is $\mu$-scale invariant for some $\mu>0$, then both the Friedrichs and the Krein-von Neumann extensions of this operator are also $\mu$-scale invariant.'
address:
- ' Department of Mathematics, University of Missouri, Columbia, MO 65211, USA'
- ' Department of Mathematics, Niagara University, NY 14109, USA'
author:
- 'K. A. Makarov'
- 'E. Tsekanovskii'
title: 'On $\mu$-scale invariant operators'
---
Introduction
============
Given a unitary operator $U$ in a separable complex Hilbert space ${{\mathcal H}}$ and a (complex) number $\mu\in {\mathbb C}\setminus \{0\}$, we introduce the concept of a $\mu$-scale invariant operator $T$ (with respect to the transformation $U$) as a (bounded) “solution” of the following equation $$\label{bukv}
UTU^*=\mu T.$$ Note that in this case $U$ and $T$ commute up to a factor, that is, $$\label{bukk}
UT=\mu TU,$$ and then necessarily $|\mu|=1$ (see [@BBP]), provided that $T$ is a bounded operator and $${\mathrm{spec}}(UT)\ne \{0\}.$$
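For the reader's convenience, here is a sketch of the standard spectral argument behind the claim that $|\mu|=1$ (our paraphrase of the bounded-case proof, not a new statement):

```latex
% From UT = \mu TU and spec(AB) \cup {0} = spec(BA) \cup {0}:
\operatorname{spec}(UT)\cup\{0\}
  = \operatorname{spec}(TU)\cup\{0\}
  = \mu^{-1}\operatorname{spec}(UT)\cup\{0\},
% so spec(UT) \ {0} is invariant under multiplication by \mu.
% If |\mu| > 1, the orbit \mu^n \lambda of any nonzero spectral point
% is unbounded, contradicting the boundedness of UT; if |\mu| < 1,
% apply the same argument to \mu^{-1}.
% Hence |\mu| = 1 whenever spec(UT) \neq {0}.
```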
The search for pairs of unitaries $U$ and $T$ satisfying the canonical (Heisenberg) commutation relations with $|\mu|=1$ leads to realizations of the rotation algebra, the $C^*$-algebra generated by the monomials $T^mU^n$, $m,n\in {\mathbb Z}$ (see, e.g., [@Sei]). The irreducible representations of this algebra play a crucial role in the study of Hofstadter-type models. For instance, the Hofstadter Hamiltonian $H=T+T^*+U+U^*$ typically has a fractal spectrum that is rather sensitive to the algebraic properties of the “magnetic flux” $\theta$, $\mu=e^{i\theta}$, which is captured in the beauty of the famous Hofstadter butterfly (see [@Sei] and references therein). We also note that self-adjoint realizations $U$ and $T$ of the commutation relations above for $|\mu|=1$ are obtained in [@BBP], while the case of contractive (not necessarily self-adjoint) solutions $T$, with unitary $U$, has been discussed in [@PT].
To incorporate the case of $|\mu|\ne 1$, where unbounded solutions are of necessity considered, we extend the concept of $\mu$-scale invariance to unbounded operators $T$ by the requirement that ${\mathrm{Dom}}(T)$ is invariant, that is, $$\label{dom}
U^*{\mathrm{Dom}}(T)\subseteq{\mathrm{Dom}}(T),$$ and $$\label{self}
UTU^*f=\mu Tf \quad \text{ for all } f\in {\mathrm{Dom}}(T).$$
In this short Note we restrict ourselves to the case $\mu>0$ and focus on the study of symmetric as well as self-adjoint unbounded solutions $T$ of the two relations above. Our main result (see Theorem \[main\]) states that if a densely defined nonnegative (symmetric) operator $T$ is $\mu$-scale invariant with respect to a unitary transformation $U$, then the two classical extremal nonnegative self-adjoint extensions, the Friedrichs and the Krein-von Neumann extensions, are $\mu$-scale invariant as well.
The paper is organized as follows: In Section 2, based on a result by Ando and Nishio [@AN], we provide the proof of Theorem \[main\]. Section 3 is devoted to further generalizations and a discussion of the $\mu$-scale invariance concept from the standpoint of group representation theory.
Main result
===========
Recall that if $\dot A$ is a densely defined (closed) nonnegative operator, then the set of all nonnegative self-adjoint extensions of $\dot A$ has the minimal element $A_K$, the Krein-von Neumann extension (different authors refer to the minimal extension $A_K$ by using different names, see, e.g., [@AS], [@AN], [@AT], [@Bir]), and the maximal one $A_F$, the Friedrichs extension. This means, in particular, that for any nonnegative self-adjoint extension $\tilde A$ of $\dot A$ the following operator inequality holds [@Kr1] $$(A_F+\lambda I)^{-1}\le (\tilde A+\lambda I)^{-1}\le (A_K+\lambda I)^{-1}, \quad
\text{ for all } \lambda >0.$$
The following result characterizes the Friedrichs and the Krein-von Neumann extensions in a form convenient for our considerations.
\[ando\] Let $\dot A$ be a (closed) densely defined nonnegative symmetric operator. Denote by $\bf a$ the closure[^1] of the quadratic form $$\label{kvf}
{\dot {\bf a}}[f]=(\dot A
f, f), \quad {\mathrm{Dom}}[{\dot {\bf a}}]={\mathrm{Dom}}(\dot A).$$ Then,
- the Friedrichs extension $A_F$ of $\dot A$ coincides with the restriction of the adjoint operator $\dot A^*$ on the domain $${\mathrm{Dom}}(A_F)={\mathrm{Dom}}(\dot A^*)\cap {\mathrm{Dom}}[{\bf a}];$$
- the Krein-von Neumann extension $A_K$ of $\dot A$ coincides with the restriction of the adjoint operator $\dot A^*$ on the domain $
{\mathrm{Dom}}(A_K)
$ which consists of the set of elements $f$ for which there exists a sequence $\{f_n\}_{n\in {\mathbb N}}$, $f_n\in {\mathrm{Dom}}(\dot A)$, such that $$\lim_{n,m\to \infty}{\bf a}[f_n-f_m]=0
\quad \text{and}\quad
\lim_{n\to \infty}\dot Af_n=\dot A^*f.$$
We now state the main result of this Note:
\[main\] Assume that $\mu>0$ and that a densely defined (closed) nonnegative symmetric operator $\dot A$ is $\mu$-scale invariant with respect to a unitary transformation $U$; that is, $$U^*{\mathrm{Dom}}(\dot A)\subseteq {\mathrm{Dom}}(\dot A)$$ and $$U\dot A U^*=\mu \dot A\quad \text{on } {\mathrm{Dom}}(\dot A).$$
Then
- the adjoint operator $\dot A^*$,
- the Friedrichs extension $A_F$ of $\dot A$, and
- the Krein-von Neumann extension $A_K$ of $\dot A$
are $\mu$-scale invariant with respect to the unitary transformation $U$.
Clearly, it is sufficient to prove (i) and then to show that the domains of both the Friedrichs and the Krein-von Neumann extensions are invariant with respect to the operator $U^*$.
(i). Given $f\in {\mathrm{Dom}}(\dot A)$ and $h\in {\mathrm{Dom}}(\dot A^*)$, one obtains $$\begin{aligned}
(\dot Af, U^*h)&=(U\dot Af, h)=(U\dot AU^*Uf, h)
{\nonumber}\\&=(\mu\dot AUf,h)=(Uf,\mu\dot A^*h)=(f,U^*\mu \dot A^*h),\end{aligned}$$ thereby proving the inclusion $U^*{\mathrm{Dom}}(\dot A^*) \subseteq {\mathrm{Dom}}(
\dot A^*)$ as well as the equality $$\label{adj}
\dot A^*U^*h=\mu U^*\dot A^*h,\quad h\in{\mathrm{Dom}}(\dot A^*).$$ The proof of (i) is complete.
(ii). First we show that the domain of the closure of the quadratic form is invariant with respect to the operator $U^*$.
Recall that $f\in {\mathrm{Dom}}[{\bf a}]$ if and only if there exists a sequence $\{f_n\}_{n\in {\mathbb N}}$, $f_n\in {\mathrm{Dom}}(\dot A)$, such that $$\lim_{n,m\to \infty} \dot{\bf a}[f_n-f_m]=0 \quad \text{and}\quad
\lim_{n\to \infty} f_n= f.$$ Take an $f\in {\mathrm{Dom}}[{\bf a}]$ and a sequence $\{f_n\}_{n\in {\mathbb N}}$ satisfying the properties above. Clearly $$\label{uf}
\lim_{n\to \infty} U^*f_n= U^*f,$$ with $U^*f_n\in {\mathrm{Dom}}(\dot A)$. Moreover, $$\begin{aligned}
{\bf a}[U^*f_n-U^*f_m]&=(\dot AU^*(f_n-f_m),U^*(f_n-f_m))
=
(U\dot AU^*(f_n-f_m),(f_n-f_m))
\\ &=\mu(\dot A(f_n-f_m),(f_n-f_m))
= \mu{\bf a}[f_n-f_m].
\end{aligned}$$ Since $
\lim_{n,m\to \infty}{\bf a}[f_n-f_m]=0
$, one proves that $$\lim_{n,m\to \infty}{\bf a}[U^*f_n-U^*f_m]=0,$$ which together with the convergence $U^*f_n\to U^*f$ implies that $U^*f\in {\mathrm{Dom}}[{\bf a}]$. Hence, we have proven the inclusion $$\label{incl}
U^*{\mathrm{Dom}}[{\bf a}]\subseteq {\mathrm{Dom}}[{\bf a}].$$
Next, by (i) the domain ${\mathrm{Dom}}(\dot A^*)$ is invariant with respect to $U^*$. Combined with the inclusion $U^*{\mathrm{Dom}}[{\bf a}]\subseteq {\mathrm{Dom}}[{\bf a}]$ and Theorem \[ando\] (i), this proves that the domain of the Friedrichs extension $A_F$ of $\dot A$ is invariant with respect to the operator $U^*$. Therefore, $A_F$ is $\mu$-scale invariant as a restriction of the $\mu$-scale invariant operator $\dot A^*$ onto a $U^*$-invariant domain.
(iii). Analogously, in order to show that the Krein-von Neumann extension $A_K$ is $\mu$-scale invariant with respect to the transformation $U$, it is sufficient to show that its domain is invariant with respect to $U^*$.
Take $f\in {\mathrm{Dom}}(A_K)$. By Theorem \[ando\] (ii) there exists an ${\bf a}$-Cauchy sequence[^2] $\{f_n\}_{n\in {\mathbb N}}$, $f_n\in {\mathrm{Dom}}(\dot A)$, such that $$\label{cauchy}
\lim_{n\to \infty}\dot A f_n=\dot A^*f.$$ From the relation established in (i) it follows that $$\label{figi}
\dot A U^*f_n=\dot A^*U^*f_n=\mu U^*\dot A^* f_n=\mu U^*\dot Af_n\quad \text{and}
\quad \dot A^*U^*f=\mu
U^*\dot A^*f.$$ Combining these relations, for the ${\bf a}$-Cauchy sequence $\{U^*f_n\}_{n\in {\mathbb N}}$ one gets $$\lim_{n\to \infty}\dot A U^*f_n =
\mu
U^*\dot A^*f
=\dot A^*U^*f,$$ proving that $U^*f\in {\mathrm{Dom}}(A_K)$ by Theorem \[ando\] (ii). Thus, ${\mathrm{Dom}}(A_K)$ is $U^*$-invariant and, therefore, the Krein-von Neumann extension $A_K$ is $\mu$-scale invariant as a restriction of the $\mu$-scale invariant operator $\dot A^*$ onto a $U^*$-invariant domain.
We remark that the concept of $\mu$-scale invariant operators can immediately be extended to the case of linear relations: we say that a linear relation $S$ is $\mu$-scale invariant with respect to the unitary transformation $U$ if its domain is $U^*$-invariant and $(f,g)\in S$ implies $(U^*f,\mu U^*g)\in S$.
Recall that the Friedrichs extension $S_F$ of a semi-bounded from below relation $S$ is defined as the restriction of $S^*$ onto the domain of the closure of the quadratic form associated with the operator part of $S$ [@cod] and the Krein-von Neumann extension $S_K$ is defined by $$\label{lrlr}
S_K=\left(( S^{-1})_F \right)^{-1},$$ provided that $S$ is, in addition, nonnegative [@CS] (no care need be taken about the inverses, for as linear relations they always exist).
Assume that a nonnegative linear relation $S$ is $\mu$-scale invariant. Almost literally repeating the arguments of the proof of Theorem \[main\] (i), one concludes that the adjoint relation $S^*$ is also $\mu$-scale invariant. Given the above characterization of the Friedrichs extension of a semi-bounded relation, applying Theorem \[main\] (ii) proves the $\mu$-scale invariance of $S_F$. Since $S$ is $\mu$-scale invariant if and only if the inverse relation $S^{-1}$ is $\mu^{-1}$-scale invariant, the definition of $S_K$ above ensures that the Krein-von Neumann extension $S_K$ of $S$ is also $\mu$-scale invariant. Thus, Part (iii) of Theorem \[main\] is a direct consequence of Parts (i) and (ii) up to the representation theorem that states that the Krein-von Neumann extension $A_K$ of a nonnegative densely defined symmetric operator $\dot A$ can be “evaluated” as $$\label{krkr}
A_K=\left(( \dot A^{-1})_F \right)^{-1},$$ with $\dot A^{-1}$ understood as a linear relation (for the proof we refer to [@CS]; see also [@AN] and [@AT]).
\[vottak\] We note without proof that if the symmetric nonnegative operator $\dot A$ referred to in Theorem \[main\] has deficiency indices $(1,1)$, then the Friedrichs and the Krein-von Neumann extensions of $\dot A$ are the only $\mu$-scale invariant self-adjoint extensions.
The following simple example illustrates the statement of Theorem \[main\].
Assume that $\mu>0$, $\mu\ne1$, and that $U$ is the unitary scaling transformation on the Hilbert space ${{\mathcal H}}=L^2(0,\infty)$ defined by $$(Uf)(x)= \mu^{-\frac{1}{4}}f(\mu^{-\frac{1}{2}}x), \quad f\in L^2(0,\infty).$$ Let $T$ be the maximal operator on the Sobolev space $H^{2,2}(0,\infty)$ defined by $$T=-\frac{d^2}{dx^2},\qquad {\mathrm{Dom}}(T)=H^{2,2}(0,\infty).$$ Let $A_F$ and $A_K$ be the restrictions of $T$ onto the domains $${\mathrm{Dom}}(A_F)=\{f\in {\mathrm{Dom}}(T)\,|\,f(0)=0\}$$ and $${\mathrm{Dom}}(A_K)=\{f\in {\mathrm{Dom}}(T)\,|\,f'(0)=0\},$$ respectively. Denote by $\dot A$ the restriction of $T$ onto the domain $${\mathrm{Dom}}(\dot A)={\mathrm{Dom}}(A_F)\cap {\mathrm{Dom}}(A_K).$$ It is well known that $\dot A$ is a closed nonnegative symmetric operator with deficiency indices $(1,1)$, that $A_F$ and $A_K$ are the Friedrichs and the Krein-von Neumann extensions of $\dot A$, respectively, and that $T=\dot A^*$. A straightforward computation shows that all the operators $\dot A$, $A_F$, $A_K$ and $T$ are $\mu$-scale invariant with respect to the transformation $U$. Moreover, note that any nonnegative self-adjoint extension of $\dot A$ different from the extremal ones, $A_F$ and $A_K$, can be obtained as the restriction of $T$ onto a domain of the form (see, e.g., [@Nai], also see [@DM] and [@DMF]) $${\mathrm{Dom}}(\tilde A_s)=\{f\in {\mathrm{Dom}}(T)\,|\,f'(0)=sf(0)\}, \quad\text{ for some } s>0,$$ which is obviously not $U^*$-invariant. Thus, the operator $\dot A$ admits only two $\mu$-scale invariant extensions, the Friedrichs and the Krein-von Neumann extensions (cf. Remark \[vottak\]).
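The "straightforward computation" can be recorded explicitly; with $(U^*f)(x)=\mu^{1/4}f(\mu^{1/2}x)$ one checks the $\mu$-scale invariance of $T$ directly:

```latex
(TU^*f)(x) = -\frac{d^2}{dx^2}\,\mu^{1/4} f(\mu^{1/2}x)
           = -\mu^{5/4} f''(\mu^{1/2}x),
\qquad
(UTU^*f)(x) = -\mu^{-1/4}\,\mu^{5/4} f''(\mu^{-1/2}\mu^{1/2}x)
            = \mu\,(Tf)(x).
```

The boundary conditions $f(0)=0$ and $f'(0)=0$ are clearly preserved under $U^*$, whereas $U^*$ carries the domain of $\tilde A_s$ onto that of $\tilde A_{\mu^{1/2}s}$, which is why the intermediate extensions fail to be invariant.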
Concluding remarks
===================
We remark that any $\mu$-scale invariant operator $T$ with respect to a unitary transformation $U$ is also $\mu^n$-scale invariant with respect to the (unitary) transformations $U^n$, $n=0, 1, ...\, $. That is, $$\label{utut}
U^nTU^{-n}=\mu^n T,\quad \text{ for all } n\in \{0\}\cup{\mathbb N}.$$ If, in addition, $$U^*{\mathrm{Dom}}(T)={\mathrm{Dom}}(T),$$ then this relation holds for all $n\in {\mathbb Z}$. Thus, we naturally arrive at a slightly more general concept of scale invariance with respect to a one-parameter unitary representation of the additive group ${{\mathcal G}}$ (${{\mathcal G}}={\mathbb N}$ or ${{\mathcal G}}={\mathbb R}$): [*Given a character $\mu$, $\mu:{{\mathcal G}}\to {\mathbb C}$, of the group ${{\mathcal G}}$ and its one-parameter unitary representation $g\mapsto U_g$, a densely defined operator $T$ is said to be $\mu$-character-scale invariant with respect to the representation $U_g$ if $$U_g{\mathrm{Dom}}(T)={\mathrm{Dom}}(T),\quad g\in {{\mathcal G}},$$ and $$\label{ututu}
U_gTU_{-g}=\mu(g) T,\quad \text{on} \quad {\mathrm{Dom}}(T), \quad g\in {{\mathcal G}}.$$* ]{}
Clearly, an appropriate version of Theorem \[main\] can almost literally be restated in this more general setting. It is also worth mentioning that upon introducing the representation $V_g=\mu^gU_g$, $g\in {{\mathcal G}}$, one can rewrite the commutation relation above in the form $$\label{mumu}
U_gT=TV_g, \quad g\in {{\mathcal G}},$$ and we refer the interested reader to the papers [@LJ] and [@PT] where commutation relations for general groups ${{\mathcal G}}$ with not necessarily unitary representations $U_g$ and $V_g$, $g\in {{\mathcal G}}$, of the group ${{\mathcal G}}$ are discussed.
Note that an infinitesimal analog of the commutation relation is also available, provided that ${{\mathcal G}}={\mathbb R}$ and the unitary representation $U_t$, $t\in {\mathbb R}$, is strongly continuous. In this case the infinitesimal version can heuristically be written down as the following commutation relation $$\label{lie}
[B,T]=i\hbar T,$$ with $[\cdot,\cdot]$ the usual commutator and $$\label{con}
\hbar=-\log \mu,$$ the structure constant of the simplest noncommutative two-dimensional Lie algebra. Here $B$ is the infinitesimal generator of the group $U_t$, so that $U_t=e^{iBt}$, $t\in {\mathbb R}$. In conclusion, note that Theorem \[main\] paves the way for realizations of this Lie algebra by self-adjoint operators, provided that some “trial” symmetric realizations of the Lie algebra are available.

[**Acknowledgments**]{}. We would like to thank Steve Clark and Fritz Gesztesy for useful discussions.
[17]{}
N. I. Akhiezer and I. M. Glazman, Theory of Linear Operators in Hilbert Space. Dover, New York, 1993.
A. Alonso and B. Simon, [*The Birman–Krein–Vishik theory of self-adjoint extensions of semibounded operators*]{}. J. Operator Theory, [**4**]{} (1980), 251–270.
T. Ando, K. Nishio, [*Positive Selfadjoint Extensions of Positive Symmetric Operators*]{}. Tôhoku Math. Journ., [**22**]{} (1970), 65–75.
Yu. Arlinskii and E. Tsekanovskii, [*The von Neumann problem for nonnegative symmetric operators*]{}. Int. Eq. Oper. Theory, [**51**]{} (2005), 319–356.
M. Sh. Birman, [*On the theory of self-adjoint extensions of positive definite operators*]{}. (Russian) Mat. Sb. N. S., [**38(80)**]{} (1956), 431–450.
J. A. Brooke, P. Busch, D. B. Pearson, [*Commutativity up to a factor of bounded operators in complex Hilbert space.*]{} R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci., [**458**]{} (2002), no. 2017, 109–118.
E. A. Coddington, [*Extension theory of formally normal and symmetric subspaces.*]{} Mem. AMS [**134**]{}, 1973.
E. A. Coddington, H. S. V. de Snoo, [*Positive selfadjoint extensions of positive symmetric subspaces.*]{} Math. Z., [**159**]{} (1978), 203–214.
V. A. Derkach, M. M. Malamud and E. R. Tsekanovskii, [*Sectorial extension of a positive operator and the characteristic function*]{}. Soviet Math. Dokl. , [**37**]{} 1 (1988), 106–110.
V. A. Derkach, M. M. Malamud, [*Generalized resolvents and the boundary value problems for Hermitian operators with gaps*]{}. J. Funct. Anal. , [**95**]{} (1991), 1–95.
M. G. Krein, [*The Theory of Self-Adjoint Extensions of Semibounded Hermitian Transformations and its Applications*]{}. (Russian) I. Mat. Sb. , [**20**]{} (1947), 431-495.
M. Livsic and A. Jantsevich, Theory of operator colligations in Hilbert spaces. (Russian) Kharkov University Press, 1971.
M. A. Naimark, Linear differential operators. Part I: Elementary theory of linear differential operators. Frederick Ungar Publishing Co., New York, 1967.
A. P. Filimonov and E. R. Tsekanovskii, [*Automorphically invariant operator colligations and factorization of their characteristic operator-valued functions*]{}. (Russian) Funktsional. Anal. i Prilozhen., [**21**]{}, 4 (1987), 94–95.
Ch. Kreft and R. Seiler, [*Models of the Hofstadter-type*]{}. J. Math. Phys. [**37**]{} (1996), no. 10, 5207–5243.
[^1]: Recall that $f\in {\mathrm{Dom}}[{\bf a}]$ if and only if there exists a sequence $\{f_n\}_{n\in {\mathbb N}}$, $f_n\in {\mathrm{Dom}}(\dot A)$, such that\
$\lim_{n,m\to \infty}
\dot {\bf a}[f_n-f_m]=0$ and $\lim_{n\to \infty} f_n= f.$
[^2]: in the “metric” generated by the form ${\bf a}$
---
author:
- |
Paula Harring and Ralf K[ö]{}hl\
With appendices\
by Tobias Hartnick and Ralf K[ö]{}hl and\
by Julius Grüning and Ralf Köhl
title: 'Fundamental groups of split real Kac–Moody groups and generalized real flag manifolds'
---
Introduction {#setting}
============
The structure of maximal compact subgroups in semisimple Lie groups was investigated by Cartan and, later, Mostow: In [@Mos], Mostow gives a new proof of Cartan’s theorem stating that a connected semisimple Lie group $G$ is a topological product of a maximal compact subgroup $K$ and a Euclidean space, implying in particular that $G$ and $K$ have isomorphic fundamental groups. Subsequent case-by-case analysis provided the isomorphism types of these maximal compact subgroups and their fundamental groups; tables can be found in [@Helga p 518] and [@Salz 94.33].
Starting in the 1940’s, Dynkin diagrams, introduced in [@Dynkin], have been used to describe the structure of simple Lie groups. In this article, we present a uniform result which makes it possible to determine the fundamental group of any algebraically simply connected split real simple Lie group – and, more generally, any algebraically simply-connected semisimple split real topological Kac–Moody group – directly from its Dynkin diagram.
In [@Ti Theorem 1], Tits associates with every generalized Cartan matrix ${\mathbf{A}}$ and every commutative ring $k$ a group ${G}_k({\mathbf{A}})$. Let $\Pi$ be the Dynkin diagram of ${\mathbf{A}}$.
\[Kac–Moody group\] We set $G(\Pi) := [{G}_{\mathbb{R}}({\mathbf{A}}),{G}_{\mathbb{R}}({\mathbf{A}})]$ and refer to this group as the *algebraically simply-connected semisimple split real Kac–Moody group of type $\Pi$*.
Kac–Moody groups endowed with the Kac–Peterson topology have been studied extensively by the second author together with Glöckner and Hartnick in [@GGH] and with Hartnick and Mars in [@HKM]. Our result is applicable to those Kac–Moody groups whose Bruhat decompositions are CW decompositions and for which the embedding $K \hookrightarrow G$ is a weak homotopy equivalence.
In order to fix notations, let $G=G(\Pi)$ be the algebraically simply-connected split real semisimple Kac–Moody group associated to an irreducible diagram $\Pi = (V,E)$ endowed with the Kac–Peterson topology (for definitions, see Section \[Kac–Moody-basics\]). Let $K=K(\Pi)$ be the so-called maximal compact subgroup of $G(\Pi)$, i.e., the subgroup fixed by the Cartan–Chevalley involution $\theta$ of $G(\Pi)$.
Given the Dynkin diagram $\Pi = (V,E)$ with a fixed labelling $\lambda:\{1, \dots, n\} =: I \to V$, we define a modified diagram ${\Pi^{\mathrm{adm}}}$ with vertex set $V$ in which $\{i^\lambda,j^\lambda\}$ is an edge if and only if ${\varepsilon}(i,j) = {\varepsilon}(j,i) = -1$, where ${\varepsilon}(i,j)$ denotes the parity of the corresponding Cartan matrix entry. To each connected component $\bar{\Pi}^{\mathrm{adm}}$ of ${\Pi^{\mathrm{adm}}}$ we then assign a colour as follows: Let $\bar{\Pi}^{\mathrm{adm}}$ be coloured red (denoted by $r$) if it contains a vertex $i^\lambda$ such that there exists a vertex $j^\lambda \in V$ satisfying ${\varepsilon}(i,j) = 1$ and $ {\varepsilon}(j,i) = -1$. Let $\bar{\Pi}^{\mathrm{adm}}$ be coloured green ($g$) if it consists only of an isolated vertex, and blue ($b$) else.
One can then read off the isomorphism type of $\pi_1(G(\Pi))$ from the coloured diagram ${\Pi^{\mathrm{adm}}}$ as specified in the following theorem.
Let $\Pi$ be an irreducible Dynkin diagram such that the Bruhat decomposition of $G(\Pi)$ provides a CW decomposition (i.e., such that the conclusion of Proposition \[bruhat decomp is cw decomp\] holds) and such that the embedding $K \hookrightarrow G(\Pi)$ is a weak homotopy equivalence (i.e., such that the conclusion of Theorem \[FundamentalGroups2\] holds). Let $n(g)$ and $n(b)$ be the number of connected components of ${\Pi^{\mathrm{adm}}}$ of colour $g$ and $b$, respectively. Then $$\pi_1(G(\Pi)) {\cong}{\mathbb{Z}}^{n(g)} \times C_2^{n(b)}.$$ In particular, this statement holds in the symmetrizable case.
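Since the colouring recipe is purely combinatorial, it can be checked mechanically. A hedged Python sketch (our own illustrative code, reading ${\varepsilon}(i,j)$ as the parity $(-1)^{a_{ij}}$ of the Cartan entry) returns the pair $(n(g),n(b))$, so that $\pi_1(G(\Pi))\cong{\mathbb{Z}}^{n(g)}\times C_2^{n(b)}$:

```python
def pi1_exponents(cartan):
    """Given a generalized Cartan matrix, build the diagram Pi^adm
    (edge iff both off-diagonal entries are odd), colour its connected
    components, and return (n(g), n(b))."""
    n = len(cartan)
    eps = lambda i, j: (-1) ** (cartan[i][j] % 2)   # parity of the entry
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if eps(i, j) == -1 and eps(j, i) == -1:
                adj[i].add(j)
                adj[j].add(i)
    seen = set()
    n_green = n_blue = 0
    for start in range(n):
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:                                # DFS for one component
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] - comp)
        seen |= comp
        red = any(eps(i, j) == 1 and eps(j, i) == -1
                  for i in comp for j in range(n) if j != i)
        if not red:
            if len(comp) == 1:
                n_green += 1                        # green: isolated vertex
            else:
                n_blue += 1                         # blue: everything else
    return n_green, n_blue
```

For example, with the $C_3$ Cartan matrix in the convention $a_{23}=-2$, $a_{32}=-1$, the last vertex comes out green and the red chain is discarded, giving $(1,0)$, i.e. $\pi_1\cong{\mathbb{Z}}$, in agreement with the table below.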
$$\begin{array}{|c|c|c|}
\hline \rule{0pt}{\normalbaselineskip}
\Pi & {\Pi^{\mathrm{adm}}}\;\mathrm{coloured\;by}\;\gamma & \pi_1(G(\Pi))\\
\hline {\rule{0pt}{2\normalbaselineskip}}\begin{tikzpicture} \draw (-0.5,0) node[anchor=east] {$A_1$};
\node[dnode] (1) at (0,0) {};
;
\end{tikzpicture}
&
\begin{tikzpicture} \node[gnode,label=center:$g$] (1) at (0,0) {};
;
\end{tikzpicture}&
\pi_1({\mathrm{SL}}_{2}({\mathbb{R}})) {\cong}{\mathbb{Z}}{\rule{0pt}{2\normalbaselineskip}}\\ \hline {\rule{0pt}{2\normalbaselineskip}}\begin{tikzpicture} \draw (-0.5,0) node[anchor=east] {$A_n$};
\node[dnode] (1) at (0,0) {};
\node[dnode] (2) at (1,0) {};
\node[dnode] (3) at (3,0) {};
\node[dnode] (4) at (4,0) {};
\path (1) edge[sedge] (2)
(2) edge[sedge,dashed] (3)
(3) edge[sedge] (4)
;
\end{tikzpicture}
&
\begin{tikzpicture} \node[bnode,label=center:$b$] (1) at (0,0) {};
\node[bnode,label=center:$b$] (2) at (1,0) {};
\node[bnode,label=center:$b$] (3) at (3,0) {};
\node[bnode,label=center:$b$] (4) at (4,0) {};
\path (1) edge[sedge] (2)
(2) edge[sedge,dashed] (3)
(3) edge[sedge] (4)
;
\end{tikzpicture}&
\pi_1({\mathrm{SL}}_{n+1}({\mathbb{R}})) {\cong}C_2 \quad (n\geq 2) {\rule{0pt}{2\normalbaselineskip}}\\ \hline {\rule{0pt}{2\normalbaselineskip}}\begin{tikzpicture} \draw (-0.5,0) node[anchor=east] {$B_n$};
\node[dnode] (1) at (0,0) {};
\node[dnode] (2) at (1,0) {};
\node[dnode] (3) at (3,0) {};
\node[dnode] (4) at (4,0) {};
\path (1) edge[sedge] (2)
(2) edge[sedge,dashed] (3)
(3) edge[dedge] (4)
;
\end{tikzpicture}&
\begin{tikzpicture}
\node[bnode,label=center:$b$] (1) at (0,0) {};
\node[bnode,label=center:$b$] (2) at (1,0) {};
\node[bnode,label=center:$b$] (3) at (3,0) {};
\node[rnode,label=center:$r$] (4) at (4,0) {};
\path (1) edge[sedge] (2)
(2) edge[sedge,dashed] (3)
;
\end{tikzpicture}& \pi_1({\mathrm{Spin}}(n,n+1)) {\cong}\begin{cases} {\mathbb{Z}}&\mathrm{if }\;n\leq 2,\\C_2&\mathrm{if }\; n>2.\end{cases}{\rule{0pt}{2\normalbaselineskip}}\\ \hline {\rule{0pt}{2\normalbaselineskip}}\begin{tikzpicture} \draw (-0.5,0) node[anchor=east] {$C_n$};
\node[dnode] (1) at (0,0) {};
\node[dnode] (2) at (1,0) {};
\node[dnode] (3) at (3,0) {};
\node[dnode] (4) at (4,0) {};
\path (1) edge[sedge] (2)
(2) edge[sedge,dashed] (3)
(4) edge[dedge] (3)
;
\end{tikzpicture}
&
\begin{tikzpicture}
\node[rnode,label=center:$r$] (1) at (0,0) {};
\node[rnode,label=center:$r$] (2) at (1,0) {};
\node[rnode,label=center:$r$] (3) at (3,0) {};
\node[gnode,label=center:$g$] (4) at (4,0) {};
\path (1) edge[sedge] (2)
(2) edge[sedge,dashed] (3)
;
\end{tikzpicture}& \pi_1({\mathrm{Sp}}(2n,{\mathbb{R}})) {\cong}{\mathbb{Z}}{\rule{0pt}{2\normalbaselineskip}}\\ \hline {\rule{0pt}{2\normalbaselineskip}}\begin{tikzpicture} \draw (-0.5,0) node[anchor=east] {$D_n$};
\node[dnode] (1) at (0,0) {};
\node[dnode] (2) at (1,0) {};
\node[dnode] (3) at (3,0) {};
\node[dnode] (4) at (4,0) {};
\node[dnode] (5) at (3,1) {};
\path (1) edge[sedge] (2)
(2) edge[sedge,dashed] (3)
(3) edge[sedge] (4)
(3) edge[sedge] (5)
;
\end{tikzpicture}&
\begin{tikzpicture}
\node[bnode,label=center:$b$] (1) at (0,0) {};
\node[bnode,label=center:$b$] (2) at (1,0) {};
\node[bnode,label=center:$b$] (3) at (3,0) {};
\node[bnode,label=center:$b$] (4) at (4,0) {};
\node[bnode,label=center:$b$] (5) at (3,1) {};
\path (1) edge[sedge] (2)
(2) edge[sedge,dashed] (3)
(3) edge[sedge] (4)
(3) edge[sedge] (5)
;
\end{tikzpicture}& \pi_1({\mathrm{Spin}}(n,n)) {\cong}C_2 \quad (n\geq 3) {\rule{0pt}{2\normalbaselineskip}}\\ \hline {\rule{0pt}{2\normalbaselineskip}}\begin{tikzpicture} \draw (-0.5,0) node[anchor=east] {$F_4$};
\node[dnode] (1) at (0,0) {};
\node[dnode] (2) at (1,0) {};
\node[dnode] (3) at (2,0) {};
\node[dnode] (4) at (3,0) {};
\path (1) edge[sedge] (2)
(3) edge[dedge] (2)
(3) edge[sedge] (4)
;
\end{tikzpicture}
&
\begin{tikzpicture}
\node[rnode,label=center:$r$] (1) at (0,0) {};
\node[rnode,label=center:$r$] (2) at (1,0) {};
\node[bnode,label=center:$b$] (3) at (2,0) {};
\node[bnode,label=center:$b$] (4) at (3,0) {};
\path (1) edge[sedge] (2)
(3) edge[sedge] (4)
;
\end{tikzpicture}&
\pi_1(F_4) {\cong}C_2 {\rule{0pt}{2\normalbaselineskip}}\\ \hline {\rule{0pt}{2\normalbaselineskip}}\begin{tikzpicture} \draw (-0.5,0) node[anchor=east] {$G_2$};
\node[dnode] (1) at (0,0) {};
\node[dnode] (2) at (1,0) {};
\path (1) edge[tedge] (2)
;
\end{tikzpicture}
&
\begin{tikzpicture} \node[bnode,label=center:$b$] (1) at (0,0) {};
\node[bnode,label=center:$b$] (2) at (1,0) {};
\path (1) edge[sedge] (2)
;
\end{tikzpicture}&
\pi_1(G_{2,2}) {\cong}C_2 \\ \hline \end{array}$$
$$\begin{array}{|c|c|c|}
\hline \rule{0pt}{\normalbaselineskip}
\Pi & {\Pi^{\mathrm{adm}}}\;\mathrm{coloured\;by}\;\gamma & \pi_1(G(\Pi))\\
\hline {\rule{0pt}{2\normalbaselineskip}}\begin{tikzpicture} \draw (-0.5,-3.5) node[anchor=east] {$E_{10}$};
\node[dnode] (1) at (0,0) {};
\node[dnode] (2) at (0,-1) {};
\node[dnode] (3) at (0,-2) {};
\node[dnode] (4) at (0,-3) {};
\node[dnode] (5) at (0,-4) {};
\node[dnode] (6) at (0,-5) {};
\node[dnode] (7) at (0,-6) {};
\node[dnode] (8) at (0,-7) {};
\node[dnode] (9) at (0,-8) {};
\node[dnode] (10) at (1,-6) {};
\path (1) edge[sedge] (2)
(2) edge[sedge] (3)
(3) edge[sedge] (4)
(4) edge[sedge] (5)
(5) edge[sedge] (6)
(6) edge[sedge] (7)
(7) edge[sedge] (8)
(8) edge[sedge] (9)
(7) edge[sedge] (10)
;
\end{tikzpicture}
&
\begin{tikzpicture}
\node[bnode,label=center:$b$] (1) at (0,0) {};
\node[bnode,label=center:$b$] (2) at (0,-1) {};
\node[bnode,label=center:$b$] (3) at (0,-2) {};
\node[bnode,label=center:$b$] (4) at (0,-3) {};
\node[bnode,label=center:$b$] (5) at (0,-4) {};
\node[bnode,label=center:$b$] (6) at (0,-5) {};
\node[bnode,label=center:$b$] (7) at (0,-6) {};
\node[bnode,label=center:$b$] (8) at (0,-7) {};
\node[bnode,label=center:$b$] (9) at (0,-8) {};
\node[bnode,label=center:$b$] (10) at (1,-6) {};
\path (1) edge[sedge] (2)
(2) edge[sedge] (3)
(3) edge[sedge] (4)
(4) edge[sedge] (5)
(5) edge[sedge] (6)
(6) edge[sedge] (7)
(7) edge[sedge] (8)
(8) edge[sedge] (9)
(7) edge[sedge] (10)
;
\end{tikzpicture}& \pi_1(E_{10}) {\cong}C_2 {\rule{0pt}{2\normalbaselineskip}}\\ \hline {\rule{0pt}{2\normalbaselineskip}}\begin{tikzpicture} \draw (-0.5,-2.5) node[anchor=east] {$X$};
\node[dnode] (1a) at (0,0) {};
\node[dnode] (1b) at (1,0) {};
\node[dnode] (1c) at (2,0) {};
\node[dnode] (2a) at (0,-1) {};
\node[dnode] (2b) at (1,-1) {};
\node[dnode] (2c) at (2,-1) {};
\node[dnode] (3a) at (0,-2) {};
\node[dnode] (3b) at (1,-2) {};
\node[dnode] (3c) at (2,-2) {};
\node[dnode] (4a) at (0,-3) {};
\node[dnode] (4b) at (1,-3) {};
\node[dnode] (4c) at (2,-3) {};
\node[dnode] (5a) at (0,-4) {};
\node[dnode] (5b) at (1,-4) {};
\node[dnode] (5c) at (2,-4) {};
\node[dnode] (6a) at (0,-5) {};
\path (1a) edge[tedge] (1b)
(1b) edge[sedge] (1c)
(2a) edge[dedge] (1a)
(2a) edge[dedge] (2b)
(2c) edge[dedge] (1c)
(2c) edge[dedge] (2b)
(2c) edge[dedge] (3c)
(3a) edge[sedge] (2a)
(3b) edge[dedge] (3a)
(3b) edge[dedge] (3c)
(3b) edge[dedge] (4b)
(3c) edge[tedge] (4c)
(4a) edge[dedge] (3a)
(4a) edge[dedge] (4b)
(4a) edge[sedge] (5a)
(5b) edge[dedge] (4b)
(5b) edge[sedge] (5c)
(5a) edge[sedge] (6a)
;
\end{tikzpicture}
&
\begin{tikzpicture} \node at (0,.5){};
\node[rnode,label=center:$r$] (1a) at (0,0) {};
\node[rnode,label=center:$r$] (1b) at (1,0) {};
\node[rnode,label=center:$r$] (1c) at (2,0) {};
\node[rnode,label=center:$r$] (2a) at (0,-1) {};
\node[rnode,label=center:$r$] (2b) at (1,-1) {};
\node[gnode,label=center:$g$] (2c) at (2,-1) {};
\node[rnode,label=center:$r$] (3a) at (0,-2) {};
\node[gnode,label=center:$g$] (3b) at (1,-2) {};
\node[rnode,label=center:$r$] (3c) at (2,-2) {};
\node[bnode,label=center:$b$] (4a) at (0,-3) {};
\node[rnode,label=center:$r$] (4b) at (1,-3) {};
\node[rnode,label=center:$r$] (4c) at (2,-3) {};
\node[bnode,label=center:$b$] (5a) at (0,-4) {};
\node[bnode,label=center:$b$] (5b) at (1,-4) {};
\node[bnode,label=center:$b$] (5c) at (2,-4) {};
\node[bnode,label=center:$b$] (6a) at (0,-5) {};
\path (1a) edge[sedge] (1b)
(1b) edge[sedge] (1c)
(3a) edge[sedge] (2a)
(3c) edge[sedge] (4c)
(4a) edge[sedge] (5a)
(5b) edge[sedge] (5c)
(5a) edge[sedge] (6a)
;
\end{tikzpicture}& \pi_1(G(X)) {\cong}{\mathbb{Z}}^2 \times C_2^2 \\ \hline
\end{array}$$
While in the classical case one has a topological Iwasawa decomposition $G = K\times A \times U_+$ with $A$ and $U_+$ contractible, implying $\pi_1(K) {\cong}\pi_1(G)$, it is currently unknown whether the corresponding Iwasawa decomposition in the general Kac–Moody case is also topological. However, using a fibration result by Palais, Hartnick and the second author prove in the appendix that the isomorphism of fundamental groups still holds in the general symmetrizable case, thereby reducing the problem to the computation of $\pi_1(K)$.
In [@GHKW Section 16], the group ${{\mathrm{Spin}}(\Pi,\kappa)}$ – where $\kappa$ denotes a so-called *admissible colouring* of the vertices of $\Pi$ – is defined as the canonical universal enveloping group of a ${\mathrm{Spin}}(2)$-amalgam $\mathcal{A}(\Pi, {\mathrm{Spin}}(2)) = \{\tilde{G}_{ij}, \tilde{\oldphi}_{ij}^i \mid i \neq j \in I\}$ where the isomorphism type of $\tilde{G}_{ij}$ depends on the $(i,j)$- and $(j,i)$-entries of the Cartan matrix of $\Pi$ as well as the values of $\kappa$ on the corresponding vertices.
It is shown in [@GHKW Section 17] that there exists a finite central extension ${\mathrm{Spin}}(\Pi, \kappa) \to K(\Pi)$ which implies that the subspace topology on $K(\Pi)$ defines a unique topology on ${\mathrm{Spin}}(\Pi, \kappa)$ that turns the extension into a covering map. The resulting group topology on ${\mathrm{Spin}}(\Pi, \kappa)$ is called the [*Kac–Peterson topology*]{} on ${\mathrm{Spin}}(\Pi, \kappa)$.
In the simply-laced case, there is a unique non-trivial admissible colouring $\kappa$ and the corresponding group ${\mathrm{Spin}}(\Pi):= {\mathrm{Spin}}(\Pi,\kappa)$ double-covers $K$ as shown in [@GHKW]. We prove here that ${\mathrm{Spin}}(\Pi)$ is simply connected which then implies that $\pi_1(K) {\cong}C_2$.
The strategy of proof in the simply-laced case is to study fibre bundles of the form $${\mathrm{Spin}}(3) \to {\mathrm{Spin}}(\Pi) \to {\mathrm{Spin}}(\Pi)/{\mathrm{Spin}}(3),$$ which yield exact sequences of the form $$\{1\} = \pi_1({\mathrm{Spin}}(3)) \rightarrow \pi_1({\mathrm{Spin}}(\Pi)) \rightarrow \pi_1({\mathrm{Spin}}(\Pi)/{\mathrm{Spin}}(3)).$$ Using the building, one can prove that the fibre bundles arising from the embedding of ${\mathrm{Spin}}(3)$ along a subdiagram of type $A_2$ indeed satisfy $\pi_1({\mathrm{Spin}}(\Pi)/{\mathrm{Spin}}(3))=\{1\}$, which then establishes the simple connectedness of ${\mathrm{Spin}}(\Pi)$.
A key ingredient of the proof, both in the simply-laced and in the general case, is the computation of the fundamental groups of *generalized flag varieties* – that is, spaces of the form $G/P_J$ for a parabolic subgroup $P_J$ of $G$ corresponding to an index subset $J \subseteq I$. We prove the following theorem:
Let $\Pi$ be an irreducible Dynkin diagram such that the Bruhat decomposition of $G(\Pi)$ provides a CW decomposition (i.e., such that the conclusion of Proposition \[bruhat decomp is cw decomp\] holds) and let $J \subseteq I$. Then a presentation of $\pi_1(G/P_J)$ is given by
$$\left\langle x_i; \quad i \in I \mid x_ix_j^{{\varepsilon}(i,j) } = x_jx_i,\quad x_k = 1; \quad i,j \in I, k \in J \right\rangle.$$ In particular, this statement holds in the two-spherical and in the symmetrizable case.
We refer to [@Wig] for the analogous result in the finite-dimensional situation.
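As a sanity check, consider the smallest case $\Pi = A_1$, so $G = {\mathrm{SL}}_2({\mathbb{R}})$, with $J = \emptyset$: there is no pair $i \neq j$, and $J$ contributes no relation $x_k = 1$, so (granting that the relation is vacuous for $i = j$) the presentation reduces to the free group on one generator,

```latex
\pi_1(G/B) \;\cong\; \langle x_1 \mid - \rangle \;\cong\; \mathbb{Z},
```

consistent with $G/B \cong {\mathbb{P}}_1({\mathbb{R}}) {\simeq}S^1$, cf. Lemma \[panels are spheres\] below.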
In order to determine $\pi_1(K)$ in the general case, we compute subgroups of $\pi_1(K)$ corresponding to the index sets of connected components of ${\Pi^{\mathrm{adm}}}$ using the above theorem and covering maps of type $K/K_J \to K/(K \cap T)K_J$. We then show that $\pi_1(K)$ is a direct product of these subgroups.
In a very similar way, the fundamental group of ${\mathrm{Spin}}(\Pi,\kappa)$ is determined, establishing the following theorem:
Let $\Pi$ be an irreducible Dynkin diagram such that the Bruhat decomposition of $G(\Pi)$ provides a CW decomposition (i.e., such that the conclusion of Proposition \[bruhat decomp is cw decomp\] holds). Let $n(g)$ be the number of connected components of ${\Pi^{\mathrm{adm}}}$ of colour $g$. Let $n(b,\kappa)$ be the number of connected components of ${\Pi^{\mathrm{adm}}}$ on which $\kappa$ takes the value 1 and which have colour $b$. Then $$\pi_1({\mathrm{Spin}}(\Pi,\kappa)) {\cong}{\mathbb{Z}}^{n(g)} \times C_2^{n(b,\kappa)}.$$ In particular, this statement holds in the two-spherical and in the symmetrizable case.
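The last row of the table above illustrates the counting: for the diagram $X$, the coloured diagram ${\Pi^{\mathrm{adm}}}$ has two connected components of colour $g$ (the two isolated green vertices) and two connected components of colour $b$; as the table records, each green component contributes a factor ${\mathbb{Z}}$ and each blue component a factor $C_2$ (for the appropriate value of $\kappa$), while the remaining components contribute trivially:

```latex
\pi_1(G(X)) \;\cong\; \mathbb{Z}^{2} \times C_2^{2}.
```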
**Acknowledgements.** The research leading to this article has been partially funded by DFG via the project KO 4323/11. The authors thank Julius Grüning for various helpful remarks on an earlier version of this article.
Split-real Kac–Moody groups {#Split-real Kac--Moody groups}
===========================
\[Kac–Moody-basics\]
In [@Kac2 §1.3], Kac associates with every generalized Cartan matrix ${\mathbf{A}}= (a_{ij})_{1\leq i,j \leq n} \in {\mathbb{Z}}^{n\times n}$ a quadruple $({\mathfrak{g}}_{\mathbb{C}}({\mathbf{A}}), {\mathfrak{h}}_{\mathbb{C}}({\mathbf{A}}), \Psi, \check{\Psi})$ consisting of a complex Lie algebra ${\mathfrak{g}}_{\mathbb{C}}({\mathbf{A}})$, an abelian subalgebra ${\mathfrak{h}}_{\mathbb{C}}({\mathbf{A}})$, and linearly independent finite subsets $\Psi = \{\alpha_1, \dots, \alpha_n\} \subseteq {\mathfrak{h}}_{\mathbb{C}}({\mathbf{A}})^*$ and $\check{\Psi} = \{\check{\alpha}_1, \dots, \check{\alpha}_n\} \subseteq {\mathfrak{h}}_{\mathbb{C}}({\mathbf{A}})$, called *simple roots* and *simple coroots*, respectively, such that $\alpha_j(\check{\alpha}_i) = a_{ij}$. Associated with such a quadruple is a Lie algebra generating set $\{e_1, \dots, e_n,f_1,\dots,f_n\} \cup {\mathfrak{h}}_{\mathbb{C}}({\mathbf{A}})$. The complex Lie algebra ${\mathfrak{g}}_{\mathbb{C}}({\mathbf{A}})$ is called the *complex Kac–Moody algebra* associated with ${\mathbf{A}}$, and ${\mathfrak{h}}_{\mathbb{C}}({\mathbf{A}})$ its *standard Cartan subalgebra*.
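The defining conditions of a generalized Cartan matrix ($a_{ii} = 2$, $a_{ij}$ a non-positive integer for $i \neq j$, and $a_{ij} = 0$ if and only if $a_{ji} = 0$) are purely combinatorial and easy to verify mechanically. A minimal sketch (the function name is ours; the example matrices are the standard Cartan matrices of types $A_2$ and $G_2$):

```python
def is_generalized_cartan_matrix(A):
    """Check the defining conditions of a generalized Cartan matrix."""
    n = len(A)
    for i in range(n):
        if A[i][i] != 2:
            return False
        for j in range(n):
            if i == j:
                continue
            # off-diagonal entries are non-positive integers
            if not (isinstance(A[i][j], int) and A[i][j] <= 0):
                return False
            # a_ij = 0 if and only if a_ji = 0
            if (A[i][j] == 0) != (A[j][i] == 0):
                return False
    return True

A2  = [[2, -1], [-1, 2]]   # finite type A_2
G2  = [[2, -3], [-1, 2]]   # finite type G_2
bad = [[2, -1], [0, 2]]    # violates a_ij = 0  <=>  a_ji = 0
```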
Since $a_{ij} \in {\mathbb{R}}$, one can analogously define a quadruple $({\mathfrak{g}}_{\mathbb{R}}({\mathbf{A}}), {\mathfrak{h}}_{\mathbb{R}}({\mathbf{A}}), \Psi, \check{\Psi})$ where ${\mathfrak{g}}_{\mathbb{R}}({\mathbf{A}})$ is a real Lie algebra that embeds naturally into ${\mathfrak{g}}_{\mathbb{C}}({\mathbf{A}})$ as the real form given by the involution induced by complex conjugation. One refers to ${\mathfrak{g}}_{\mathbb{R}}({\mathbf{A}})$ as the *split real Kac–Moody algebra* associated with ${\mathbf{A}}$ and to ${\mathfrak{h}}_{\mathbb{R}}({\mathbf{A}})$ as its *standard split Cartan subalgebra*.
Let $Q \subseteq {\mathfrak{h}}_{\mathbb{R}}({\mathbf{A}})^*$ be the group generated by $\Psi$ and $Q_{\pm}$ the subsemigroups generated by $\pm \Psi$, respectively. For $k \in \{{\mathbb{C}}, {\mathbb{R}}\}$ and $\alpha \in {\mathfrak{h}}_k({\mathbf{A}})^*$ define the *root space* $${\mathfrak{g}}_{\alpha}^k := \{X \in {\mathfrak{g}}_k({\mathbf{A}}) \mid \forall H \in {\mathfrak{h}}_k({\mathbf{A}}): [H,X] = \alpha(H)X\}.$$ The *set $\Delta$ of ${\mathfrak{h}}_k({\mathbf{A}})$-roots in ${\mathfrak{g}}_k({\mathbf{A}})$* is defined as $\Delta:=\{\alpha \in Q \setminus \{0\} \mid {\mathfrak{g}}_\alpha^k \neq \{0\}\}$. One has the *root space decomposition* $${\mathfrak{g}}_k({\mathbf{A}}) = {\mathfrak{h}}_k({\mathbf{A}}) \oplus \bigoplus_{\alpha \in \Delta} {\mathfrak{g}}_\alpha^k.$$
The set $\Delta$ decomposes as a disjoint union into the subsets $\Delta_{\pm}:= \Delta \cap Q_{\pm}$.
For $i \in \{1,\dots, n\}$ define the *fundamental root reflection* $\sigma_i \in {\mathrm{GL}}({\mathfrak{h}}_{\mathbb{R}}({\mathbf{A}})^*)$ by $$\sigma_i(\lambda):= \lambda -\lambda(\check{\alpha}_i)\alpha_i.$$ Then the *Weyl group* of ${\mathfrak{g}}_{\mathbb{R}}({\mathbf{A}})$ is defined as $W:= \langle \sigma_1, \dots, \sigma_n\rangle \leq {\mathrm{GL}}({\mathfrak{h}}_{\mathbb{R}}({\mathbf{A}})^*)$; together with the set of fundamental root reflections it forms a Coxeter system. Finally, define the *set of real roots* $\Phi:= W.\Psi \subseteq \Delta$ and $\Phi^{\pm} := \Delta_{\pm} \cap \Phi$.
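Since $\sigma_i(\alpha_j) = \alpha_j - \alpha_j(\check{\alpha}_i)\alpha_i = \alpha_j - a_{ij}\alpha_i$, the fundamental reflections act on the span of the simple roots by matrices read off directly from the Cartan matrix, and $W$ can be generated by closure under multiplication. A minimal sketch (function names ours; the closure terminates only when $W$ is finite, e.g. for type $A_2$, where $W \cong S_3$ has order $6$):

```python
def reflection_matrix(A, i):
    """Matrix of sigma_i in the basis of simple roots:
    sigma_i(alpha_j) = alpha_j - a_ij * alpha_i, so only row i of the
    identity matrix changes."""
    n = len(A)
    M = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    for j in range(n):
        M[i][j] -= A[i][j]
    return tuple(tuple(row) for row in M)

def weyl_group(A):
    """Closure of the fundamental reflections under multiplication.
    Terminates only when the Weyl group of A is finite."""
    n = len(A)
    gens = [reflection_matrix(A, i) for i in range(n)]

    def mul(X, Y):
        return tuple(tuple(sum(X[r][k] * Y[k][c] for k in range(n))
                           for c in range(n)) for r in range(n))

    identity = tuple(tuple(1 if r == c else 0 for c in range(n))
                     for r in range(n))
    seen, frontier = {identity}, [identity]
    while frontier:
        new = []
        for g in frontier:
            for s in gens:
                h = mul(s, g)
                if h not in seen:
                    seen.add(h)
                    new.append(h)
        frontier = new
    return seen
```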
The construction in [@Ti] of ${G}_{\mathbb{R}}({\mathbf{A}})$ (see Definition \[Kac–Moody group\]) provides a representation of ${G}_{\mathbb{R}}({\mathbf{A}})$ on ${\mathfrak{g}}_{\mathbb{R}}({\mathbf{A}})$ by Lie algebra automorphisms, which is denoted by $$\operatorname{Ad}: {G}_{\mathbb{R}}({\mathbf{A}}) \to \operatorname{Aut}({\mathfrak{g}}_{\mathbb{R}}({\mathbf{A}})),$$ and referred to as the *adjoint representation* of ${G}_{\mathbb{R}}({\mathbf{A}})$. Since the image $\operatorname{Ad}(G(\Pi))$ of $G(\Pi)$ under this representation preserves the commutator subalgebra ${\mathfrak{g}}_{\mathbb{R}}'({\mathbf{A}})$, one obtains an adjoint representation $$\operatorname{Ad}: G(\Pi) \to \operatorname{Aut}({\mathfrak{g}}_{\mathbb{R}}'({\mathbf{A}}))$$ for $G(\Pi)$. The kernels of the adjoint representations of ${G}_{\mathbb{R}}({\mathbf{A}})$ and $G(\Pi)$ are given by the respective centres.
An element $X \in {\mathfrak{g}}_{\mathbb{R}}({\mathbf{A}})$ is *$\operatorname{ad}$-locally-finite* if for every element $Y \in {\mathfrak{g}}_{\mathbb{R}}({\mathbf{A}})$ there exists an $\operatorname{ad}(X)$-invariant finite-dimensional subspace $W$ with $Y \in W$. As pointed out in [@Mar p. 64], this implies that $\left.\operatorname{ad}(X)\right|_W$ is a (finite) matrix in some basis of $W$, so the exponential $\exp(\operatorname{ad}(X))$ can be defined in the usual way. By [@Ti (KMG5), p. 545] and the uniqueness properties of $G_{\mathbb{R}}({\mathbf{A}})$ established in [@Ti Theorem 1], $\exp(\operatorname{ad}(X)) \in \operatorname{Ad}(G_{\mathbb{R}}({\mathbf{A}}))$. Let $F_{{\mathfrak{g}}_{\mathbb{R}}({\mathbf{A}})}$ and $F_{{\mathfrak{g}}_{\mathbb{R}}'({\mathbf{A}})}$ be the subsets of $\operatorname{ad}$-locally-finite elements of the respective algebras. The maps $\exp: F_{{\mathfrak{g}}_{\mathbb{R}}({\mathbf{A}})} \to \operatorname{Ad}({G}_{\mathbb{R}}({\mathbf{A}}))$ and $\exp: F_{{\mathfrak{g}}_{\mathbb{R}}'({\mathbf{A}})} \to \operatorname{Ad}(G(\Pi))$ given by $X\mapsto \exp(\operatorname{ad}(X))$ can be lifted to exponential functions $\exp: F_{{\mathfrak{g}}_{\mathbb{R}}({\mathbf{A}})} \to {G}_{\mathbb{R}}({\mathbf{A}})$ and $\exp: F_{{\mathfrak{g}}_{\mathbb{R}}'({\mathbf{A}})} \to G(\Pi)$.
For $X \in {\mathfrak{h}}_{\mathbb{R}}({\mathbf{A}})\subseteq F_{{\mathfrak{g}}_{\mathbb{R}}({\mathbf{A}})}$, one has $$\begin{aligned}
\mathrm{ad}(X)(e_i) = [X,e_i] = \alpha_i(X) e_i,& \quad\quad & \mathrm{Ad}(\exp(X))(e_i) = e^{\alpha_i(X)}\cdot e_i,\label{AdRootSpaces} \\
\mathrm{ad}(X)(f_i) = [X,f_i] = -\alpha_i(X) f_i,& \quad\quad & \mathrm{Ad}(\exp(X))(f_i) = e^{-\alpha_i(X)}\cdot f_i, \end{aligned}$$ cf. [@Kum Section 6.1.6], [@Ti (KMG5), p. 545]. The same constructions apply also to ${\mathbb{C}}$ instead of ${\mathbb{R}}$. Since ${\mathfrak{h}}_{\mathbb{C}}({\mathbf{A}}) \subseteq F_{{\mathfrak{g}}_{\mathbb{C}}({\mathbf{A}})}$, one can define $T_{\mathbb{C}}:= \exp({\mathfrak{h}}_{\mathbb{C}}({\mathbf{A}}))$. Note that $\exp({\mathfrak{h}}_{\mathbb{R}}({\mathbf{A}})) =: A_{\mathbb{R}}\subsetneq T_{{\mathbb{R}}} := T_{\mathbb{C}}\cap {G}_{\mathbb{R}}({\mathbf{A}})$. In fact, $A_{\mathbb{R}}$ is of index $2^n$ in $T_{\mathbb{R}}$ and there is a unique Lie group topology on $T_{\mathbb{R}}$ in which $T_{\mathbb{R}}{\cong}({\mathbb{R}}^\times)^n$ and $A_{\mathbb{R}}= T_{\mathbb{R}}^\circ {\cong}({\mathbb{R}}_{>0})^n$. The centre of ${G}_{\mathbb{R}}({\mathbf{A}})$ is contained in $T_{\mathbb{R}}$.
The intersection $T:= G(\Pi) \cap T_{\mathbb{C}}$ is called the *standard split maximal torus* of $G(\Pi)$; again, $A_{\mathbb{R}}\cap T$ is of finite index in $T$ and $T$ contains the centre of $G(\Pi)$.
The Lie algebra $\mathfrak g_{\mathbb{R}}({\mathbf A})$ admits a unique involution $\theta$ which maps $e_j$ to $f_j$ for all $j=1, \dots, n$ and acts as $-1$ on $\mathfrak h_{\mathbb{R}}({\mathbf A})$. There exists a unique involutive automorphism $\theta: G_{\mathbb{R}}({\mathbf{A}}) \to G_{\mathbb{R}}({\mathbf{A}})$ such that $\theta(\exp(X)) = \exp(\theta(X))$ for all $X \in F_{\mathfrak g_{\mathbb{R}}({\mathbf A})}$, and this involutive automorphism is called the *Cartan–Chevalley involution* of $G_{\mathbb{R}}({\mathbf{A}})$. We denote by $K_{\mathbb{R}}({\mathbf{A}}) := G_{\mathbb{R}}({\mathbf{A}})^\theta \subset G_{\mathbb{R}}({\mathbf{A}})$ the fixed point subgroup of this involution and define $K(\Pi) := K_{\mathbb{R}}({\mathbf{A}}) \cap G(\Pi)$.
Let $\alpha \in \Phi$ be a real root. Then ${\mathfrak{g}}_\alpha^{\mathbb{R}}$ is one-dimensional and consists of ad-locally-finite elements. One can therefore define the *root group* $U_\alpha:= \exp({\mathfrak{g}}_\alpha^{\mathbb{R}}) \subseteq {G}_{\mathbb{R}}({\mathbf{A}})$. Each root group $U_\alpha$ carries a unique Lie group topology such that $U_\alpha {\cong}{\mathbb{R}}$ as topological groups.
Define the *positive*, respectively *negative maximal unipotent subgroup* $U^\pm$ of ${G}_{\mathbb{R}}({\mathbf{A}})$ as the group generated respectively by the positive and negative root groups. One has $U^\pm \subseteq G(\Pi)$. The groups $U^\pm$ are normalized by $T_{\mathbb{R}}$ and intersect $T_{\mathbb{R}}$ trivially. In particular, they intersect the centres of ${G}_{\mathbb{R}}({\mathbf{A}})$ and $G(\Pi)$ trivially and hence embed into both $\operatorname{Ad}({G}_{\mathbb{R}}({\mathbf{A}}))$ and $\operatorname{Ad}(G(\Pi))$.
If $\alpha \in \Phi^+$, then $-\alpha \in \Phi^-$ and the group $G_\alpha:= \langle U_\alpha, U_{-\alpha} \rangle \leq G(\Pi)$ is isomorphic to ${\mathrm{SL}}_2({\mathbb{R}})$. The groups $G_\alpha$ with $\alpha \in \Phi^+$ are called the *rank one subgroups* and the groups $G_1:=G_{\alpha_1}, \dots, G_n:=G_{\alpha_n}$ are called the *fundamental rank one subgroups* of $G(\Pi)$.
One can show that the pair $((U_\alpha)_{\alpha \in \Phi}, T)$ defines an RGD system for $G(\Pi)$. For details concerning RGD systems, we refer the reader to [@AB Chapter 8].
\[definition kp\] The *Kac–Peterson topology* on ${G}_{\mathbb{R}}({\mathbf{A}})$ equals the finest group topology on ${G}_{\mathbb{R}}({\mathbf{A}})$ such that the natural embeddings $(U_\alpha \hookrightarrow {G}_{\mathbb{R}}({\mathbf{A}}))_{\alpha \in \Phi}$ and $T_{\mathbb{R}}\hookrightarrow {G}_{\mathbb{R}}({\mathbf{A}})$ are continuous when $T_{\mathbb{R}}$ and the root groups $U_\alpha$ are endowed with their Lie group topologies.
The Kac–Peterson topology is $k_\omega$ by [@HKM Proposition 7.10] and, in particular, Hausdorff. Moreover, for every $\alpha \in \Phi^+$, it induces the unique connected Lie group topology on $G_\alpha$ and on $T_{\mathbb{R}}$ by [@HKM Corollary 7.16].
For more details on the Kac–Peterson topology, see [@HKM Chapter 7].
Throughout this paper, let $G:= G(\Pi)$ be the algebraically simply connected centered split real Kac–Moody group associated to an irreducible generalized Dynkin diagram $\Pi = (V,E)$ with (bijective) labelling $\lambda:\{1, \dots, n\} =: I \to V $. Let $K := K(\Pi)$ be the maximal compact subgroup of $G$, i.e., the subgroup fixed by the Cartan–Chevalley involution $\theta$.
The groups $G$ and $K$ are always endowed with the subspace topologies induced by the Kac–Peterson topology on $G_{\mathbb{R}}({\mathbf{A}})$ and $G/B$ with the quotient topology.
Denote by $B:= B_+$ the positive Borel subgroup of the twin $BN$-pair of $G$, by $T$ the standard split maximal torus and by $W$ the Weyl group of $G$ with generating set $S = \{\sigma_i\}_{i \in I}$. For each $\sigma_i \in S$, take $s_i \in G$ to be a fixed representative for $\sigma_i$. By [@DMGH Corollary 1.7], one has an Iwasawa decomposition $G = KB$.
Unless specified more explicitly, the symbol $J$ will always denote an arbitrary subset of the index set $I$, the symbol $\Pi_J$ the subdiagram of $\Pi$ corresponding to $J$, the symbol $G_J$ the subgroup $G(\Pi_J)$ of $G$, and the symbols $K_J$ and $B_J$ the intersections $G_J \cap K$ and $G_J \cap B$, respectively. This is consistent with the notation for the fundamental rank one subgroups: One has $G(\Pi_i) = G_i = G_{\alpha_i}$.
\[homeos alpha beta\] Due to the structure theory of RGD systems (cf. [@AB Chapter 8], most notably the fact that restricting an RGD system to a subdiagram again yields an RGD system), for each fundamental rank one subgroup $G_{i}$ there exists an (abstract) isomorphism $\gamma_{i}: {\mathrm{SL}}(2,{\mathbb{R}}) \to G_{i}$ with the following properties: Let $B_{{\mathrm{SL}}(2,{\mathbb{R}})}$ be the group of upper triangular matrices in ${\mathrm{SL}}(2,\mathbb{R})$ and let $U_{\pm \beta}$ denote the canonical root subgroups of ${\mathrm{SL}}(2,{\mathbb{R}})$. Then
- $\gamma_{i}(U_{\pm \beta}) = U_{\pm \alpha_{i}}$.
- $\gamma_{i}(B_{{\mathrm{SL}}(2,{\mathbb{R}})}) = B_{i}$.
- For each $x \in {\mathrm{SL}}(2,{\mathbb{R}})$, $\gamma_{i}((x^t)^{-1}) = (\gamma_{i}(x))^{\theta}$, and hence
- $\gamma_{i}({\mathrm{SO}}(2,{\mathbb{R}})) = K_i$.
By [@HKM Corollary 7.16], the restriction of the Kac–Peterson topology to any spherical subgroup $H$ of $G$ coincides with its Lie topology. That is, the groups $G_{i}$ inherit their Lie group topology from the topological Kac–Moody group $G$. By the classical theory of Lie groups this yields the existence of a diffeomorphism $\gamma_i$ with the desired properties; in particular, $\gamma_{i}$ is open.
\[basics\] Let $$\begin{aligned}
\delta: G/B \times G/B & \to & W \\ \delta(gB,hB) = w & \iff & g^{-1}h \in BwB\end{aligned}$$ be the Weyl distance function on $G/B$, and let $l_S$ be the length function that associates to each element of $W$ the (common) length of its reduced expressions in $S$. Let $\leq$ be the strong Bruhat order on $W$: Recall that for $w_1, w_2 \in W$ one has $w_1 \leq w_2$ if there exist reduced expressions $s_{i_1}\dots s_{i_{l_S(w_1)}}$ of $w_1$ and $s_{j_1}\dots s_{j_{l_S(w_2)}}$ of $w_2$ such that the former is a (not necessarily consecutive) substring of the latter.
For $w \in W$ and a chamber $gB \in G/B$ define $$C_w(gB) := \{hB \in G/B\mid \delta(gB, hB) = w\},$$ $$C_{\leq w}(gB) := \bigcup_{v \leq w}C_v(gB)$$ and $$C_{<w}(gB) :=C_{\leq w}(gB) \setminus C_w(gB).$$ In particular, one has $C_w(B)= BwB/B$ and $C_{\leq \sigma}(B) = B \langle s \rangle B/B$ for $\sigma \in S$ with representative $s \in \tilde{W}\subseteq G$ in the extended Weyl group $\tilde{W}$. A set $C_{\leq \sigma}(gB)$ is called a *$\sigma$-panel*.
Moreover, for a subset $\{\sigma_i\}_{i \in J} \subseteq S$ with representatives $\{s_i\}_{i \in J} \subseteq \tilde{W}$ define $P_J$ to be the standard parabolic subgroup corresponding to the index set $J$, that is, $P_J := B\langle \{s_i\}_{i \in J} \rangle B$.
Throughout this paper, $C_w(gB)$ and $C_{\leq w}(gB)$ will always be endowed with the subspace topologies induced by $G/B$.
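The subword characterization of the strong Bruhat order recalled above can be tested mechanically. A minimal sketch (helper names ours; inputs are assumed to be reduced words) in the smallest non-abelian Weyl group $W(A_2) \cong S_3$, with $s_1, s_2$ realized as the adjacent transpositions of $\{0,1,2\}$:

```python
from itertools import combinations

def apply_word(word):
    """Evaluate a word in the generators s_1, s_2 (encoded 1, 2) as a
    permutation of (0, 1, 2); successive position swaps give a
    consistent evaluation of products."""
    p = [0, 1, 2]
    for s in word:
        p[s - 1], p[s] = p[s], p[s - 1]
    return tuple(p)

def bruhat_leq(w1_word, w2_word):
    """w1 <= w2 iff the given reduced word of w2 contains a (not
    necessarily consecutive) subword of length l(w1) evaluating to w1;
    such a subword is automatically reduced."""
    target = apply_word(w1_word)
    k = len(w1_word)
    return any(apply_word([w2_word[i] for i in idx]) == target
               for idx in combinations(range(len(w2_word)), k))
```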
\[P\_i = G\_iB\] Let $\sigma_i \neq \sigma_j \in S$. Then the following hold:
1. $P_i = G_i B$ $= K_i B$. In particular, $C_{\leq \sigma_i}(B) = K_iB/B$.
2. $Bs_i s_jB = Bs_iB Bs_jB$. In particular, $C_{\leq \sigma_i \sigma_j}(B) = K_iBK_jB/B$.
Assertions (a) and (b) follow from [@AB Remark 8.51] and [@AB Remark (2) after Theorem 6.56], respectively, and the Iwasawa decomposition $G_i = K_iB_i$.
The fundamental group of $G/P_J$
=============================
Throughout this section, let $J \subseteq I$, let $W_J$ be the subgroup of $W$ generated by $\{\sigma_i\}_{i \in J}$, and let $W^J \subseteq W$ be a set of representatives of the cosets in $W/W_J$ that have minimal length. The space $G/P_J$ is called a *generalized flag variety*.
\[bruhat\] One has $G/P_J = \bigsqcup_{w\in W^J}BwP_J/P_J$.
By [@AB Theorem 6.56, Remark (1)], the group $G$ has a Bruhat decomposition $$G = \bigsqcup_{w \in W} BwB = \bigcup_{w \in W} BwBW_JB = \bigcup_{w \in W} BwP_J.$$ Since double cosets partition $G$, one has $w_1 W_J = w_2 W_J$ if and only if $Bw_1 P_J = B w_2 P_J$ for $w_1, w_2 \in W$. This yields the desired disjoint decomposition of $G/P_J$.
\[quotient basics\]
Let $G$ be a topological group and $H_1 \leq H_2$ subgroups of $G$ and endow $G/H_i$ with the quotient topology. Then the following hold:
1. The projection map $\pi: G \to G/H_1$ is continuous and open.
2. The canonical map $\psi: G/H_1 \to G/H_2$ is continuous and open.
(a): Let $U \subseteq G$ be open. Since $\pi$ is a quotient map, it suffices to show that $\pi^{-1}(\pi(U))$ is open. But this is true since $\pi^{-1}(\pi(U)) = UH_1 = \bigcup_{h \in H_1}Uh$ is a union of translates of open sets.
(b): This follows directly from (a) and the commutative diagram $$\begin{tikzcd}
G \arrow[rd, "{\varphi}"] \arrow[d, "\pi"] \\
G/H_1 \arrow[r, "\psi"] & G/H_2
\end{tikzcd}.\qedhere$$
\[definition psi\_w\] For $w \in W$, define the following restrictions of the canonical map $\psi: G/B \to G/P_J$:
- $\psi_w: BwB/B \to BwP_J/P_J$,
- $\psi_{\bar{w}}: \bigcup_{x \leq w} BxB/B \to \bigcup_{x \leq w} BxP_J/P_J$.
Since $\psi$ is continuous, the same holds for the two restrictions. The space $\bigcup_{x \leq w} BxB/B$ is compact by [@HKM Corollary 3.10] and so $\psi_{\bar{w}}$ is a quotient map.
\[hom between BwB/B and BwP/P\] Let $G$ be two-spherical or symmetrizable and let $w \in W^J$. Then the canonical map $\psi_w$ is a homeomorphism.
By Remark \[definition psi\_w\], $\psi_{\bar{w}}$ is a quotient map. One has $\psi_{\bar{w}}^{-1}(BwP_J/P_J) = BwB/B$: Let $x \leq w$ such that $BxP_J/P_J = BwP_J/P_J$. Then $x\in BwP_J = BwW_JB$ where the equality holds since by definition of $W^J$ one has $l(ww')= l(w)+l(w')$ for all $w' \in W_J$ which implies $Bww'B = BwBw'B$. The Bruhat decomposition of $G$ yields $x \in wW_J$ and hence, $l(x) \geq l(w)$. This implies $x = w$.
Now, since $BwB/B$ is open in its closure $\bigcup_{x \leq w} BxB/B$ in $G/B$ (see [@HKM Proposition 5.9] plus Corollary \[topstrong\]), the preceding observations yield that $\psi_w$ is an injective quotient map and therefore a homeomorphism.
\[panels are spheres\] Let $\sigma_i \in S$. Then the panels satisfy $C_{\leq \sigma_i}(B) {\simeq}S^1({\mathbb{R}})$.
The panel $C_{\leq \sigma_i}(B)$ is a subbuilding of $G/B$ corresponding to the RGD system $\{G_i, U_{\alpha_{i}}, U_{-\alpha_{i}}, T \cap G_i \}$. By Remark \[homeos alpha beta\] one has $G_i {\cong}{\mathrm{SL}}(2,{\mathbb{R}})$, $T \cap G_i {\cong}T_{{\mathrm{SL}}(2,{\mathbb{R}})}$ and $U_{\pm\alpha_{i}} {\cong}U_{\pm\alpha}$ where $T_{{\mathrm{SL}}(2,{\mathbb{R}})}$ denotes the subgroup of diagonal matrices and $U_{\pm\alpha}$ denote the canonical root subgroups of ${\mathrm{SL}}(2,{\mathbb{R}})$. This implies that $C_{\leq \sigma_i}(B)$ is homeomorphic to the building ${\mathrm{SL}}(2,{\mathbb{R}})/B_{{\mathrm{SL}}(2,{\mathbb{R}})} {\simeq}{\mathbb{P}}_1({\mathbb{R}}) {\simeq}S^1({\mathbb{R}})$.
Following [@Rot Chapter 8], a *CW complex* is an ordered triple $(X, E, \chi)$, where $X$ is a Hausdorff space, $E$ is a family of cells in $X$, and $\chi = \{\chi_e\mid e\in E\}$ is a family of maps, such that
1. $X = \bigsqcup_{e\in E}e$.
2. For $k \in {\mathbb{N}}$, let $X^{(k)}\subseteq X$ be the union of all cells of dimension $\leq k$. Then for each $(k+1)$-cell $e \in E$, the map $\chi_e:(D^{k+1}, S^{k}) \to (e\cup X^{(k)}, X^{(k)})$ is a *relative homeomorphism*, i.e., it is a continuous map and its restriction $D^{k+1} \setminus S^{k} \to e$ is a homeomorphism.
3. If $e \in E$, then its closure $\operatorname{cl}{e}$ is contained in a finite union of cells in $E$.
4. $X$ has the weak topology determined by $\{\operatorname{cl}{e} \mid e \in E\}$, i.e., a subset $A$ of $X$ is closed if and only if $A \cap \operatorname{cl}{e}$ is closed in $\operatorname{cl}{e}$ for each $e \in E$.
For $k \in {\mathbb{N}}$, let $\Lambda_k$ be an index set for the $k$-dimensional cells, so that $X^{(k)} \setminus X^{(k-1)} = \bigsqcup_{\lambda \in \Lambda_k} e_\lambda$ and set $\chi_\lambda:= \chi_{e_\lambda}$. This map is called the *characteristic map* of $e_\lambda$.
\[bruhat decomp is cw decomp\] Let $G$ be two-spherical or symmetrizable. Then for each $w \in W$, the set $C_w(B) = BwB/B$ is a cell of dimension $l(w)$ that is open in its compact closure $C_{\leq w}(B)$ in $G/B$. For each subset $J \subseteq I$, the Bruhat decomposition $G/P_J = \bigsqcup_{w\in W^J}BwP_J/P_J$ is a CW decomposition.
The first statement is immediate by [@HKM Corollary 3.10 and Proposition 5.9] plus Corollary \[topstrong\], see also [@Kra p. 170, 171]. Furthermore, [@HKM Proposition 5.9] combined with Corollary \[topstrong\] states that the Bruhat decomposition of $G/B$ is a CW decomposition. By Lemma \[hom between BwB/B and BwP/P\], $G/P_J$ is composed of cells that are homeomorphic to cells in $G/B$, so composing the characteristic maps of the latter cells with the canonical map $\psi: G/B \to G/P_J$ yields characteristic maps for the cells in $G/P_J$.
For the closure-finiteness, let $BwP_J/P_J$ be a cell in $G/P_J$. Since $\psi$ is continuous and restricts to a homeomorphism $BwB/B \to BwP_J/P_J$, it maps $\operatorname{cl}{BwB/B}$ surjectively onto $\operatorname{cl}{BwP_J/P_J}$. Now, $\operatorname{cl}BwB/B = \bigcup_{x \leq w} BxB/B$, which implies that $$\operatorname{cl}{BwP_J/P_J} = \bigcup_{x \leq w} BxP_J/P_J = \bigcup_{\substack{x\leq w \\ x \in W^J}} BxP_J/P_J,$$ where the last equality holds since $W_J \subseteq P_J$. This proves that $\operatorname{cl}{BwP_J/P_J}$ is contained in a finite union of cells.
It remains to show that $G/P_J$ has the weak topology determined by the cell closures.
For $w\in W$ and a minimal-length representative $\tilde{w}\in W^J$ of $wW_J$, one has $BwP_J/P_J = B\tilde{w}P_J/P_J$. Let $e_w:= BwP_J/P_J = B\tilde{w}P_J/P_J$ and $e_w':= BwB/B$. Let $\bar{e}_w = \operatorname{cl}{e_w} = \bigcup_{x \leq \tilde{w}} BxP_J/P_J$ and $\bar{e}_w':= \operatorname{cl}{e'_w} = \bigcup_{x \leq w} BxB/B$.
Let $A$ be a closed subset of $G/P_J$ and let $e_w$, $w \in W^J$, be an arbitrary cell. Then $\psi^{-1}(A)$ is closed in $G/B$ since $\psi$ is continuous, so $\psi^{-1}(A) \cap \bar{e}_w'$ is closed in $\bar{e}_w'$ since $G/B$ is a CW complex. Now, $$\psi^{-1}(A) \cap \bar{e}_w' = \psi^{-1}(A) \cap \psi^{-1}(\bar{e}_w) = \psi^{-1}(A \cap \bar{e}_w) = \psi_{\bar{w}}^{-1}(A \cap \bar{e}_w).$$ Since $\psi_{\bar{w}}$ is a quotient map by Remark \[definition psi\_w\], this implies that $A \cap \bar{e}_w$ is closed in $\bar{e}_w$.
Now, let $A$ be a subset of $G/P_J$ such that $A \cap \bar{e}_w$ is closed in $\bar{e}_w$ for all $w \in W^J$. Since for each $w \in W$ one has $e_w = e_{\tilde{w}}$ for any minimal-length representative $\tilde{w} \in W^J$ of $wW_J$, in fact $A \cap \bar{e}_w$ is closed in $\bar{e}_w$ for all $w \in W$. Therefore $\psi_{\bar{w}}^{-1}(A \cap \bar{e}_w)$ is closed in $\bar{e}_w'$ for all $w \in W$. Since $\psi_{\bar{w}}^{-1}(A \cap \bar{e}_w) = \psi^{-1}(A) \cap \bar{e}_w'$, the fact that $G/B$ is a CW complex implies that $\psi^{-1}(A)$ is closed in $G/B$. Since $\psi$ is open by Lemma \[quotient basics\], it follows that $A$ is closed in $G/P_J$. This proves that $G/P_J$ is a CW complex.
Define $R: [0,1] \to {\mathrm{SO}}(2,{\mathbb{R}}), s \mapsto \begin{pmatrix} \cos(s\pi) & -\sin(s\pi) \\ \sin(s\pi) & \cos(s\pi) \end{pmatrix}.$
\[the map R\] $R$ induces a continuous, surjective map $\tilde{R}: [0,1] \to {\mathrm{SL}}(2,{\mathbb{R}})/B_{{\mathrm{SL}}(2,{\mathbb{R}})}$ which maps the interior $(0, 1)$ homeomorphically onto its image and maps the boundary $\{0, 1\}$ surjectively onto its image.
Let $x_0:= \left\langle \begin{pmatrix} 1 & 0 \end{pmatrix}^\intercal \right\rangle \in {\mathbb{P}}^1$, where ${\mathbb{P}}^1$ denotes the real projective line, modelled as the set of one-dimensional subspaces of ${\mathbb{R}}^2$. Since each one-dimensional subspace in ${\mathbb{P}}^1 \setminus \{x_0\}$ contains exactly one element in the upper half circle $R([0,1]) \cdot \begin{pmatrix} 1 & 0 \end{pmatrix}^\intercal$ while $x_0$ contains the two boundary points corresponding to $R(0)$ and $R(1)$, one has a surjection from $[0,1]$ onto ${\mathbb{P}}^1$ given by $t \mapsto \left\langle R(t) \cdot \begin{pmatrix} 1 & 0 \end{pmatrix}^\intercal \right\rangle$ which maps $(0,1)$ bijectively onto ${\mathbb{P}}^1 \setminus \{x_0\}$. Since ${\mathrm{SL}}(2,{\mathbb{R}})$ acts transitively on the real projective line ${\mathbb{P}}^1$ with $B_{{\mathrm{SL}}(2,{\mathbb{R}})}$ being the stabilizer of $x_0$, one has a bijective correspondence $gB \mapsto g x_0$ between ${\mathrm{SL}}(2,{\mathbb{R}})/B_{{\mathrm{SL}}(2,{\mathbb{R}})}$ and ${\mathbb{P}}^1$. This yields the desired surjectivity and bijectivity properties of $\tilde{R}$. Continuity is clear, as well as the fact that the restriction to the interior is a homeomorphism.
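The boundary behaviour used above, $R(0) = I$ and $R(1) = -I$ with both matrices lying in $B_{{\mathrm{SO}}(2,{\mathbb{R}})}$, together with the identity $R(t)^{-1} = -R(1-t)$, can be illustrated by a quick numerical sketch (ours, for illustration only; plain Python):

```python
# Numerical sanity check (not part of the proof) that R(s) is the rotation
# by angle s*pi, so that R(0) = I and R(1) = -I, and that R(t)^{-1} = -R(1-t).
import math


def R(s):
    """Rotation matrix R(s) as a 2x2 nested list."""
    c, d = math.cos(s * math.pi), math.sin(s * math.pi)
    return [[c, -d], [d, c]]


def close(A, B, eps=1e-12):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(2) for j in range(2))

assert close(R(0), [[1, 0], [0, 1]])      # R(0) = I
assert close(R(1), [[-1, 0], [0, -1]])    # R(1) = -I

# R(t)^{-1} = -R(1-t): the inverse of a rotation is its transpose
t = 0.3
A = R(t)
A_inv = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
assert close(A_inv, [[-v for v in row] for row in R(1 - t)])
```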
The following lemma is a consequence of [@Mas Ch. 7, Thm 2.1].
\[presentation cw\] Let $X$ be a CW complex with only one $0$-cell $x_0$. For each $\lambda \in \Lambda_2$, let $f_\lambda:[0,1] \to S^1$ be a loop whose homotopy class generates $\pi_1(S^1)$, and let $\gamma_\lambda:= \chi_\lambda \circ f_\lambda$ denote its image under the characteristic map $\chi_\lambda$, a loop in $X^{(1)}$ starting at $x_0$. Then $$\left \langle [\chi_\mu], \quad \mu \in \Lambda_1 \mid [\gamma_\lambda], \quad \lambda \in \Lambda_2 \right \rangle$$ is a presentation of $\pi_1(X,x_0)$, where the brackets denote the respective homotopy classes in $X^{(1)}$.
Let $D^1 = [0,1]$ be the one-dimensional unit disc and note that $D^2 {\simeq}D^1 \times D^1$. For $i,j \in I$ let $\gamma_i, \gamma_j$ be as in Remark \[homeos alpha beta\]. Let $p:G \to G/B$ be the canonical projection. Define $\chi_i: D^1 \to G/B$ and $\chi_{(i,j)}: D^1 \times D^1 \to G/B$ by
- $\chi_i(s) := p(\gamma_i(R(s))) = \gamma_i(R(s))\cdot B$,
- $\chi_{(i,j)}(s,t):= p(\gamma_i(R(s))\gamma_j(R(t))) = \gamma_i(R(s))\gamma_j(R(t))\cdot B$.
The following lemma is inspired by [@Pro Ch. 10, second Proposition of 6.8]; see also [@Kac §2.6, p. 198].
\[char maps\] Let $G$ be two-spherical or symmetrizable. Then the maps defined above are characteristic maps for the following cells:
1. $\chi_i$ for $C_{{\sigma}_i}(B) = B {s}_i B /B $,
2. $\chi_{(i,j)}$ for $C_{{\sigma}_i {\sigma}_j}(B) = B {s}_i {s}_j B /B$.
(a): One has to show that $\chi_i([0,1]) \subseteq C_{\leq \sigma_i}(B)$ and that $\chi_i$ is a continuous map which maps $(0,1)$ homeomorphically to $C_{\sigma_i}(B)$. The first assertion is clear, since by Lemma \[P\_i = G\_iB\] one has $C_{\leq \sigma_i}(B) = G_iB/B$.
By Lemma \[P\_i = G\_iB\], one has $C_{\sigma_i}(B) = \{kB \mid k \in K_i \setminus (K_i \cap B)\}$. Let $k \in K_i \setminus (K_i \cap B)$. Then $\gamma_i^{-1}(p^{-1} (kB)) = \gamma_i^{-1}(k)\cdot B_{{\mathrm{SL}}(2,{\mathbb{R}})} \in {\mathrm{SL}}(2,{\mathbb{R}})/B_{{\mathrm{SL}}(2,{\mathbb{R}})} \setminus \{B_{{\mathrm{SL}}(2,{\mathbb{R}})}\}$. By Lemma \[the map R\], there exists a unique $s \in (0,1)$ satisfying $R(s)B_{{\mathrm{SL}}(2,{\mathbb{R}})} = \gamma_i^{-1}(k) B_{{\mathrm{SL}}(2,{\mathbb{R}})}$. Hence, $s$ is the unique preimage of $kB$ under $\chi_i$. This yields the desired bijectivity property. The continuity properties are clear.
(b): Since by Lemma \[P\_i = G\_iB\] (c) one has $C_{\leq \sigma_i \sigma_j}(B) = K_iBK_jB/B$, it is clear that $\chi_{(i,j)}([0,1]\times [0,1]) \subseteq C_{\leq {\sigma}_i {\sigma}_j}(B)$. For the injectivity of the restriction, let $(s,t), (\tilde{s}, \tilde{t}) \in (0,1)^2$ such that $\chi_{(i,j)}(s,t) = \chi_{(i,j)}(\tilde{s}, \tilde{t})$. Then $$\begin{aligned}
\gamma_i(R(s)) \gamma_j(R(t))B & = \gamma_i(R(\tilde{s})) \gamma_j(R(\tilde{t}))B\\
\iff (\gamma_i(R(\tilde{s})))^{-1} \gamma_i(R(s)) \gamma_j(R(t))B & = \gamma_j(R(\tilde{t}))B \in C_{\sigma_j}(B). \end{aligned}$$ This implies $R(\tilde{s})^{-1} R(s) \in B_{{\mathrm{SO}}(2, {\mathbb{R}})}$, since otherwise the left expression is in $C_{\sigma_i\sigma_j}(B)$, contradicting $C_{\sigma_i \sigma_j}(B) \cap C_{\sigma_j}(B) = \emptyset$. Since $s, \tilde{s} \in (0,1)$, one obtains $\tilde{s} = s$. It follows that $\chi_j(t) = \chi_j(\tilde{t})$, hence $t = \tilde{t}$ by (a).
For the surjectivity, note that by Lemma \[P\_i = G\_iB\] (c), one has $C_{\sigma_i\sigma_j}(B) = Bs_is_jB/B = Bs_iBBs_jB/B$. Let $x_i x_jB$ be an arbitrary element of $C_{\sigma_i\sigma_j}(B)$ with $x_i = b_1 s_i b_2 \in B s_i B$ and $ x_j \in B s_j B$. By (a), there exists an $s \in (0,1)$ with $\gamma_i(R(s))B = b_1 s_iB \in C_{\sigma_i}(B)$. Hence, there exists a $b \in B$ with $\gamma_i(R(s))b = b_1 s_i b_2 = x_i$. Again by (a), there exists a $t \in (0,1)$ with $\gamma_j(R(t))B = bx_jB \in C_{\sigma_j}(B)$. This yields $$\begin{aligned}
\chi_{(i,j)}(s,t) & = \gamma_i(R(s)) \cdot \gamma_j(R(t)) B\\
& = x_i b^{-1} \cdot bx_j B\\
& = x_i x_j B.\end{aligned}$$ This proves that $\chi_{(i,j)}$ maps $(0,1) \times (0,1)$ bijectively to $C_{\sigma_i\sigma_j}(B)$. The continuity properties are clear.
\[epsilon\] For $i,j \in I$, let ${\varepsilon}(i,j):= (-1)^{\langle\check{\alpha}_i, \alpha_j\rangle},$ where $\langle\check{\alpha}_i, \alpha_j\rangle$ is the $(i,j)$-entry of the Cartan matrix of $\Pi$.
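For instance (an illustration of the notation, ours, with 0-based indices; plain Python), for the Cartan matrix of type $A_2$ one obtains ${\varepsilon}(i,i) = 1$ on the diagonal and ${\varepsilon}(i,j) = -1$ off the diagonal:

```python
# epsilon(i, j) = (-1)^{a_ij} for a Cartan matrix A = (a_ij); illustrated
# with the type A_2 Cartan matrix (indices 0-based here).
A2 = [[2, -1],
      [-1, 2]]


def epsilon(A, i, j):
    """Sign (-1)^{a_ij}; it depends only on the parity of the entry a_ij."""
    return 1 if A[i][j] % 2 == 0 else -1

assert epsilon(A2, 0, 0) == 1 and epsilon(A2, 1, 1) == 1    # diagonal entries are 2
assert epsilon(A2, 0, 1) == -1 and epsilon(A2, 1, 0) == -1  # off-diagonal entries are -1
```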
\[vertauschen\] Let $e_i:= \gamma_i(-I) \in G_i$ with $\gamma_i$ as in Remark \[homeos alpha beta\] and $k_j \in K_j$. Then $e_i k_j e_i = k_j^{{\varepsilon}(i,j)}$.
\[fundamental group\] If the Bruhat decomposition satisfies the conclusion of Proposition \[bruhat decomp is cw decomp\], then a presentation of $\pi_1(G/P_J)$ is given by
$$\left\langle x_i; \quad i \in I \mid x_ix_j^{{\varepsilon}(i,j) } = x_jx_i,\quad x_k = 1; \quad i,j \in I, k \in J \right\rangle.$$ In particular, this statement holds in the two-spherical and the symmetrizable case.
By Lemma \[hom between BwB/B and BwP/P\] and Proposition \[bruhat decomp is cw decomp\], the Bruhat decomposition $G/P_J = \bigsqcup_{w\in W^J}BwP_J/P_J$ is a $CW$ decomposition where each cell $BwP_J/P_J$ has dimension $l(w)$. For each 1-cell $Bs_iP_J/P_J$ and 2-cell $Bs_is_jP_J/P_J$, the compositions $\tilde{\chi}_i := \psi_{s_i} \circ \chi_i$ and $\tilde{\chi}_{(i,j)}:= \psi_{s_is_j} \circ \chi_{(i,j)}$ are, respectively, characteristic maps ($\psi_{s_i}$ and $\psi_{s_is_j}$ denoting the canonical homeomorphisms from Lemma \[hom between BwB/B and BwP/P\]).
Lemma \[presentation cw\] gives a presentation of $\pi_1(G/P_J)$. The generating elements are given by the homotopy classes $x_i:=[\tilde{\chi}_i]$ of the characteristic maps of the 1-cells, namely the cells $B s_i P_J/P_J$ where $i \in I \setminus J$. For the homotopy classes $x_k$ with $k \in J$, note that $\gamma_k(R(t)) \in G_k \subseteq P_J$, and so $\tilde{\chi}_k(t) = \gamma_k(R(t)) \cdot P_J = P_J$, which implies $x_k = [\tilde{\chi}_k] = 1_{\pi_1(G/P_J)}$. This yields the desired generating set as well as the trivial relation $x_k = 1$ for $k \in J$. To obtain the set of relators, for $k = 1, \dots, 4$ let ${\varphi}_k: [0,1] \to [0,1] \times [0,1]$ where $$\begin{aligned}
{\varphi}_1(t) &= (t,0),\\
{\varphi}_2(t) &= (1,t),\\
{\varphi}_3(t) &= (1-t,1),\\
{\varphi}_4(t) &= (0,1-t).\end{aligned}$$ Then the concatenation ${\varphi}:= {\varphi}_1 * {\varphi}_2 * {\varphi}_3 * {\varphi}_4$ is a loop in the relative boundary $\partial([0,1] \times [0,1]) \simeq S^1$ which generates its fundamental group. Moreover, for each characteristic map $\tilde{\chi}_{(i,j)}$ of a 2-cell, one has $\tilde{\chi}_{(i,j)}({\varphi}(0)) = \tilde{\chi}_{(i,j)}((0,0)) = \psi_{s_is_j}({\chi}_{(i,j)}(0,0)) = \psi_{s_is_j}(B) = P_J$ where $P_J$ is the unique 0-cell of the CW complex. Therefore, Lemma \[presentation cw\] implies that the set of relators is given by $\{[\tilde{\chi}_{(i,j)} \circ {\varphi}] \mid \sigma_i\sigma_j \in W^J, l(\sigma_i\sigma_j) = 2\}$. Now, $$\begin{aligned}
[\tilde{\chi}_{(i,j)} \circ {\varphi}] & = [\tilde{\chi}_{(i,j)} \circ {\varphi}_1] \cdot [\tilde{\chi}_{(i,j)} \circ {\varphi}_2] \cdot [\tilde{\chi}_{(i,j)} \circ {\varphi}_3]\cdot [\tilde{\chi}_{(i,j)} \circ {\varphi}_4], \end{aligned}$$ where
$\tilde{\chi}_{(i,j)} (s,t) = \gamma_i(R(s))\gamma_j(R(t))\cdot P_J$ with $R(0) = I_{{\mathrm{SO}}(2,{\mathbb{R}})}, R(1) = -I_{{\mathrm{SO}}(2,{\mathbb{R}})} \in B_{{\mathrm{SO}}(2,{\mathbb{R}})}$, which implies $$\begin{aligned}
[\tilde{\chi}_{(i,j)} \circ {\varphi}_1] &= x_i, \\
[\tilde{\chi}_{(i,j)} \circ {\varphi}_3] &= x_i^{-1},\\
[\tilde{\chi}_{(i,j)} \circ {\varphi}_4] &= x_j^{-1}. \end{aligned}$$
Moreover, $$\begin{aligned}
(\tilde{\chi}_{(i,j)} \circ {\varphi}_2)(t) &= \gamma_i(-I)\gamma_j(R(t))\cdot P_J\\
& = \gamma_i(-I)\gamma_j(R(t)) \gamma_i(-I) \cdot P_J, \quad \text{ since }\gamma_i(-I)\in P_J\\
& = \gamma_j(R(t))^{{\varepsilon}(i,j)}\cdot P_J \quad \text{ by Lemma \ref{vertauschen}}.\end{aligned}$$ Since $\gamma_j(R(t))^{-1} \cdot P_J = \gamma_j(R(1-t)) \cdot P_J$ (one has $R(t)^{-1} = -R(1-t)$ and $\gamma_j(-I) \in P_J$), this yields $[\tilde{\chi}_{(i,j)} \circ {\varphi}_2] = x_j^{{\varepsilon}(i,j)}$. One therefore obtains $[\tilde{\chi}_{(i,j)} \circ {\varphi}] = x_i \cdot x_j^{{\varepsilon}(i,j)} \cdot x_i^{-1} \cdot x_j^{-1}$. This proves the assertion.
\[cardinality fundamental group\] Let $\Pi$ be simply-laced and $J \neq \emptyset$. Then $\pi_1(G/P_J) {\cong}{C_2}^{n-|J|}$.
For each generator $x_h$ in the presentation of Theorem \[fundamental group\], one has $x_h^2 = 1$: Recall that $\lambda$ denotes the labelling map $I \to V$ of the vertex set of $\Pi$. Since $\Pi$ is connected, there is a minimal path $(i_1, \dots, i_m = h)^\lambda$ in $\Pi$ with $i_1 \in J$. If $m = 1$, one has $x_h = 1$ by the presentation above. Assume inductively that $x_{i_1}, \dots, x_{i_{m-1}}$ have order at most $2$. Since $\Pi$ is simply-laced, ${\varepsilon}(i_{m-1},h) = -1 = {\varepsilon}(h,i_{m-1})$, which implies $x_h x_{i_{m-1}}^{-1} x_h^{-1} x_{i_{m-1}}^{-1} = 1$ and $x_{i_{m-1}} x_h^{-1} x_{i_{m-1}}^{-1} x_h^{-1} = 1$. Multiplying these expressions and using $x_{i_{m-1}}^2 = 1$ yields $x_h^2 = 1$.
Since each generator has order $\leq 2$, the relations show that the group is abelian. One concludes that $\pi_1(G/P_J) \cong {C_2}^{n-|J|}$.
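As a concrete instance of the corollary (a worked example of ours, not taken from the text), consider $\Pi = A_2$ with $n = 2$ and $J = \{1\}$:

```latex
% Example: \Pi = A_2 (simply-laced), J = \{1\}, so
% \varepsilon(1,2) = \varepsilon(2,1) = -1. The presentation of
% Theorem [fundamental group] reads
\pi_1(G/P_J) \cong \left\langle x_1, x_2 \;\middle|\;
  x_1 x_2^{-1} = x_2 x_1,\; x_2 x_1^{-1} = x_1 x_2,\; x_1 = 1 \right\rangle.
% Setting x_1 = 1 turns the first relation into x_2^{-1} = x_2, i.e.
% x_2^2 = 1, while the second relation becomes trivial; hence
\pi_1(G/P_J) \cong \langle x_2 \mid x_2^2 = 1 \rangle \cong C_2 = {C_2}^{n-|J|}.
```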
The fundamental groups of $K(\Pi)$ and ${\mathrm{Spin}}(\Pi,\kappa)$ {#spin_simply_con}
==================================================
\[homeo K/(K cap P\_J) to G/P\_J\] The canonical map $\psi: K/(K \cap P_J) \to G/P_J$ is a homeomorphism.
Bijectivity follows from the product formula for subgroups since $G = KP_J$. By Lemma \[quotient basics\], the map $\tilde{\psi}: G/(K \cap P_J) \to G/P_J$ is continuous, so the same holds for its bijective restriction $\psi: K/(K \cap P_J) \to G/P_J$.
In order to show that $\psi$ is closed, let $P:= P_J$ and let $\tilde{P} := P_J \cap K$. Consider the commutative diagram $$\begin{tikzcd}
K/\tilde{P} \arrow[rd, "\psi"] \arrow[d, "\iota"] \\
G/\tilde{P} \arrow[r, "{\varphi}"] & G/P
\end{tikzcd}$$ where $\iota$ denotes the canonical embedding and ${\varphi}$ denotes the canonical map from $G/\tilde{P}$ to $G/P$. Since $K$ is closed in $G$ by [@FHHK Section 3F], the map $\iota$ is closed. By Lemma \[quotient basics\], ${\varphi}$ is open.
Let $X\tilde{P} \subseteq K/\tilde{P}$ be a closed subset of $K/\tilde{P}$ and suppose that $\psi(X\tilde{P}) = X P$ is not closed in $G/P$. Then the complement ${\mathsf{C}}_{G/P}(XP)$ is not open in $G/P$, hence the complement ${\mathsf{C}}_{G/\tilde{P}}({\varphi}^{-1} (XP)) = {\varphi}^{-1} ({\mathsf{C}}_{G/P}(XP))$ is not open in $G/\tilde{P}$. Therefore, ${\varphi}^{-1} (XP)$ is not closed in $G/\tilde{P}$. This yields that $X\tilde{P} = \psi^{-1}(XP) = \iota^{-1}({\varphi}^{-1}(XP))$ is not closed in $K/\tilde{P}$, a contradiction.
\[homeo G/P\_J to K/(K cap T) K\_J\] There exists a homeomorphism $G/P_J \to K/(K \cap T) K_J$.
Since $P_J = G_JB$ and $\theta(P_J) \cap P_J = G_J T$, one has $P_J \cap K = K_J (K \cap T)$. Furthermore, $G_J$ is normal in $G_J T$ which implies $K_J(K \cap T) = (K \cap T) K_J$. The claim now follows from Lemma \[homeo K/(K cap P\_J) to G/P\_J\].
\[covering basics\] Let ${\varphi}: X \to Y$ be a continuous, open, surjective map between Hausdorff topological spaces. If all fibers are finite and of constant cardinality, then ${\varphi}$ is a covering map.
Let $y \in Y$ and let ${\varphi}^{-1}(y) = \{x_1, \dots, x_k\} \subseteq X$. Since $X$ is Hausdorff and the fiber is finite, there exist pairwise disjoint open neighborhoods $U_i$ of the $x_i$, $i = 1, \dots, k$. Let $V:= \bigcap_{i = 1}^k {\varphi}(U_i)$. Then $V$ is open since ${\varphi}$ is open, and $V \neq \emptyset$ since $y \in V$. The preimage ${\varphi}^{-1}(V)$ is a disjoint union of the open sets $\tilde{U}_i:= {\varphi}^{-1}(V) \cap U_i$, and each $\tilde{U}_i$ is mapped bijectively to $V$: Let $y' \in V$. Since $y' \in {\varphi}(U_i)$ for each $i$, every $\tilde{U}_i$ contains a preimage of $y'$; as the $\tilde{U}_i$ are pairwise disjoint and all fibers have cardinality $k$, each $\tilde{U}_i$ contains exactly one preimage of $y'$, and ${\varphi}^{-1}(y') \subseteq \bigcup_{i=1}^k \tilde{U}_i$. Hence ${\varphi}^{-1}(V) = \bigsqcup_{i=1}^k \tilde{U}_i$. This proves the assertion.
\[degree of K to K/T covering\] The canonical map $\psi: K/K_J \to K/(K \cap T)K_J$ is a covering map of degree $2^{n-|J|}$.
By Lemma \[quotient basics\], $\psi$ is continuous, open and surjective.
By [@FHHK Lemma 3.20 and the discussion after Prop 3.8], the group $\tilde{T}:=(K \cap T)$ has order $2^n$. Note that one has $T_J \cap T_{I\setminus J} = \{1\}$, since the Kac–Moody group $G$ being algebraically simply connected implies $T {\cong}T_J \times T_{I \setminus J}$. Now, for $k \in K$ one has $\psi^{-1} (k\tilde{T}K_J) = \{ktK_J \mid t \in \tilde{T}\}$, and since $T_J \cap T_{I\setminus J} = \{1\}$, one has $k t_i K_J \neq k t_j K_J$ for $t_i \neq t_j \in T \cap K_{I\setminus J}$. This yields $|\psi^{-1} (k\tilde{T}K_J)| = |\{ktK_J \mid t \in \tilde{T}\}| = |\{ktK_J \mid t \in T \cap K_{I\setminus J}\}| = | T \cap K_{I\setminus J} | = | T_{I\setminus J} \cap K_{I\setminus J} |= 2^{n-|J|}$. Lemma \[covering basics\] now shows that $\psi$ is a covering map.
\[general case\]
Let ${\Pi^{\mathrm{adm}}}$ be the graph on the vertex set $V$ with edge set $$\{ \{i^\lambda,j^\lambda\} \mid i \neq j \in I,\ {\varepsilon}(i,j) = {\varepsilon}(j,i) = -1 \},$$ where ${\varepsilon}(i,j) = (-1)^{\langle\check{\alpha}_i, \alpha_j\rangle}$ encodes the parity of the corresponding Cartan matrix entry, as in Notation \[epsilon\].
An *admissible colouring* of $\Pi$ is a map $\kappa: V \to \{1,2\}$ such that
1. $\kappa(i^\lambda) = 1$ whenever there exists $j \in I \setminus\{i\}$ with ${\varepsilon}(i,j) = 1$ and $ {\varepsilon}(j,i) = -1$.
2. the restriction of $\kappa$ to any connected component of the graph ${\Pi^{\mathrm{adm}}}$ is a constant map.
Define $c(\Pi,\kappa)$ to be the number of connected components of ${\Pi^{\mathrm{adm}}}$ on which $\kappa$ takes the value 2. For a subgraph ${\Pi^{\mathrm{adm}}}_J$ of ${\Pi^{\mathrm{adm}}}$ that is a union of connected components of ${\Pi^{\mathrm{adm}}}$ let $\kappa_J$ be the corresponding restriction of $\kappa$.
\[colouring\] Let the colouring $\gamma: V \to \{r,g,b\}$ of ${\Pi^{\mathrm{adm}}}$ be defined as follows:
1. $\gamma(i^\lambda) = r$ whenever there exists $j \in I \setminus\{i\}$ with ${\varepsilon}(i,j) = 1$ and $ {\varepsilon}(j,i) = -1$.
2. $\gamma(i^\lambda) =g$ whenever for each $j \in I \setminus\{i\}$, one has $({\varepsilon}(i,j), {\varepsilon}(j,i)) \in \{(1,1), (-1,1)\}$.
3. $\gamma(i^\lambda) = b$ whenever the connected component of $i^\lambda$ in ${\Pi^{\mathrm{adm}}}$ contains more than one vertex and case (a) applies to none of the vertices in this component.
4. The restriction of $\gamma$ to any connected component of the graph ${\Pi^{\mathrm{adm}}}$ is a constant map.
We refer to the introduction for a discussion of various examples.
\[Spin remark\] Recall from the introduction that in [@GHKW Definition 16.16], the *spin group ${\mathrm{Spin}}(\Pi,\kappa)$ with respect to $\Pi$ and $\kappa$* is defined as the universal enveloping group of a particular ${\mathrm{Spin}}(2)$-amalgam $\{\tilde{G}_{ij}, \tilde{\oldphi}_{ij}^i\mid i \neq j \in I \}$ where the isomorphism type of $\tilde{G}_{ij}$ depends on the $(i,j)$- and $(j,i)$-entries of the Cartan matrix of $\Pi$ as well as the values of $\kappa$ on the corresponding vertices. The group $K(\Pi)$ can be regarded as (being uniquely isomorphic to) the universal enveloping group of an ${\mathrm{SO}}(2,{\mathbb{R}})$-amalgam $\{G_{ij}, \oldphi_{ij}^i\mid i \neq j \in I \}$ where each $\tilde{G}_{ij}$ covers $G_{ij}$ via an epimorphism $\alpha_{ij}$. By [@GHKW Lemma 16.18] there exists a canonical central extension $\rho_{\Pi,\kappa}: {\mathrm{Spin}}(\Pi, \kappa) \to K(\Pi)$ that makes the following diagram commute for all $i \neq j \in I$: $$\begin{tikzcd}
\tilde{G}_{ij} \arrow[r, "\tilde{\tau}_{ij}"] \arrow[d, " \alpha_{ij}"] & {\mathrm{Spin}}(\Pi,\kappa) \arrow[d, "\rho_{\Pi,\kappa}"] \\
G_{ij} \arrow[r, "{\tau}_{ij}"] & K(\Pi)
\end{tikzcd}$$ Here, $\tilde{\tau}_{ij}$ and ${\tau}_{ij}$ denote the respective canonical maps into the universal enveloping groups.
By [@GHKW Proposition 3.9], one has $$\ker(\rho_{\Pi,\kappa}) = \langle \tilde{\tau}_{ij}(\ker(\alpha_{ij})) \mid i \neq j \in I \rangle_{{\mathrm{Spin}}(\Pi,\kappa)}.$$
Each connected component of ${\Pi^{\mathrm{adm}}}$ that admits a vertex $i^\lambda$ with $\kappa(i^\lambda) = 2$ contributes a factor $2$ to the order of $\ker(\rho_{\Pi,\kappa})$ so that ${\mathrm{Spin}}(\Pi,\kappa)$ is a $2^{c(\Pi,\kappa)}$-fold central extension of $K(\Pi)$.
In particular, this implies that the subspace topology on $K(\Pi)$ defines a unique topology on ${\mathrm{Spin}}(\Pi,\kappa)$ that turns the extension into a covering map. The resulting group topology on ${\mathrm{Spin}}(\Pi, \kappa)$ is called the [*Kac–Peterson topology*]{} on ${\mathrm{Spin}}(\Pi, \kappa)$.
In the case of a simply-laced diagram $\Pi$, the only admissible colourings are the constant colourings $\kappa \equiv 1$ and $\kappa \equiv 2$, and we define the *spin group ${\mathrm{Spin}}(\Pi)$ with respect to $\Pi$* as ${\mathrm{Spin}}(\Pi) := {\mathrm{Spin}}(\Pi, \kappa)$ for the constant colouring $\kappa \equiv 2$.
Before turning to the general case, we will first consider the simply laced case and formulate and prove the corresponding simplified versions of the main theorems.
\[homeo Spin(Pi)/Spin(Pi\_[ij]{}) and K/K\_[ij]{}\] Let $\Pi$ be simply laced and let $\{i,j\} \subseteq I$ be the index set of an $A_2$-subdiagram of $\Pi$. Then the spaces ${\mathrm{Spin}}(\Pi)/{\mathrm{Spin}}(\Pi_{ij})$ and $K/K_{ij}$ are homeomorphic.
From [@GHKW] (exact references below) it follows that the kernel of the covering map ${\mathrm{Spin}}(\Pi) \to K$ coincides with the kernel of the covering map ${\mathrm{Spin}}(\Pi_{ij}) \to K_{ij}$ and is equal to the group $Z:= \{\pm 1_{{\mathrm{Spin}}(\Pi)}\}$ (for the definition of $-1_{{\mathrm{Spin}}(\Pi)}$, see below). This is a consequence of the following facts regarding an irreducible simply-laced diagram $\Pi$ (all referring to [@GHKW]):
- There is an epimorphism ${\mathrm{Spin}}(2) \to {\mathrm{SO}}(2,{\mathbb{R}})$ with kernel $\{\pm 1_{{\mathrm{Spin}}(2)}\}$ (see \[Theorem 6.8\]).
- In ${\mathrm{Spin}}(\Pi)$, all elements $\tilde{\tau}_{ij}(\tilde{\oldphi}_{ij}^i(-1_{{\mathrm{Spin}}(2)}))$ coincide (see \[Lemma 11.7\]).
- Let $-1_{{\mathrm{Spin}}(\Pi)}:= \tilde{\tau}_{ij}(\tilde{\oldphi}_{ij}^i(-1_{{\mathrm{Spin}}(2)}))$ for an arbitrary pair $i \neq j \in I$. Then $1_{{\mathrm{Spin}}(\Pi)} \neq -1_{{\mathrm{Spin}}(\Pi)}$ (see \[Corollary 11.16\]).
- ${\mathrm{Spin}}(\Pi)$ is a 2-fold central extension of $K(\Pi)$ (see \[Theorem 11.17\]).
Hence, the 2-fold covering map $\tilde{{\varphi}}: {\mathrm{Spin}}(\Pi) \to K(\Pi)$ induces a continuous bijective map ${\varphi}: {\mathrm{Spin}}(\Pi) / {\mathrm{Spin}}(\Pi_{ij}) \to ({\mathrm{Spin}}(\Pi) / Z) / ({\mathrm{Spin}}(\Pi_{ij}) / Z) \to K / K_{ij}$. One has a commutative diagram $$\begin{tikzcd}
{\mathrm{Spin}}(\Pi) \arrow[r, "\tilde{{\varphi}}"] \arrow[d, "\pi_1"] & K \arrow[d, "\pi_2"] \\
{\mathrm{Spin}}(\Pi)/{\mathrm{Spin}}(\Pi_{ij}) \arrow[r, "{\varphi}"] & K/K_{ij}
\end{tikzcd}.$$ Since $\tilde{{\varphi}}$ is open as a covering map and $\pi_2$ is open by Lemma \[quotient basics\], it follows that ${\varphi}$ is a homeomorphism.
\[K/K\_J is simply connected.\] Let $\Pi$ be simply laced. Then $K/K_J$ is simply connected.
$K/K_J$ is connected since $K$ is generated by connected subgroups isomorphic to ${\mathrm{SO}}(2,{\mathbb{R}})$. By Lemma \[degree of K to K/T covering\], it is therefore a connected cover of $K/(K \cap T)K_J$ of degree $2^{n-|J|}$. By Corollary \[cardinality fundamental group\] and Corollary \[homeo G/P\_J to K/(K cap T) K\_J\], $\pi_1(K/(K \cap T)K_J) {\cong}{C_2}^{n-|J|}$ has order $2^{n-|J|}$, so this cover is the universal cover and hence $K/K_J$ is simply connected.
The following proposition provides our main result in the simply laced case.
\[simplyconnected\] Let $\Pi$ be irreducible and simply laced. Then ${\mathrm{Spin}}(\Pi)$ is simply connected with respect to the Kac–Peterson topology. In particular, $\pi_1(G) {\cong}C_2$.
By [@Hus 4.2.4], for a closed subgroup $H$ of a topological group $G$, the projection $p: G \to G/H$ is a principal $H$-bundle. By Lemma \[Palais\], this bundle is locally trivial if $H$ is a (closed) Lie group (note that, by [@HR Theorem 5.11], every locally compact subgroup of a topological group is closed). Since locally trivial bundles admit local cross sections, [@Ste1 Corollary in Section 7.4] implies that, if $H$ is a closed Lie group, then $p: G \to G/H$ is a fibre bundle with fibre $H$. This yields a locally trivial fibre bundle $$\begin{tikzcd}
{\mathrm{Spin}}(\Pi_{ij}) \arrow[r] & {\mathrm{Spin}}(\Pi) \arrow[r] & {\mathrm{Spin}}(\Pi)/{\mathrm{Spin}}(\Pi_{ij}).
\end{tikzcd}$$ By [@Hatcher Chapter 4], this yields the homotopy long exact sequence $$\begin{aligned}
\pi_4({\mathrm{Spin}}(\Pi)/{\mathrm{Spin}}(\Pi_{ij})) & \to & \pi_3({\mathrm{Spin}}(\Pi_{ij})) \to \pi_3({\mathrm{Spin}}(\Pi)) \to
\pi_3({\mathrm{Spin}}(\Pi)/{\mathrm{Spin}}(\Pi_{ij})) \notag \\ & \to & \pi_2({\mathrm{Spin}}(\Pi_{ij})) \to \pi_2({\mathrm{Spin}}(\Pi)) \to
\pi_2({\mathrm{Spin}}(\Pi)/{\mathrm{Spin}}(\Pi_{ij})) \notag \\ & \to & \pi_1({\mathrm{Spin}}(\Pi_{ij})) \rightarrow \pi_1({\mathrm{Spin}}(\Pi)) \rightarrow \pi_1({\mathrm{Spin}}(\Pi)/{\mathrm{Spin}}(\Pi_{ij})) \label{homotopylong}\end{aligned}$$ from which one extracts the exact sequence $$\{1\} = \pi_1({\mathrm{Spin}}(\Pi_{ij})) \rightarrow \pi_1({\mathrm{Spin}}(\Pi)) \rightarrow \pi_1({\mathrm{Spin}}(\Pi)/{\mathrm{Spin}}(\Pi_{ij})).$$ By Corollary \[K/K\_J is simply connected.\] and Lemma \[homeo Spin(Pi)/Spin(Pi\_[ij]{}) and K/K\_[ij]{}\] one has $ \pi_1({\mathrm{Spin}}(\Pi)/{\mathrm{Spin}}(\Pi_{ij})) {\cong}\pi_1(K/K_{ij}) = \{1\}$ and so by exactness $\pi_1({\mathrm{Spin}}(\Pi)) = \{ 1 \}$.
The second assertion follows from the fact that $\pi_1(G) {\cong}\pi_1(K)$ by Corollary \[FundamentalGroups2\] and the fact that ${\mathrm{Spin}}(\Pi)$ is a 2-fold central extension of $K$ by [@GHKW Theorem 11.17].
We will now return to the case of a general irreducible Dynkin diagram $\Pi$.
\[HJ\] For a subset $J \subseteq I$ let $$H_J:= \left\langle x_i; \quad i \in J \mid x_ix_j^{{\varepsilon}(i,j) } = x_jx_i; \quad i,j \in J\right\rangle.$$
\[connected component subgroups\] Let $J \subseteq I$ be the index set of a connected component ${\Pi^{\mathrm{adm}}}_J$ of ${\Pi^{\mathrm{adm}}}$. Then the following hold:
1. If ${\Pi^{\mathrm{adm}}}_J$ has colour $r$, then $H_J {\cong}C_2^{|J|}$.
2. If ${\Pi^{\mathrm{adm}}}_J$ has colour $g$, then $|J| = 1$ and $H_J {\cong}{\mathbb{Z}}$.
3. If ${\Pi^{\mathrm{adm}}}_J$ has colour $b$, then $|H_J| = 2^{|J|+1}$.
(a): If ${\Pi^{\mathrm{adm}}}_J$ has colour $r$, then there exist $i \in J, j \in I \setminus\{i\}$ with ${\varepsilon}(i,j) =1$ and ${\varepsilon}(j,i) = -1$. This implies $x_ix_j = x_j x_i$ and $x_j x_i^{-1} = x_i x_j$ which yields $x_i^2 = 1$. Now, if $\{i^\lambda, k^\lambda\}$ is an edge in ${\Pi^{\mathrm{adm}}}$, then $x_i x_k^{-1}x_i^{-1} x_k^{-1} = 1 = x_k x_i^{-1} x_k^{-1} x_i^{-1}$. Multiplying these expressions shows that $x_i^2 = 1$ implies $x_k^2 = 1$. Since ${\Pi^{\mathrm{adm}}}_J$ is connected, this yields $x_k^2 = 1$ for each $k \in J$. Commutativity then follows from the relations of $H_J$.
(b): By definition, two vertices $i^\lambda$ and $j^\lambda$ of ${\Pi^{\mathrm{adm}}}$ are adjacent if and only if ${\varepsilon}(i,j) = {\varepsilon}(j,i) = -1$, and by definition a vertex $i^\lambda$ has colour $g$ only if for each vertex $j^\lambda$ this is not the case, i.e., one of ${\varepsilon}(i,j)$ and ${\varepsilon}(j,i)$ equals 1. Vertices of colour $g$ are therefore isolated in ${\Pi^{\mathrm{adm}}}$, whence $|J| = 1$; since ${\varepsilon}(i,i) = 1$ renders the single relation trivial, one has $H_J {\cong}{\mathbb{Z}}$.
(c): Let $\Pi_J^{\mathrm{sl}}$ be the simply laced Dynkin diagram with vertex set $J^\lambda$ and edge set $\{ \{i^\lambda,j^\lambda\} \mid i \neq j \in J,\ \{i^\lambda,j^\lambda\} \text{ an edge in }{\Pi^{\mathrm{adm}}}\}$.
Let $\tilde{T}:= K(\Pi_J^{\mathrm{sl}}) \cap T(\Pi_J^{\mathrm{sl}})$ where $T(\Pi_J^{\mathrm{sl}})$ denotes the standard maximal torus of $G(\Pi_J^{\mathrm{sl}})$. Then by Lemma \[degree of K to K/T covering\] and Proposition \[simplyconnected\], ${\mathrm{Spin}}(\Pi_J^{\mathrm{sl}}) \to K(\Pi_J^{\mathrm{sl}}) \to K(\Pi_J^{\mathrm{sl}})/\tilde{T}$ is a universal covering map where $K(\Pi_J^{\mathrm{sl}}) \to K(\Pi_J^{\mathrm{sl}})/\tilde{T}$ has degree $2^{|J|}$ and ${\mathrm{Spin}}(\Pi_J^{\mathrm{sl}}) \to K(\Pi_J^{\mathrm{sl}})$ has degree $2$ according to [@GHKW Theorem 11.17]. Since $\pi_1(K(\Pi_J^{\mathrm{sl}})/\tilde{T}) {\cong}H_J$ by Theorem \[fundamental group\], this implies $|H_J| = 2^{|J|+1}$.
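The order claim in (c) can be made concrete in the smallest case $|J| = 2$, i.e. a single edge of ${\Pi^{\mathrm{adm}}}$: the relators $x y^{-1} x^{-1} y^{-1}$ and $y x^{-1} y^{-1} x^{-1}$ are then satisfied by the quaternion group $Q_8$, whose order $8 = 2^{|J|+1}$ matches the count above. A small computational sketch (ours, plain Python; the quaternion realization is our choice of illustration):

```python
# The quaternion group Q_8 = {±1, ±i, ±j, ±k} satisfies the relators of H_J
# for a colour-b A_2 component with x = i, y = j, and has order 8 = 2^{|J|+1}.
# Quaternions are modelled as 4-tuples (a, b, c, d) = a + bi + cj + dk.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(p):  # the inverse of a unit quaternion is its conjugate
    a, b, c, d = p
    return (a, -b, -c, -d)

one = (1, 0, 0, 0)
x = (0, 1, 0, 0)  # i
y = (0, 0, 1, 0)  # j

# Both relators evaluate to 1 in Q_8:
r1 = qmul(qmul(x, qinv(y)), qmul(qinv(x), qinv(y)))
r2 = qmul(qmul(y, qinv(x)), qmul(qinv(y), qinv(x)))
assert r1 == one and r2 == one

# The closure of {x, y} under multiplication has exactly 8 elements:
G = {one}
frontier = {x, y}
while frontier:
    G |= frontier
    frontier = {qmul(g, h) for g in G for h in G} - G
assert len(G) == 8  # = 2^{|J|+1}
```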
\[fundamental group as product\] Let $J_1 \sqcup \dots \sqcup J_k = I$ be the index sets of the connected components of ${\Pi^{\mathrm{adm}}}$. If the Bruhat decomposition satisfies the conclusion of Proposition \[bruhat decomp is cw decomp\], then $$\pi_1(G/B) {\cong}H_{J_1} \times \dots \times H_{J_k}.$$
By Theorem \[fundamental group\], $\pi_1(G/B) {\cong}H_I$ where $$H_I = \left\langle x_i; \quad i \in I \mid x_ix_j^{{\varepsilon}(i,j) } = x_jx_i; \quad i,j \in I\right\rangle$$ as defined in \[HJ\]. For $J \subseteq I$, let $$\label{R_J}
R_J:= \{x_ix_j^{{\varepsilon}(i,j)}x_i^{-1} x_j^{-1} \mid i,j \in J\},$$ the set of relators of $H_J$. Let $$R^c:= \bigcup_{\substack{i^\lambda, j^\lambda \text{ in different}\\\text{conn. components}}}\{x_ix_j x_i^{-1} x_j^{-1}\},$$ the set of commutators of pairs of generators from different connected components. Then $$H_{J_1} \times \dots \times H_{J_k} {\cong}\left \langle x_i; \quad i \in I \mid \bigcup_{l=1}^k R_{J_l} \cup R^c \right\rangle =: H.$$ Let $\pi_{H_I}$ and $\pi_{H}$ be the canonical homomorphisms from the free group $\langle x_i; \quad i \in I\rangle$ to $H_I$ and $H$, respectively. It suffices to show that $\bigcup_{l=1}^k R_{J_l} \cup R^c \subseteq \ker \pi_{H_I}$ and $R_{I} \subseteq \ker \pi_H$. It is clear that a relator $x_ix_j^{{\varepsilon}(i,j)}x_i^{-1} x_j^{-1} \in R_I$ with $i^\lambda$ and $j^\lambda$ in a common connected component is contained in $\bigcup_{l=1}^k R_{J_l} \subseteq \ker \pi_H$, so let $x_ix_j^{{\varepsilon}(i,j)}x_i^{-1} x_j^{-1} \in R_I$ with $i^\lambda$ and $j^\lambda$ in different connected components. Then one has $({\varepsilon}(i,j), {\varepsilon}(j,i)) \in \{ (1,1), (1,-1), (-1,1)\}$. If ${\varepsilon}(i,j) = 1$, then $x_ix_j^{{\varepsilon}(i,j)}x_i^{-1} x_j^{-1} \in R^c \subseteq \ker \pi_H$, so let ${\varepsilon}(i,j) = -1$ and ${\varepsilon}(j,i) = 1$. Then $j^\lambda$ is contained in a connected component ${\Pi^{\mathrm{adm}}}_{J_m}$ of colour $r$, and by Lemma \[connected component subgroups\], $\langle x_l; \quad l \in J_m \mid R_{J_m} \rangle = H_{J_m} {\cong}C_2^{|J_m|}$.
This implies that $x_j$ has order $2$ in ${H_{J_m}}$, hence $x_j^2 \in \langle \langle R_{J_m} \rangle \rangle_{\langle x_i; i \in I\rangle}$, the normal closure of $R_{J_m}$ in the free group.
Since $\langle \langle R_{J_m} \rangle \rangle_{\langle x_i; i \in I\rangle} \subseteq \ker \pi_H$, one obtains $x_j^2 \in \ker \pi_H$. Since $x_j x_i x_j^{-1} x_i^{-1} \in R^c \subseteq \ker \pi_H$ and ${\varepsilon}(i,j) = -1$, one therefore has $$\pi_H(x_ix_j^{{\varepsilon}(i,j)}x_i^{-1} x_j^{-1}) = \pi_H (x_j x_i x_j^{-1} x_i^{-1} \cdot x_ix_j^{{\varepsilon}(i,j)}x_i^{-1} x_j^{-1} ) = 1_H.$$ Conversely, it is clear that $\bigcup_{l=1}^k R_{J_l} \subseteq R_{I} \subseteq \ker \pi_{H_I}$, so let $x_i x_j x_i^{-1} x_j^{-1} \in R^c$ with $i^\lambda$ and $j^\lambda$ in different connected components. As above, we can assume that ${\varepsilon}(i,j) = -1$ and ${\varepsilon}(j,i) = 1$. Since $x_j x_i^{{\varepsilon}(j,i)} x_j^{-1} x_i^{-1} \in \ker \pi_{H_I}$, this implies $$\pi_{H_I}(x_i x_j x_i^{-1} x_j^{-1}) = \pi_{H_I}(x_j x_i^{{\varepsilon}(j,i)} x_j^{-1} x_i^{-1} \cdot x_i x_j x_i^{-1} x_j^{-1}) = 1_{H_I}.$$ This proves the assertion.
\[fundamental group of K\] Let $\Pi$ be an irreducible Dynkin diagram such that $G(\Pi)$ satisfies the conclusions of Proposition \[bruhat decomp is cw decomp\] and of Theorem \[FundamentalGroups2\]. Let $n(g)$ and $n(b)$ be the number of connected components of ${\Pi^{\mathrm{adm}}}$ of colour $g$ and $b$, respectively. Then $$\pi_1(G(\Pi)) {\cong}{\mathbb{Z}}^{n(g)} \times C_2^{n(b)}.$$ In particular, this statement holds in the symmetrizable case.
By Theorem \[FundamentalGroups2\], $\pi_1(G) {\cong}\pi_1(K)$, so it suffices to prove that $\pi_1(K)$ is of the given isomorphism type; note that Theorem \[FundamentalGroups2\] has only been established in the symmetrizable case. Let $J \subseteq I$. The diagram $$\begin{tikzcd}
K \arrow[r, "{\varphi}"] \arrow[d, "p"] & K/K_J \arrow[d, "q"] \\
K/(K \cap T)\arrow[r, "\psi"] & K/(K \cap T)K_J \end{tikzcd},$$ with all maps being the respective canonical maps, commutes. Since the maps are continuous by Lemma \[quotient basics\], one obtains a commutative diagram of induced homomorphisms $$\begin{tikzcd}
\pi_1(K) \arrow[r, "{\varphi}_*"] \arrow[d, "p_*"] & \pi_1(K/K_J) \arrow[d, "q_*"] \\
\pi_1(K/(K \cap T))\arrow[r, "\psi_*"]& \pi_1(K/(K \cap T)K_J) \end{tikzcd},$$ where $p_*$ and $q_*$ are injective. By Theorem \[fundamental group\] and Lemma \[homeo K/(K cap P\_J) to G/P\_J\], $\pi_1(K/(K \cap T))$ and $\pi_1(K/(K \cap T)K_J)$ can be identified with $H_I = \langle x_i; \quad i \in I \mid R_I\rangle$ and $\langle x_i; \quad i \in I \mid R_I \cup \{x_j \mid j \in J\}\rangle$, respectively ($R_I$ as in (\[R\_J\]) in the above proof), where $\psi_*$ corresponds to the canonical homomorphism between these groups as the proof of Theorem \[fundamental group\] shows.
For the index set $J_m$ of a connected component of ${\Pi^{\mathrm{adm}}}$, let $\bar{J}_m:= I \setminus J_m$. Then by Proposition \[fundamental group as product\], $$\left\langle x_i; \quad i \in I \mid R_I \cup \{x_j \mid j \in \bar{J}_m\}\right \rangle {\cong}\left(\prod_{i=1}^k{H_{J_i}}\middle/\prod_{\substack{i=1\\i \neq m}}^k{H_{J_i}}\right) {\cong}H_{J_m}.$$ Summing up, one obtains a commutative diagram $$\begin{tikzcd}
\pi_1(K) \arrow[r, "{\varphi}_*"] \arrow[d, "p_*"] & \pi_1(K/K_{\bar{J}_m}) \arrow[d, "q_*"] \\
\prod_{i=1}^k{H_{J_i}} \arrow[r, "\pi_m"]& H_{J_m} \end{tikzcd},$$ having replaced $p_*$ and $q_*$ from above with the corresponding monomorphisms.
By Lemma \[degree of K to K/T covering\], the covering $K/K_{\bar{J}_m} \to K/K_{\bar{J}_m}(K \cap T)$ has degree $2^{n-|\bar{J}_m|} = 2^{|J_m|}.$ This implies that $\tilde{H}_m:= q_*(\pi_1(K/K_{\bar{J}_m}))$ is a subgroup of $H_{J_m}$ of index $2^{|J_m|}$. The isomorphism type of $\tilde{H}_m$ is uniquely determined by this index and Lemma \[connected component subgroups\]: One has $$\tilde{H}_m {\cong}\begin{cases} \{1\}, & \text{if }{\Pi^{\mathrm{adm}}}_{J_m}\text{ has colour }r,\\
2{\mathbb{Z}}{\cong}{\mathbb{Z}}, & \text{if }{\Pi^{\mathrm{adm}}}_{J_m}\text{ has colour }g,\\
C_2, & \text{if }{\Pi^{\mathrm{adm}}}_{J_m}\text{ has colour }b. \end{cases}$$ Again by Lemma \[degree of K to K/T covering\], the covering $K \to K/(K \cap T)$ has degree $2^n$, so $p_*(\pi_1(K))$ is a subgroup of index $2^n$ of $\prod_{i=1}^k{H_{J_i}}$. The commutative diagram above implies that $\pi_1(K) {\cong}p_*(\pi_1(K)) \subseteq \pi_m^{-1}(\tilde{H}_m)$. Since this holds for the index set of every connected component of ${\Pi^{\mathrm{adm}}}$, one has $p_*(\pi_1(K)) \subseteq \tilde{H}_1 \times \dots \times \tilde{H}_k$. But the latter is a subgroup of index $2^{|J_1|}\cdot \dots \cdot 2^{|J_k|} = 2^n$ of $\prod_{i=1}^k{H_{J_i}}$, so equality holds. This proves the assertion.
\[fundamental group of Spin\] Let $\Pi$ be an irreducible Dynkin diagram such that $G(\Pi)$ satisfies the conclusion of Proposition \[bruhat decomp is cw decomp\]. Let $n(g)$ be the number of connected components of ${\Pi^{\mathrm{adm}}}$ of colour $g$. Let $n(b,\kappa)$ be the number of connected components of ${\Pi^{\mathrm{adm}}}$ on which $\kappa$ takes the value 1 and which have colour $b$. Then $$\pi_1({\mathrm{Spin}}(\Pi,\kappa)) {\cong}{\mathbb{Z}}^{n(g)} \times C_2^{n(b,\kappa)}.$$ In particular, this statement holds in the two-spherical and the symmetrizable case.
By [@GHKW Theorem 17.1], the map $ \rho_{\Pi,\kappa}:{\mathrm{Spin}}(\Pi, \kappa) \to K$ is a $2^{c(\Pi, \kappa)}$-fold central extension. Let $J$ be the index set of a connected component of ${\Pi^{\mathrm{adm}}}$ and let $\bar{J}:= I \setminus J$. Let $U_{\bar{J}} := \langle \tilde{G}_{ij} \mid i \neq j \in \bar{J} \rangle_{{\mathrm{Spin}}(\Pi,\kappa)}$.
Since $\rho_{\Pi,\kappa}(U_{\bar{J}}) \subseteq K_{\bar{J}}$, one has a continuous induced map $\rho_{\Pi,\kappa}^J: {\mathrm{Spin}}(\Pi,\kappa) / U_{\bar{J}} \to K/K_{\bar{J}}$ making the following diagram commute, where $\tilde{{\varphi}}$ and ${\varphi}$ denote the respective canonical maps: $$\begin{tikzcd}
{\mathrm{Spin}}(\Pi, \kappa) \arrow[r, "\tilde{{\varphi}}"] \arrow[d, " \rho_{\Pi,\kappa}"] & {\mathrm{Spin}}(\Pi,\kappa) /U_{\bar{J}} \arrow[d, "\rho_{\Pi,\kappa}^J"] \\
K \arrow[r, "{\varphi}"] & K/K_{\bar{J}}
\end{tikzcd}$$
Each fiber of $\rho_{\Pi,\kappa}^J$ has cardinality $$\begin{aligned}
|\{xU_{\bar{J}} \mid x \in \ker \rho_{\Pi,\kappa}\}| &= |\ker(\rho_{\Pi,\kappa}) /(U_{\bar{J}} \cap \ker(\rho_{\Pi,\kappa}))|\\
& = 2^{c(\Pi, \kappa) - c(\Pi_{\bar{J}},\kappa_{\bar{J}}) }\text{ by Remark \ref{Spin remark}.}\end{aligned}$$ Since $\rho_{\Pi,\kappa}$ is open as a covering map and ${\varphi}$ is open by Lemma \[quotient basics\], it follows from Lemma \[covering basics\] that $\rho_{\Pi,\kappa}^J$ is a covering map.
From here the proof is analogous to the proof of Theorem \[fundamental group of K\], after extending the commutative diagram at the beginning of the latter proof: $$\begin{tikzcd}
{\mathrm{Spin}}(\Pi, \kappa) \arrow[r, "\tilde{{\varphi}}"] \arrow[d, " \rho_{\Pi,\kappa}"] & {\mathrm{Spin}}(\Pi,\kappa) / U_{\bar{J}} \arrow[d, "\rho_{\Pi,\kappa}^J"] \\
K \arrow[r, "{\varphi}"] \arrow[d, "p"] & K/K_{\bar{J}} \arrow[d, "q"] \\
K/(K \cap T)\arrow[r, "\psi"] & K/(K \cap T)K_{\bar{J}} \end{tikzcd}.$$ One obtains that $\pi_1({\mathrm{Spin}}(\Pi, \kappa)) {\cong}\prod_{i = 1}^k H'_{J_i}$ where each $H'_{J_m}$ is a subgroup of index $2^{c(\Pi, \kappa) - c(\Pi_{\bar{J}_m},\kappa_{\bar{J}_m})}$ of $$\tilde{H}_m {\cong}\begin{cases} \{1\}, & \text{if }{\Pi^{\mathrm{adm}}}_{J_m}\text{ has colour }r,\\
2{\mathbb{Z}}{\cong}{\mathbb{Z}}, & \text{if }{\Pi^{\mathrm{adm}}}_{J_m}\text{ has colour }g,\\
C_2, & \text{if }{\Pi^{\mathrm{adm}}}_{J_m}\text{ has colour }b. \end{cases}$$ Since ${\Pi^{\mathrm{adm}}}_{\bar{J}_m}$ is the union of all connected components except ${\Pi^{\mathrm{adm}}}_{J_m}$, one has $c(\Pi, \kappa) - c(\Pi_{\bar{J}_m},\kappa_{\bar{J}_m}) \in \{0,1\}$; the difference is $0$ if $\kappa$ is constant $1$ on ${\Pi^{\mathrm{adm}}}_{J_m}$ and $1$ if $\kappa$ is constant $2$ on ${\Pi^{\mathrm{adm}}}_{J_m}$. This implies $$H'_{J_m} {\cong}\begin{cases} \{1\}, & \text{if }{\Pi^{\mathrm{adm}}}_{J_m}\text{ has colour }r,\\
{\mathbb{Z}}, & \text{if }{\Pi^{\mathrm{adm}}}_{J_m}\text{ has colour }g,\\
C_2, & \text{if }{\Pi^{\mathrm{adm}}}_{J_m}\text{ has colour }b\text{ and }\kappa\equiv 1\text{ on }{\Pi^{\mathrm{adm}}}_{J_m},\\
\{1\}, & \text{if }{\Pi^{\mathrm{adm}}}_{J_m}\text{ has colour }b\text{ and }\kappa\equiv 2\text{ on }{\Pi^{\mathrm{adm}}}_{J_m}.
\end{cases}$$ This proves the assertion.
Now all theorems from the introduction have been proved.
Maximal unipotent subgroups of Kac–Moody groups and applications to Kac–Moody symmetric spaces (by Tobias Hartnick and Ralf Köhl)
=================================================================================================================================
Throughout this appendix we fix a symmetrizable generalized Cartan matrix ${\bf A}$ with underlying diagram $\Pi$. We consider the corresponding algebraically simply-connected semisimple split real Kac–Moody group $G_{\mathbb{R}}({\mathbf{A}})$ as given by Definition \[Kac–Moody group\] and its commutator subgroup $G:=G(\Pi) = [G_{\mathbb{R}}({\mathbf{A}}),G_{\mathbb{R}}({\mathbf{A}})]$. As in Section \[Split-real Kac–Moody groups\] we also denote by $K_{\mathbb{R}}({\mathbf{A}}) \leq G_{\mathbb{R}}({\mathbf{A}})$ the fixed point subgroup of the Cartan–Chevalley involution $\theta$ and set $K := K(\Pi) = K_{\mathbb{R}}({\mathbf{A}}) \cap G$. We equip all of these groups with the restrictions of the Kac–Peterson topology.
The goal of this appendix is to relate the topology of $G$ to the topology of $K$. Our main result (see Theorem \[FundamentalGroups2\] below) asserts that the inclusion $K \hookrightarrow G$ is a weak homotopy equivalence. This implies in particular that $\pi_1(G) \cong \pi_1(K)$ and thus allows the computation of $\pi_1(G)$ by the methods presented in the main part of the article.
In the spherical case the subgroup $K < G$ is even a deformation retract and hence the inclusion $K \hookrightarrow G$ is a homotopy equivalence, as a consequence of the topological Iwasawa decomposition of $G$. This decomposition also implies that the associated Riemannian symmetric space $G/K$ is contractible.
While real Kac–Moody groups also possess an Iwasawa decomposition, it is currently unknown whether this decomposition is topological. To establish our main result we thus have to work with a certain central quotient $\overline{G}$ of $G$, for which the topological Iwasawa decomposition was established in [@FHHK]. We will show that the image $\overline{K}$ of $K$ in $\overline{G}$ is a strong deformation retract and that the reduced Kac–Moody symmetric space $\overline{G}/\overline{K}$ is contractible. Since the finite-dimensional central extension $G \to \overline{G}$ is a Serre fibration by a classical result of Palais [@Pal], this will allow us to deduce the desired result about $G$ and $K$.
The topological Iwasawa decomposition
-------------------------------------
Let us denote by $\operatorname{Ad}: {G}_{\mathbb{R}}({\mathbf{A}}) \to \operatorname{Aut}({\mathfrak{g}}_{\mathbb{R}}({\mathbf{A}}))$ and $\operatorname{Ad}: G(\Pi) \to \operatorname{Aut}({\mathfrak{g}}_{\mathbb{R}}'({\mathbf{A}}))$ the adjoint representations of $ {G}_{\mathbb{R}}({\mathbf{A}})$ and $G = G(\Pi)$ respectively. We recall from [@FHHK] that the quotient map $G \to {\rm Ad}(G)$ factors as $$\label{AdjointSemisimpleQuotient}
G \xrightarrow{p_1} \overline{G} \xrightarrow{p_2} {\rm Ad}(G),$$ where $\overline{G}$ is uniquely determined by the fact that $\overline{T} := p_1(T) \cong ({\mathbb{R}}^\times)^{{\rm rk}({\mathbf{A}})}$ is a torus and $p_2$ has finite kernel. The group $\overline{G}$ is referred to as the *semisimple adjoint quotient* of $G$, and we equip it with the quotient topology with respect to the Kac–Peterson topology on $G$. We denote by $U^+$, respectively $U^-$, the positive, respectively negative, maximal unipotent subgroup of $G(\Pi)$ as introduced in Section \[Split-real Kac–Moody groups\]. Also recall from Section \[Split-real Kac–Moody groups\] that $A_{\mathbb{R}}:= \exp({\mathfrak{h}}_{\mathbb{R}}({\mathbf{A}})) \leq G_{\mathbb{R}}({\mathbf{A}})$ and set $A:= A_{\mathbb{R}}\cap G$.
Multiplication induces continuous bijections $$K_{\mathbb{R}}({\mathbf{A}}) \times A_{\mathbb{R}}\times U^+ \to G_{\mathbb{R}}({\mathbf{A}}) \quad \text{and} \quad K \times A \times U^+ \to G.$$
A more refined statement has been established in [@FHHK] for the semisimple adjoint quotient $\overline{G}$ of $G$. To state this result, denote by $$G \xrightarrow{p_1} \overline{G} \xrightarrow{p_2} {\rm Ad}(G)$$ the canonical quotient maps from \eqref{AdjointSemisimpleQuotient} and set $\overline{K} := p_1(K)$, $\overline{T} := p_1(T) \cong ({\mathbb{R}}^\times)^{{\rm rk}(\mathbf A)}$, $\overline{A} := p_1(A) = \overline{T}^o$ and $\overline{U^+} := p_1(U^+)$. Equip these groups with their respective quotient topologies and note that $p_1$ restricts to a bijection between $U^+$ and $\overline{U^+}$.
\[TopIwasawa\] Multiplication induces homeomorphisms $$\overline{K} \times \overline{A} \times \overline{U^+} \to \overline{G} \quad \text{and} \quad \overline{U^+} \times \overline{A} \times\overline{K} \to \overline{G}.$$
Since $\overline{A}$ is contractible, in order to show that $\overline{K}$ is a deformation retract of $\overline{G}$ it will suffice to show that $\overline{U^+}$ is contractible. We thus need to understand the topology induced by the Kac–Peterson topology on the standard unipotent subgroups.
The Kac–Peterson topology on $U^{\pm}$
--------------------------------
We now turn to the study of the restriction of the Kac–Peterson topology to the standard maximal unipotent subgroups $U^-$ and $U^+$. Recall from Section \[Split-real Kac–Moody groups\] that the Weyl group $W$ is a Coxeter group, so elements of $W$ can be represented by reduced words in the generators $s_{1}, \dots, s_{r}$. Given such a reduced word $w = (s_{i_1}, \dots, s_{i_n})$ in $W$ with corresponding simple roots $\alpha_{i_1}, \dots, \alpha_{i_n}$ we define positive roots $\beta_1, \dots, \beta_n$ by $$\label{betas}
\beta_1 := \alpha_{i_1}, \quad \beta_2 := s_{i_1}(\alpha_{i_2}), \quad \dots, \quad \beta_n := s_{i_1}s_{i_2} \cdots s_{i_{n-1}}(\alpha_{i_n}).$$ We then set $U_w := U_{\beta_1}\cdots U_{\beta_n} \subset U^+$ and define a map $$\mu_w: U_{\beta_1} \times \dots \times U_{\beta_n} \to U_w, \quad (x_1, \dots, x_n) \mapsto x_1 \cdots x_n.$$ It is established in [@CapraceRemy Section 5.5, Lemma] that the map $\mu_w$ is a bijection for every reduced word $w$, and that its image $U_w$ depends only on the Weyl group element represented by $w$, but not on the chosen reduced expression. Since ${G}_{\mathbb{R}}({\mathbf{A}})$ is a topological group, the bijection $\mu_w$ is continuous. In fact, one can show that $\mu_w$ is a homeomorphism. A proof of this fact was sketched in [@HKM Lemma 7.25]; since openness of the maps $\mu_w$ is crucial for everything that follows, we fill in the details of this sketch here:
For every reduced word $w$ the map $\mu_w$ is a homeomorphism onto its image.
We argue by induction on the length $m$ of $w$ and observe that the case $m=1$ holds by definition. Since the linear functionals $\alpha_1, \dots, \alpha_r$ are linearly independent, there exists an element $X \in \mathfrak a$ such that $\alpha_{i_1}(X) = 0$ and $\alpha_{j}(X) < 0$ for all $j \in \{1, \dots, \widehat{i_1}, \dots, r\}$. It follows that $\beta_1(X) = \alpha_{i_1}(X) = 0$ and $\beta_k(X) < 0$ for all $k =2, \dots, m$. Indeed, since the word $w$ is reduced, none of the positive real roots $\beta_2, \dots, \beta_m$ equals $\alpha_{i_1}$, and since $n\alpha_{i_1}$ is not a root for any $n \geq 2$ (cf. [@Kac2 Proposition 5.1]), each of them contains at least one other positive simple root as a summand. Now for $j\in \{1, \dots, m\}$ and $Y \in \mathfrak g_{\beta_j}$ we have ${\rm ad}(X)(Y) = \beta_j(X)(Y)$, and thus $$\lim_{t \to \infty}{\rm Ad}(\exp(tX))(Y) = \left\{\begin{array}{ll}Y, & j = 1,\\ 0, & j>1. \end{array}\right.$$ We conclude that if $x_j \in U_{\beta_j}$, then $$\lim_{t \to \infty} \exp(tX) (x_1 \dots x_m) \exp(-tX) = x_1,$$ where the convergence is uniform on compacta. This shows that the map $$\pi_1: U_w \to U_{\beta_1}, \quad x_1 \cdots x_m \mapsto x_1$$ is continuous, and hence the map $$\label{InductionStepUw}
U_w \to U_{\beta_1} \times U_{\beta_2} \cdots U_{\beta_m}, \quad x_1 \cdots x_m \mapsto (x_1, x_2\cdots x_m)$$ is continuous. Now let $w' = (s_{i_2}, \dots, s_{i_m})$ and let $\beta_2' := s_{i_1}(\beta_2)$, …, $\beta_m' := s_{i_1}(\beta_m)$. Then by Axiom (RGD2) of an RGD system there exists an element $g \in G_{\mathbb{R}}({\mathbf{A}})$ such that $gU_{\beta_j} g^{-1} = U_{\beta_j'}$ for all $j=2, \dots, m$, and by induction hypothesis we have a homeomorphism $$\mu_{w'}: U_{\beta_2'} \times \dots \times U_{\beta_m'} \to U_{w'}, \quad (x_{2}, \dots, x_m) \mapsto x_{2} \cdots x_m.$$ Conjugating the inverse of this homeomorphism by $g^{-1}$ we obtain a homeomorphism $$U_{\beta_2} \cdots U_{\beta_m} \to U_{\beta_2} \times \dots \times U_{\beta_m}.$$ Composing this homeomorphism with the map \eqref{InductionStepUw} now provides the desired continuous inverse to $\mu_w$.
To describe the topology on $U^+$ we recall that there exist several distinct but related partial orders on $W$ which in different places in the literature are referred to as the *Bruhat order* on $W$. In the sequel we will consider the following version; here $\ell$ denotes the length function with respect to the generating set $\{r_{1}, \dots, r_n\}$.
The *weak right Bruhat order* on $W$ is the partial order $\leq_w$ defined as $$w_1 \leq_w w_2 \quad :\Longleftrightarrow \quad \ell(w_2) = \ell(w_1) + \ell(w_1^{-1}w_2).\quad (w_1, w_2 \in W)$$
According to [@CapraceRemy p. 44] we have $w_1 \leq_w w_2$ if and only if there exists a reduced word $(r_{i_1}, \dots, r_{i_{\ell(w_2)}})$ for $w_2$ such that $w_1 = r_{i_1} \cdots r_{i_{\ell(w_1)}}$.
Recall that for the strong Bruhat order $\leq$ one has $w_1 \leq w_2$ if there exists a reduced word $(r_{i_1}, \dots, r_{i_m})$ for $w_2$ and a reduced word $(r_{j_1}, \dots, r_{j_l})$ for $w_1$ such that $(r_{j_1}, \dots, r_{j_l})$ is a substring of $(r_{i_1}, \dots, r_{i_m})$ (not necessarily consecutive). By definition, $$w_1 \leq_w w_2 \quad \Longrightarrow \quad w_1 \leq w_2,$$ but the converse is not true. An important difference between the weak right Bruhat order and the strong Bruhat order is that $(W, \leq)$ contains a cofinal chain, i.e., a totally ordered subset $T\subset W$ such that for every $w \in W$ there exists $t \in T$ such that $w \leq t$, whereas for the weak right Bruhat order, such a cofinal chain does not exist. In fact, given $w_1, w_2 \in W$ there will in general not exist an element $w_3 \in W$ with $w_1 \leq_w w_3$ and $w_2 \leq_w w_3$.
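The following small example, added here for illustration (standard Coxeter combinatorics, not taken from the references), separates the two orders. In $W = \langle r_1, r_2 \mid r_1^2 = r_2^2 = (r_1r_2)^3 = 1\rangle \cong S_3$ one has $r_2 \leq r_1r_2$ in the strong Bruhat order, since $(r_2)$ is a substring of the reduced word $(r_1, r_2)$; but $$\ell(r_2) + \ell(r_2^{-1}\cdot r_1r_2) = 1 + \ell(r_2r_1r_2) = 1 + 3 = 4 \neq 2 = \ell(r_1r_2),$$ so $r_2 \not\leq_w r_1r_2$. Non-directedness of $\leq_w$ can be seen in the infinite dihedral group $W = \langle r_1, r_2 \mid r_1^2 = r_2^2 = 1\rangle$: every element has a unique reduced word, and $r_i \leq_w w_3$ forces this word to start with $r_i$, so no element of $W$ lies above both $r_1$ and $r_2$ in the weak right Bruhat order.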
Note that if $w_1 \leq_w w_2$, then we can choose a reduced word $(r_{i_1}, \dots, r_{i_{\ell(w_2)}})$ for $w_2$ such that $w_1 = r_{i_1} \cdots r_{i_{\ell(w_1)}}$. Thus if we define $\beta_1, \dots, \beta_{\ell(w_2)}$ as above then we have a commuting diagram $$\begin{xy}\xymatrix{
U_{\beta_1} \times \dots \times U_{\beta_{\ell(w_1)}} \ar[rrr] \ar[d]&&&U_{\beta_1} \times \dots \times U_{\beta_{\ell(w_2)}}\ar[d]\\
U_{w_1} \ar[rrr] &&& U_{w_2},
}\end{xy}$$ where the horizontal maps are inclusions, and the vertical maps are homeomorphisms. In particular, we have a continuous inclusion $\iota_{w_1}^{w_2}: U_{w_1} \hookrightarrow U_{w_2}$, hence we may form the colimit $$\lim_{\to}((U_w)_{w \in W}, (\iota_{w_1}^{w_2})_{w_1 \leq_w w_2})$$ in the category of topological spaces. We emphasize that in view of the previous remark the system $((U_w)_{w \in W}, (\iota_{w_1}^{w_2})_{w_1 \leq_w w_2})$ is *not* directed, hence this colimit is not a direct limit.
\[directlimit\] The $k_\omega$-space $U^+$ is given by the colimit $$U^+ = \lim_{\to}((U_w)_{w \in W}, (\iota_{w_1}^{w_2})_{w_1 \leq_w w_2})$$ both in the category of topological spaces and in the category of $k_\omega$-topological spaces.
The corresponding statement in the category of sets is established in [@CapraceRemy Theorem 5.3]. For the topological statement see [@HKM Proposition 7.27].
In view of the applications to Kac–Moody symmetric spaces that we have in mind we recall that $U^{\pm}$ are subgroups of the commutator subgroup $G$ of $G_{\mathbb{R}}({\mathbf{A}})$; in particular, we can consider their images $\overline{U}^\pm := p_1(U^{\pm})$ under the map $p_1: G \to \overline{G}$ from \eqref{AdjointSemisimpleQuotient}. In this context we will need the following fact:
\[UvsUbarTopology\] The map $p_1$ induces homeomorphisms $U^{\pm} \to \overline{U}^\pm$.
By [@HKM Proposition 7.27] the map $T \times U^+ \to TU^+$ is a homeomorphism and the kernel of $p_1$ is contained in $T$. The latter implies that $p_1$ restricts to a continuous bijection $U^+ \to \overline{U^+}$, and the former implies that this bijection is open.
Dilation structures on $U^{\pm}$
--------------------------
Let $U$ be a topological group. By a *dilation structure* on $U$ we mean a family of maps $(\Phi_t: U \to U)_{t \in {\mathbb{R}}}$ with the following properties:
1. Each $\Phi_t$ is a continuous group automorphism of $U$.
2. $(\Phi_t)_{t \in {\mathbb{R}}}$ is a one-parameter group, i.e. $\Phi_0 = {\rm Id}$ and $\Phi_{s+t} = \Phi_s \circ \Phi_t$ for all $s, t\in {\mathbb{R}}$.
3. If we define $\Phi_{-\infty}: U \to U$ by $\Phi_{-\infty}(u) := e$, then the map $$[-\infty, \infty) \times U \to U, \quad (t, u) \mapsto \Phi_t(u)$$ is continuous.
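A minimal example of this notion (ours, added for illustration only): on the abelian group $U = ({\mathbb{R}}^n, +)$ the scalings $\Phi_t(u) := e^t u$ form a dilation structure. Each $\Phi_t$ is a continuous automorphism, one has $\Phi_0 = {\rm Id}$ and $\Phi_{s+t} = \Phi_s \circ \Phi_t$, and the map $$[-\infty, \infty) \times {\mathbb{R}}^n \to {\mathbb{R}}^n, \quad (t, u) \mapsto e^t u$$ is continuous after setting $e^{-\infty} := 0$, so that $\Phi_{-\infty}(u) = 0 = e$.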
\[dilation implies contractible\] Note that if a topological group $U$ admits a dilation structure, then it is in particular contractible. Indeed, if we define $\Psi_t := \Phi_{\frac{t}{t-1}}$, then $$\Psi: [0, 1] \times U \to U, \quad (t,u) \mapsto \Psi_t(u)$$ is continuous with $\Psi_0 = \Phi_0= {\rm Id}$ and $\Psi_1 = \Phi_{-\infty}$, hence a contraction to the identity.
Dilation structures on finite-dimensional simply-connected nilpotent Lie groups play a major role in conducting analysis on such groups, see e.g. [@Goodman]. Not every finite-dimensional simply-connected nilpotent Lie group admits a dilation structure, but if $U$ is the unipotent radical of a minimal parabolic subgroup of a semisimple Lie group, then such a dilation structure always exists. The methods of [@Kum] allow one to extend this result to the Kac–Moody setting.
Following [@Kac2 §3.12], we define the *fundamental chamber* of $ \mathfrak{h}_{\mathbb{R}}(\mathbf{A})$ as $$C := \{ h \in \mathfrak{h}_{\mathbb{R}}(\mathbf{A}) \mid \forall 1 \leq i \leq n: \alpha_i(h) \geq 0 \} \subset \mathfrak{h}_{\mathbb{R}}(\mathbf{A}).$$ Since the family $(\alpha_i)_{1 \leq i \leq n}$ is linearly independent, there exists $$X_0 \in C \text{ such that } \alpha_i(X_0) = 1 \text{ for all } 1 \leq i \leq n.$$ Indeed, by the linear independence of $(\alpha_i)_{1 \leq i \leq n}$ the solution space for the system of $n-1$ linear equations $\forall 2 \leq i \leq n : \alpha_1(x) - \alpha_i(x) = 0$ has strictly larger dimension than the solution space for the system of $n$ linear equations $\forall 1 \leq i \leq n : \alpha_i(x) = 0$.
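For instance (an illustration added here, not needed for the proofs), for ${\mathbf{A}}$ of type $A_2$ one may realize $\mathfrak{h}_{\mathbb{R}}({\mathbf{A}})$ as the traceless diagonal matrices in $\mathfrak{sl}_3(\mathbb{R})$, with $\alpha_1(\operatorname{diag}(a,b,c)) = a - b$ and $\alpha_2(\operatorname{diag}(a,b,c)) = b - c$. Then $X_0 = \operatorname{diag}(1,0,-1)$ lies in $C$ and satisfies $\alpha_1(X_0) = \alpha_2(X_0) = 1$.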
We now define a one-parameter subgroup of $A_{\mathbb{R}}$ by $a_t := \exp(tX_0)$ and denote by $${\varphi}_t := {\rm Ad}(a_t) \in {\rm Aut}(\mathfrak u^+)$$ the associated automorphism of $\mathfrak u^+$. Similarly we denote by $$\Phi_t := c_{a_t}|_{U^+} \in {\rm Aut}(U^+)$$ the restriction of the conjugation-action of $a_t$ on $G_{\mathbb{R}}({\mathbf{A}})$ to $U^+$. Note that if $X \in \mathfrak u^+$ is ad-locally finite then $$\Phi_t(\exp(X)) = \exp({\varphi}_t(X)).$$ Since ${\rm Ad}(\exp(tX_0))$ acts on each root space $\mathfrak g_\alpha$ by the scalar $e^{t\alpha(X_0)}$, the defining property of $X_0$ yields for every positive root $\alpha$ $$\forall Y \in \mathfrak g_\alpha: \; {\varphi}_t(Y) = e^{t |\alpha|} Y,$$ where $|\alpha|$ denotes the height of $\alpha$, i.e., the sum of its coefficients over the simple roots. It follows that for all positive roots $\alpha$ $$\begin{aligned}
\Phi_t(x_\alpha(s)) = \exp(tX_0) \cdot x_\alpha(s) \cdot \exp(-tX_0) = x_\alpha(e^{t |\alpha|}s),\label{contractionformula}\end{aligned}$$ (see [@Ti (4), p. 549]), where $\{ x_\alpha(s) \mid s \in \mathbb{R}\} \cong (\mathbb{R},+)$ is the root subgroup of $G_{\mathbb{R}}({\mathbf{A}})$ corresponding to the root space $\mathfrak{g}_\alpha$. As a consequence, if one endows each of the root subgroups $\{ x_\alpha(s) \mid s \in \mathbb{R}\}$ with the natural topology of $\mathbb{R}$, then $\Phi_t$ contracts each of them as $t \to -\infty$. We are now in a position to reproduce the following result and proof by Kumar:
\[kumar\] The family $(\Phi_t)_{t\in {\mathbb{R}}}$ defines a dilation structure on $U^+$.
Let $w = (s_{i_1}, \dots, s_{i_r})$ be a reduced word with corresponding simple roots $\alpha_{i_1}, \dots, \alpha_{i_r}$. Recall that multiplication induces a homeomorphism $$U_{\beta_1} \times \dots \times U_{\beta_r} \to U_w,$$ where the roots $\beta_1, \dots, \beta_r$ are given by $$\beta_1 := \alpha_{i_1}, \quad \beta_2 := s_{i_1}(\alpha_{i_2}), \quad \dots, \quad \beta_r := s_{i_1}s_{i_2} \cdots s_{i_{r-1}}(\alpha_{i_r}).$$ Given an element $x_{\beta_1}(y_1)x_{\beta_2}(y_2)\cdots x_{\beta_r}(y_r) \in U_w$, by \eqref{contractionformula} one has $$\Phi_t(x_{\beta_1}(y_1)x_{\beta_2}(y_2)\cdots x_{\beta_r}(y_r)) = x_{\beta_1}(e^{t |\beta_1|}y_1)x_{\beta_2}(e^{t |\beta_2|}y_2)\cdots x_{\beta_r}(e^{t |\beta_r|}y_r).$$ Setting $\Phi_{-\infty}(u):= e$ for all $u \in U^+$, we deduce that the map $$\Phi|_{U_w}: [-\infty, \infty) \times U_w \to U_w, \quad (t,u) \mapsto \Phi_t(u)$$ is continuous and that $\Phi_0 = {\rm Id}_{U_w}$. Combining this with Proposition \[directlimit\] one deduces that the map $$\Phi: [-\infty, \infty) \times U^+ \to U^+, \quad (t,u) \mapsto \Phi_t(u)$$ is continuous, hence defines a dilation structure.
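As a sanity check, consider the rank-one case $G_{\mathbb{R}}({\mathbf{A}}) = {\mathrm{SL}}_2(\mathbb{R})$ (an illustration of ours, with matrix conjugation $c_g(x) = gxg^{-1}$): the unique simple root $\alpha$ satisfies $\alpha(X_0) = 1$ for $X_0 = \operatorname{diag}(\tfrac{1}{2}, -\tfrac{1}{2})$, so $a_t = \operatorname{diag}(e^{t/2}, e^{-t/2})$ and $$\Phi_t(x_\alpha(s)) = a_t \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix} a_t^{-1} = \begin{pmatrix} 1 & e^{t} s \\ 0 & 1 \end{pmatrix} = x_\alpha(e^{t}s),$$ which tends to the identity as $t \to -\infty$, as required of a dilation structure.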
Recall that $U^+$ is isomorphic to $U^-$ under the Cartan–Chevalley involution of $G_{\mathbb{R}}({\mathbf{A}})$, which maps $a_t$ to $a_{-t}$. Thus if we define $\Phi^-_t := c_{a_{-t}}|_{U^-}$ then we obtain:
The family $(\Phi^-_t)_{t \in {\mathbb{R}}}$ defines a dilation structure on $U^-$.
Combining this with Remark \[dilation implies contractible\] and Proposition \[UvsUbarTopology\] we can record:
\[UContractible\] The topological groups $U^+$ and $U^-$ are contractible. Consequently, the groups $\overline{U}^+$ and $\overline{U}^-$ are contractible.
Homotopy groups of real-split semisimple Kac–Moody groups
---------------------------------------------------------
\[FundamentalGroups1\] The subgroup $\overline{K}< \overline{G}$ is a deformation retract. In particular the inclusion $i_{\overline{K}}: \overline{K} \hookrightarrow \overline{G}$ is a homotopy equivalence and thus induces isomorphisms $(i_{\overline{K}})_*: \pi_n(\overline{K}) \to \pi_n(\overline{G})$ for all $n \geq 0$.
We have established in Corollary \[UContractible\] that $\overline{U^+}$ is contractible, and $\overline{A}$ is contractible since it is homeomorphic to ${\mathbb{R}}^{{\rm rk}(\mathbf A)}$. The assertion now follows from Theorem \[TopIwasawa\].
Since it is currently unknown whether the Iwasawa decomposition of $G$ is also a topological decomposition, the strategy of the above proof cannot be applied to $G$. However, using the following result of Palais [@Pal Section 4.1, Corollary], one can still obtain an isomorphism between the fundamental groups of $G$ and $K$.
\[Palais\] Let $G$ be a topological group and let $H< G$ be a subgroup which is homeomorphic to a Lie group. Then the fibration $H \hookrightarrow G \to G/H$ is locally trivial, in particular a Hurewicz fibration, hence there is a long exact sequence of homotopy groups $$\dots \to \pi_2(H) \to \pi_2(G) \to \pi_2(G/H) \to \pi_1(H) \to \pi_1(G) \to \pi_1(G/H) \to \pi_0(H) \to \pi_0(G).$$
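A classical instance of this lemma (added here for illustration; it is not used below): for the double cover ${\mathbb{Z}}/2{\mathbb{Z}} \hookrightarrow {\mathrm{Spin}}(3) \to {\mathrm{SO}}(3)$ with ${\mathrm{Spin}}(3) \cong {\mathrm{SU}}(2)$ simply connected, the long exact sequence yields the exact segment $$0 = \pi_1({\mathrm{Spin}}(3)) \to \pi_1({\mathrm{SO}}(3)) \to \pi_0({\mathbb{Z}}/2{\mathbb{Z}}) \cong {\mathbb{Z}}/2{\mathbb{Z}} \to \pi_0({\mathrm{Spin}}(3)) = 0,$$ whence $\pi_1({\mathrm{SO}}(3)) \cong {\mathbb{Z}}/2{\mathbb{Z}}$.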
Recall that the kernel of the quotient map $G \to \overline{G}$ is homeomorphic to $({\mathbb{R}}^\times)^{{\rm cork}(\mathbf A)}$. In particular it has $2^{{\rm cork}(\mathbf A)}$ connected components, whereas its higher homotopy groups vanish. Applying Lemma \[Palais\] to the diagram of fibrations $$\begin{xy}\xymatrix{
({\mathbb{R}}^\times)^{{\rm cork}(\mathbf A)}\ar[r] & G \ar[r] & \overline{G} \\
({\mathbb{Z}}/2{\mathbb{Z}})^{{\rm cork}(\mathbf A)}\ar[u]\ar[r] & K \ar[r]\ar[u] & \overline{K} \ar[u]
}\end{xy}$$ we thus obtain:
There is a commutative diagram with exact rows $$\begin{xy}\xymatrix{
0 \ar[r] & \pi_1(G) \ar[r] & \pi_1(\overline{G}) \ar[r] & ({\mathbb{Z}}/2{\mathbb{Z}})^{{\rm cork}(\bf A)} \ar[r] &0\\
0 \ar[r] & \pi_1(K) \ar[r] \ar[u]& \pi_1(\overline{K}) \ar[r] \ar[u]& ({\mathbb{Z}}/2{\mathbb{Z}})^{{\rm cork}(\bf A)} \ar[u]^\cong\ar[r] &0
}\end{xy}$$ Moreover, for $n \geq 2$ there are isomorphisms $\pi_n(G) \cong \pi_n(\overline{G})$ and $\pi_n(K) \cong \pi_n(\overline{K})$.
Combining this with Corollary \[FundamentalGroups1\] we deduce:
\[FundamentalGroups2\] For every $n \geq 0$ the inclusion $K \hookrightarrow G$ induces isomorphisms $$\pi_n(K) \xrightarrow{\;\sim\;} \pi_n(G),$$ hence is a weak homotopy equivalence. In particular, $\pi_1(G) \cong \pi_1(K)$.
Kac–Moody symmetric spaces and causal contractions
--------------------------------------------------
We conclude this appendix with an application of the results obtained so far to Kac–Moody symmetric spaces. It was established in [@FHHK] that the homogeneous spaces $G_{\mathbb{R}}({\mathbf{A}})/K_{\mathbb{R}}({\mathbf{A}})$ and $G/K$ carry the natural structure of topological reflection spaces, and the same holds for their quotients ${\rm Ad}(G_{\mathbb{R}}({\mathbf{A}}))/{\rm Ad}(K_{\mathbb{R}}({\mathbf{A}}))$ and ${\rm Ad}(G)/{\rm Ad}(K)$. The topological reflection space $\mathcal X = G/K$ is called the *unreduced Kac–Moody symmetric space* of type ${\mathbf A}$, and the topological reflection space $\overline{{\mathcal{X}}} = {\rm Ad}(G)/{\rm Ad}(K) = \overline{G}/\overline{K}$ is called the *reduced Kac–Moody symmetric space* of type ${\mathbf A}$.
\[Contrac1\] The reduced symmetric space $\overline{{\mathcal{X}}}$ is contractible.
In view of the topological Iwasawa decomposition the orbit map at the basepoint $o = e\overline{K}$ $$\overline{U^+} \times \overline{A} \to \overline{{\mathcal{X}}}, \quad (u,a) \mapsto ua.o$$ is a homeomorphism. Since $\overline{U^+}$ and $\overline{A}$ are contractible, this implies contractibility of $\overline{{\mathcal{X}}}$.
The proof of Theorem \[kumar\] can be used to provide an explicit contraction for $\overline{{\mathcal{X}}} \simeq \overline{U^+} \times \overline{A}$, using the contraction by conjugation with suitable elements of the torus $T_{\mathbb{R}}$ on the group $\overline{U^+}$ and the standard contraction on the finite-dimensional real vector space $A$. It turns out that this contraction has interesting additional properties. Recall from [@FHHK Section 7] that the symmetric space $\overline{{\mathcal{X}}}$ admits future and past boundaries $\Delta^+_{\|}$ and $\Delta^-_{\|}$ that both carry a simplicial structure which turns them into the geometric realizations of the positive and negative halves of the twin building of $G_{\mathbb{R}}({\mathbf{A}})$. Following [@FHHK Section 7], a *causal ray* is a geodesic ray of $\overline{{\mathcal{X}}}$ whose parallelity class equals a point in $\Delta^+_{\|}$ and a *piecewise geodesic causal curve* is the concatenation of a finite set of segments of causal rays that can be parametrized in such a way that the walking direction always points towards the future boundary. Given $x, y \in \overline{{\mathcal{X}}}$ we say that $x$ *causally precedes* $y$ (in symbols $x \preceq y$) if there exists a piecewise geodesic causal curve from $x$ to $y$.
Since both conjugation by elements of $T_{\mathbb{R}}$ and the standard contraction of the vector space $A$ preserve geodesic rays and the future and past boundaries (cf. [@FHHK Section 7]), the set of piecewise geodesic causal curves of $\overline{{\mathcal{X}}}$, and hence the causal pre-order $\preceq$, are invariant under the given contraction.
The reduced symmetric space $\overline{{\mathcal{X}}}$ is causally contractible, i.e., it admits a contraction that preserves $\preceq$.
The Bruhat decomposition is a CW decomposition (by Julius Grüning and Ralf Köhl)
================================================================================
Let $G$ be a Kac–Moody group endowed with the Kac–Peterson topology and let $T$ be the standard maximal torus and $U_+$, $U_-$ the standard unipotent subgroups. [@Kac1983 Theorem 4(a)] asserts without proof that the multiplication map $$U_+ \times T \times U_- \to U_+TU_-$$ is a homeomorphism with respect to the Kac–Peterson topology. In this note we provide a proof in the symmetrizable case that makes use of this fact in the two-spherical case ([@HKM Proposition 7.31]), of the embedding of Kac–Moody groups constructed in [@margabberkac Theorem 3.15(2)], and of the fact that the Kac–Peterson topology is $k_\omega$. Among the various consequences of this result is that the Bruhat decomposition of a symmetrizable topological Kac–Moody group is a CW decomposition.
\[komega\] A continuous proper map $f : X \to Y$ from a topological space $X$ to a $k$-space $Y$ is closed. In particular, a continuous injection $\iota : X \to Y$ into a $k_\omega$-space $Y=\bigcup_{m \in \mathbb{N}} Y_m$ such that for each $m \in \mathbb{N}$ the pre-image $\iota^{-1}(Y_m)$ is compact is a topological embedding, i.e., it is a homeomorphism onto its image.
The first statement is exactly [@Palais:1970 Corollary]. The second statement is an immediate consequence of the first, since a $k_\omega$-space is a $k$-space in which any compact subset $K$ of $Y$ is contained in some $Y_m$ of the ascending family $(Y_m)_{m \in \mathbb{N}}$ of compact subsets (statement (3) of [@kwsurvey]).
The authors thank Tobias Hartnick and Stefan Witzel for various lively discussions concerning the correct formulation and application of Proposition \[komega\]. Moreover, they thank Stefan Witzel for suggesting to make use of the concept of proper maps.
\[maximalbounded\] Let $G$ be a split real Kac–Moody group. Then the Kac–Peterson topology $\tau_{\mathrm{KP}}$ on $G$ equals the finest group topology $\tau_{\mathrm{MB}}$ on $G$ such that the embeddings of the maximal bounded subgroups, each endowed with its Lie group topology, are continuous.
By [@marfix Lemma 4.3], the Kac–Peterson topology $\tau_{\mathrm{KP}}$ on $G$ induces the Lie group topology on its maximal bounded subgroups. A fundamental ${\mathrm{SL}}_2(\mathbb{R})$ is bounded and, in particular, embeds as a closed subgroup into a maximal bounded subgroup. Therefore its subspace topology equals its Lie group topology; by [@HKM Proposition 7.21] the topology $\tau_{\mathrm{KP}}$ equals the finest group topology on $G$ such that the embeddings of the fundamental ${\mathrm{SL}}_2(\mathbb{R})$ Lie subgroups are continuous, whence $\tau_{\mathrm{KP}}$ is finer than or equal to the final group topology $\tau_{\mathrm{MB}}$ with respect to the embedded maximal bounded subgroups. Again, since by [@marfix Lemma 4.3] the Kac–Peterson topology on $G$ induces the Lie group topology on its maximal bounded subgroups, the two described topologies actually coincide.
\[maximalbounded2\] Let $G$ be a split real Kac–Moody group endowed with the Kac–Peterson topology and let $(G_i)_{i \in I}$ be a finite family of Lie-subgroups of $G$ such that each fundamental ${\mathrm{SL}}_2(\mathbb{R})$ is contained in at least one of the $G_i$. Then the Kac–Peterson topology on $G$ equals the finest group topology on $G$ such that the embeddings of the $(G_i)_i$, each endowed with its Lie group topology, are continuous.
\[topemb\] Any symmetrizable topological Kac–Moody group endowed with the Kac–Peterson topology admits a continuous injective group homomorphism into a simply laced topological Kac–Moody group with closed image with respect to the Kac–Peterson topology.
By [@margabberkac Theorem 3.15(2)] for any symmetrizable Kac–Moody group $G$ there is an injective group homomorphism $\iota : G \to H$ into a simply laced Kac–Moody group $H$ mapping $$x_{(i,\cdot)}(r) =\prod\limits_{j=1}^{n_i} x_{i,j}(r),$$ that is, embedding a fundamental ${\mathrm{SL}}_2(\mathbb{R})$ of $G$ diagonally into the direct product of a suitable (finite) family of fundamental ${\mathrm{SL}}_2(\mathbb{R})$ subgroups of $H$.
The restriction of this map to any fundamental rank one subgroup $G_\alpha$ of $G$ is continuous with respect to the Lie group topology on $G_\alpha$ and the Kac–Peterson topology on $H$. Hence, by universality (see [@HKM Proposition 7.21]), the map $\iota : G \to H$ is continuous with respect to the Kac–Peterson topology on both $G$ and $H$.
One has $$\iota(G)=\bigcap_\sigma \operatorname{Fix}({\varphi}_\sigma),$$ where ${\varphi}_\sigma$ is the automorphism of $H$ given by $$x_{i,j}(r)\mapsto x_{i,\sigma(j)}(r)$$ for some $ \sigma=(\sigma_1,\dots,\sigma_N)$, where $\sigma_i \in \mathrm{Sym}_{n_i}$. Since the automorphisms ${\varphi}_\sigma$ are continuous with respect to the Kac–Peterson topology on $H$, the group $\iota(G)$ is a closed subgroup of $H$.
\[maxboundedcut\] The embedding $\iota : G \to H$ corresponds to an embedding of the twin building $\Delta_G$ of $G$ into the twin building $\Delta_H$ of $H$ such that $\Delta_G = \bigcap_\sigma \operatorname{Fix}({\varphi}_\sigma)$ (with the ${\varphi}_\sigma$ now considered as twin building automorphisms) and the additional property that two chambers of $\Delta_G$ are opposite in $\Delta_G$ if and only if they are opposite in $\Delta_H$.
\[nondistortion\] Let $\iota : G \to H$ be the injective group homomorphism from Proposition \[topemb\], let $\Delta_G \to \Delta_H$ be the induced embedding of twin buildings, and $\mathrm{Opp}(\Delta_G) \to \mathrm{Opp}(\Delta_H)$ the resulting embedding of opposites geometries. Given $(c_+,c_-) \in \mathrm{Opp}(\Delta_G)$, for all $n \in \mathbb{N}$ there exists $m \in \mathbb{N}$ such that the intersection of $\mathrm{Opp}(\Delta_G)$ with the ball of radius $n$ in $\mathrm{Opp}(\Delta_H)$ around $(c_+,c_-)$ is contained in the ball of radius $m$ in $\mathrm{Opp}(\Delta_G)$ around $(c_+,c_-)$.
By [@margabberkac Theorem 3.15(2)], each fundamental $G_{\alpha_i} \cong {\mathrm{SL}}_2(\mathbb{R})$ embeds diagonally into a product $\prod_{j=1}^{n_i} H_{\alpha_{i,j}} \cong {\mathrm{SL}}_2(\mathbb{R})^{n_i}$ of finitely many pairwise commuting fundamental subgroups of $H$. Since the diameter of a (twin) building and an opposites geometry of type ${A_1}^{n_i}$ is finite, in order to prove the claim of the proposition it suffices to observe that the $\mathrm{Opp}(\Delta_G)$-points of a union of bounded chains of such ${A_1}^{n_k}$-residues of $\mathrm{Opp}(\Delta_H)$ starting in $(c_+,c_-)$ actually lie in a finite $\mathrm{Opp}(\Delta_G)$-ball around $(c_+,c_-)$. However, this is obvious, because the only $\mathrm{Opp}(\Delta_G)$-points of such an ${A_1}^{n_k}$-residue come from the opposites geometry of the twin building of the diagonally embedded panel $\mathbb{P}_1(\mathbb{R}) \cong \mathbb{S}^1 \hookrightarrow \left(\mathbb{S}^1\right)^{n_k}$; in other words, each step by an ${A_1}^{n_k}$-residue in $\mathrm{Opp}(\Delta_H)$ between $\mathrm{Opp}(\Delta_G)$-points equals one step via an $\mathrm{Opp}(\Delta_G)$-panel of type $\alpha_k$.
\[multopen\] Let $G$ be a topological Kac–Moody group endowed with the Kac–Peterson topology. If it is two-spherical or symmetrizable, then the multiplication map $\varphi: U_+\times T \times U_- \to G$ is a homeomorphism.
The two-spherical case is [@HKM Proposition 7.31]. In the symmetrizable case note that Proposition \[komega\] is applicable since the Kac–Peterson topology is $k_\omega$ by [@HKM Proposition 7.10]. Consequently, the injection from Proposition \[topemb\] yields a topological embedding $\iota : G \to H$, provided one can find $k_\omega$-decompositions $G = \bigcup_n G_n$ and $H = \bigcup_m H_m$ such that each intersection $H_m \cap \iota(G)$ lies in some $\iota(G_n)$. (Indeed, $\iota^{-1}(H_m)$ is closed by continuity of $\iota$, so it is compact once it lies inside some compact set $G_n$, which is equivalent to $H_m \cap \iota(G) \subset \iota(G_n)$.)
For $G$ and $H$ choose $k_\omega$-decompositions making use of Corollary \[maximalbounded2\] and $k_\omega$-decompositions of the fundamental subgroups $G_{\alpha_i} \cong {\mathrm{SL}}_2(\mathbb{R})$ of $G$ and the corresponding subgroups $\prod_{j=1}^{n_i} H_{\alpha_{i,j}} \cong {\mathrm{SL}}_2(\mathbb{R})^{n_i}$ of $H$ into which the $G_{\alpha_i}$ embed diagonally, endowed with their Lie group topology. That is, $$X_1 := X_1^1, \quad X_2 := X_1^1X_2^2, \quad X_3 := X_1^1X_2^2X_3^3, \quad \cdots, X_t := X_1^1\cdots X_t^t, \quad \cdots$$ where each of the $X_t^t$ is the ball of radius $t$ around $1$ of the maximal bounded subgroup $X_t$ endowed with some suitable metric inducing its Lie group topology, with $X \in \{ G, H \}$ and lower index $t$ taken modulo the total number of maximal bounded subgroups.
By construction, each $H_j^j$ intersects $\iota(G)$ in some compact subset of a fundamental subgroup $G_{\alpha_i}$ of $G$ with respect to the Lie group topology. In other words, each $H_j^j \cap \iota(G)$ lies in some $\iota(G_k^k)$. Forming finite products of such sets and using Proposition \[nondistortion\], one concludes that $H_t \cap \iota(G) = (H_1^1\cdots H_t^t) \cap \iota(G)$ lies in some suitable product $G_t = G_1^1\cdots G_t^t$; that is, the injective homomorphism $\iota : G \to H$ is indeed a topological embedding.
Since $\iota$ restricts to maps ${{ \left.\kern-\nulldelimiterspace \iota \vphantom{\big|} \right|_{U_+^G} }}:U_+^G\to U_+^H$, ${{ \left.\kern-\nulldelimiterspace \iota \vphantom{\big|} \right|_{U_-^G} }}:U_-^G\to U_-^H$, ${{ \left.\kern-\nulldelimiterspace \iota \vphantom{\big|} \right|_{T^G} }}:T^G\to T^H$, one can conclude that the diagram $$\begin{tikzcd}
U_+^G\times T^G\times U_-^G \arrow{r}{\varphi^G} \arrow[swap]{d}{{{ \left.\kern-\nulldelimiterspace \iota \vphantom{\big|} \right|_{U_+^G} }}\times {{ \left.\kern-\nulldelimiterspace \iota \vphantom{\big|} \right|_{T^G} }}\times {{ \left.\kern-\nulldelimiterspace \iota \vphantom{\big|} \right|_{U_-^G} }} }& G \arrow{d}{\iota} \\
U_+^H\times T^H\times U_-^H \arrow{r}{\varphi^H} & H
\end{tikzcd}$$ commutes, which proves that the map $\varphi^G$ is a homeomorphism, since $\varphi^H$ is a homeomorphism by [@HKM Proposition 7.31].
\[topstrong\] Let $G$ be a topological Kac–Moody group endowed with the Kac–Peterson topology. If it is two-spherical or symmetrizable, then the associated twin building with the quotient topology is a strong topological twin building.
The two-spherical case is [@HKM Theorem 1]. In the symmetrizable case it follows by replacing [@HKM Proposition 7.31] with Corollary \[multopen\]; cf. the discussion after [@HKM Theorem 1].
Let $G$ be a topological Kac–Moody group endowed with the Kac–Peterson topology. If it is two-spherical or symmetrizable, then the Bruhat decomposition of $G$ is a CW decomposition.
This is a restatement of Proposition \[bruhat decomp is cw decomp\] from the main text. Its proof heavily relies on Corollary \[topstrong\].
Let $G$ be a topological Kac–Moody group endowed with the Kac–Peterson topology. If it is two-spherical or symmetrizable, then the coset model, the group model, and the involution model of the reduced Kac–Moody symmetric space are pairwise homeomorphic with respect to their internal topologies.
The two-spherical case is [@FHHK Proposition 4.19]. In the symmetrizable case it follows from [@FHHK Proposition 4.19] and Corollary \[multopen\].
[GHKW17]{}
Peter Abramenko and Kenneth S. Brown, *Buildings. [T]{}heory and applications*, Springer, New York, 2008.
Pierre-Emmanuel Caprace and Bertrand Rémy, *Groups with a root group datum*, Innov. Incidence Geom. **9** (2009), 5–77. [MR ]{}[2658894]{}
Tom De Medts, Ralf Gramlich, and Max Horn, *Iwasawa decompositions of split [K]{}ac-[M]{}oody groups*, J. Lie Theory **19** (2009), no. 2, 311–337. [MR ]{}[2572132]{}
E. Dynkin, *Classification of the simple [L]{}ie groups*, Rec. Math. \[Mat. Sbornik\] N. S. **18(60)** (1946), 347–352. [MR ]{}[0017286]{}
Walter [Freyn]{}, Tobias [Hartnick]{}, Max [Horn]{}, and Ralf [K[ö]{}hl]{}, *[Kac–Moody symmetric spaces]{}*, [arXiv:1702.08426]{}.
Stanley P. Franklin and Barbara V. Smith Thomas, *A survey of $k_\omega$-spaces*, Topology Proceedings **2** (1977), 111–124.
Helge Glöckner, Ralf Gramlich, and Tobias Hartnick, *Final group topologies, [K]{}ac-[M]{}oody groups and [P]{}ontryagin duality*, Israel J. Math. **177** (2010), 49–101. [MR ]{}[2684413]{}
David [Ghatei]{}, Max [Horn]{}, Ralf [K[ö]{}hl]{}, and Sebastian [Wei[ß]{}]{}, *[Spin covers of maximal compact subgroups of Kac–Moody groups and spin-extended Weyl groups]{}*, J. Group Theory **20** (2017), 401–504.
Roe W. Goodman, *Nilpotent [L]{}ie groups: structure and applications to analysis*, Lecture Notes in Mathematics, Vol. 562, Springer-Verlag, Berlin-New York, 1976. [MR ]{}[0442149]{}
Allen Hatcher, *Algebraic topology*, Cambridge University Press, 2002.
Sigurdur Helgason, *Differential geometry, [L]{}ie groups, and symmetric spaces*, Pure and Applied Mathematics, vol. 80, Academic Press, Inc. \[Harcourt Brace Jovanovich, Publishers\], New York-London, 1978. [MR ]{}[514561]{}
Guntram Hainke, Ralf K[ö]{}hl, and Paul Levy, *Generalized spin representations*, M[ü]{}nster J. Math. **8** (2015), 181–210.
Tobias Hartnick, Ralf K[ö]{}hl, and Andreas Mars, *On topological twin buildings and topological split [K]{}ac–[M]{}oody groups*, Innov. Incidence Geom. **13** (2013), 1–71.
Max Horn, *Tikz styles for dynkin diagrams*, <https://github.com/fingolfin/tikz-dynkin/blob/master/dynkin-diagrams.tex>, Accessed: 2019-02-22.
Edwin Hewitt and Kenneth A. Ross, *Abstract harmonic analysis. [V]{}ol. [I]{}: [S]{}tructure of topological groups. [I]{}ntegration theory, group representations*, Academic Press, Inc., Publishers, New York; Springer-Verlag, Berlin-Göttingen-Heidelberg, 1963.
Dale Husemoller, *Fibre bundles*, third ed., Springer-Verlag, New York, 1994.
Victor G. Kac, *Constructing groups associated to infinite-dimensional [L]{}ie algebras*, Infinite-dimensional groups with applications ([B]{}erkeley, [C]{}alif., 1984), Math. Sci. Res. Inst. Publ., vol. 4, Springer, New York, 1985, pp. 167–216.
[to3em]{}, *Infinite-dimensional [L]{}ie algebras*, third ed., Cambridge University Press, Cambridge, 1990. [MR ]{}[1104219]{}
Victor G. Kac and Dale H. Peterson, *Regular functions on certain infinite-dimensional groups*, pp. 141–166, Birkh[ä]{}user Boston, Boston, MA, 1983.
Linus Kramer, *Loop groups and twin buildings*, Geometriae Dedicata **92** (2001), 145–178.
Shrawan Kumar, *Kac-[M]{}oody groups, their flag varieties and representation theory*, Progress in Mathematics, vol. 204, Birkhäuser Boston, Inc., Boston, MA, 2002. [MR ]{}[1923198]{}
Timoth[é]{}e Marquis, *A fixed point theorem for [Lie]{} groups acting on buildings and applications to [Kac–Moody]{} theory*, Forum math. **27** (2015), 449–466.
Timothée Marquis, *An introduction to [K]{}ac-[M]{}oody groups over fields*, EMS Textbooks in Mathematics, European Mathematical Society (EMS), Zürich, 2018. [MR ]{}[3838421]{}
Timoth[é]{}e Marquis, *Around the [Lie]{} correspondence for complete [Kac–Moody]{} groups and the [Gabber–Kac]{} simplicity*, [ arXiv:1509.01976v3]{}, to appear in [*Annales de l’Institut Fourier*]{}, 2019.
William S. Massey, *Algebraic topology: An introduction*, 4th corrected printing ed., Graduate texts in mathematics 56, Springer-Verlag, 1977.
George Daniel Mostow, *A new proof of [E]{}. [C]{}artan’s theorem on the topology of semi-simple groups*, Bull. Amer. Math. Soc. **55** (1949), 969–980. [MR ]{}[0032656]{}
Richard S. Palais, *On the existence of slices for actions of non-compact [L]{}ie groups*, Ann. of Math. (2) **73** (1961), 295–323.
Richard S. Palais, *When proper maps are closed*, Proc. Amer. Math. Soc. **24** (1970), 835–836.
Claudio Procesi, *Lie groups*, Springer, New York, 2007, An approach through invariants and representations.
Joseph J. Rotman, *An introduction to algebraic topology*, Springer-Verlag, New York, 1988.
Helmut Salzmann, Dieter Betten, Theo Grundhöfer, Hermann Hähl, Rainer Löwen, and Markus Stroppel, *Compact projective planes*, De Gruyter Expositions in Mathematics, vol. 21, Walter de Gruyter & Co., Berlin, 1995, With an introduction to octonion geometry. [MR ]{}[1384300]{}
Norman E. Steenrod, *The topology of fibre bundles*, Princeton Landmarks in Mathematics, Princeton University Press, Princeton, NJ, 1999, Reprint of the 1957 edition, Princeton Paperbacks.
Jacques Tits, *Uniqueness and presentation of [K]{}ac-[M]{}oody groups over fields*, J. Algebra **105** (1987), no. 2, 542–573. [MR ]{}[873684]{}
Mark Wiggerman, *The fundamental group of a real flag manifold*, Indag. Math. **9** (1998), no. 1, 141–153.
---
author:
- Danning Li and Mei Huang
title: 'Dynamical holographic QCD model: resembling renormalization group from ultraviolet to infrared'
---
Introduction {#sec:1}
============
Quantum chromodynamics (QCD) is accepted as the fundamental theory of the strong interaction. In the ultraviolet (UV) or weak coupling regime of QCD, perturbative calculations agree well with experiment. However, in the infrared (IR) regime, the description of the QCD vacuum, as well as of hadron properties and processes, in terms of quarks and gluons still remains an outstanding challenge for the formulation of QCD as a local quantum field theory.
![Duality between $d$-dimension QFT and $d+1$-dimension gravity as shown in [@Adams:2012th] (Left-hand side). Dynamical holographic QCD model resembles RG from UV to IR (Right-hand side): at UV boundary the dilaton bulk field $\Phi(z)$ and scalar field $X(z)$ are dual to the dimension-4 gluon operator and dimension-3 quark-antiquark operator, which develop condensates at IR. []{data-label="fig:RGflow"}](RG)
In order to derive the low-energy hadron physics and understand the deep-infrared sector of QCD from first principles, various non-perturbative methods have been employed, in particular lattice QCD, Dyson-Schwinger equations (DSEs), and functional renormalization group equations (FRGs). In recent decades, an entirely new approach based on the anti-de Sitter/conformal field theory (AdS/CFT) correspondence and the conjectured gravity/gauge duality [@Maldacena:1997re; @Gubser:1998bc; @Witten:1998qj] has provided a revolutionary method to tackle strongly coupled gauge theories. Though the original discovery of holographic duality requires supersymmetry and conformality, the holographic duality has been widely used in investigating hadron physics [@Karch:2006pv; @Csaki:2006ji; @Gherghetta-Kapusta-Kelley; @YLWu], the strongly coupled quark-gluon plasma, and condensed matter. Although the duality between quantum field theory and quantum gravity remains unproven, it is widely believed to hold. In general, holography relates a quantum field theory (QFT) in $d$ dimensions to quantum gravity in $(d+1)$ dimensions, with the gravitational description becoming classical when the QFT is strongly coupled. The extra dimension can be interpreted as an energy scale or renormalization group (RG) flow in the QFT [@Adams:2012th], as shown in Fig.\[fig:RGflow\].
In this talk, we introduce our recently developed dynamical holographic QCD model [@DhQCD], which resembles the renormalization group from ultraviolet (UV) to infrared (IR). The dynamical holographic model is constructed in the graviton-dilaton-scalar framework, where the dilaton background field $\Phi(z)$ and scalar field $X(z)$ are responsible for the gluodynamics and chiral dynamics, respectively. At the UV boundary, the dilaton field $\Phi(z)$ is dual to the dimension-4 gluon operator, and the scalar field $X(z)$ is dual to the dimension-3 quark-antiquark operator. The metric structure at IR is automatically deformed by the nonperturbative gluon condensation and chiral condensation in the vacuum. In Fig.\[fig:RGflow\], we show the dynamical holographic QCD model, which resembles the renormalization group from UV to IR.
Pure gluon system: Graviton-dilaton framework {#sec:2}
=============================================
For the pure gluon system, we construct the quenched dynamical holographic QCD model in the graviton-dilaton framework by introducing one scalar dilaton field $\Phi(z)$ in the bulk. The 5D graviton-dilaton coupled action in the string frame is given below: $$\begin{aligned}
\label{action-graviton-dilaton}
S_G=\frac{1}{16\pi G_5}\int
d^5x\sqrt{g_s}e^{-2\Phi}\left(R_s+4\partial_M\Phi\partial^M\Phi-V^s_G(\Phi)\right).\end{aligned}$$ Here $G_5$ is the 5D Newton constant, and $g_s$, $\Phi$ and $V_G^s$ are the 5D metric, the dilaton field and the dilaton potential in the string frame, respectively. The metric ansatz is often chosen to be $$\begin{aligned}
\label{metric-ansatz}
ds^2=b_s^2(z)(dz^2+\eta_{\mu\nu}dx^\mu dx^\nu), ~ ~ b_s(z)\equiv e^{A_s(z)}.\end{aligned}$$
To avoid the problem of gauge non-invariance and to meet the requirements of the gauge/gravity duality, we take the dilaton field in the form $$\Phi(z)=\mu_G^2z^2\tanh(\mu_{G^2}^4z^2/\mu_G^2).
\label{mixed-dilaton}$$ In this way, the dilaton field at UV behaves as $\Phi(z)\overset{z\rightarrow0}{\rightarrow} \mu_{G^2}^4 z^4$, and is dual to the dimension-4 gauge invariant gluon operator ${\rm Tr} G^2 $, while at IR it takes the quadratic form $\Phi(z)\overset{z\rightarrow\infty}{\rightarrow} \mu_G^2 z^2$. By self-consistently solving the Einstein equations, the metric structure is automatically deformed at IR by the dilaton background field; for details, please refer to [@DhQCD].
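Both asymptotic regimes can be verified numerically. The following minimal sketch (our illustration, with the units $\mu_G=\mu_{G^2}=1~{\rm GeV}$ chosen for convenience) checks the $z^4$ behavior at UV and the $z^2$ behavior at IR:

```python
import numpy as np

def dilaton(z, mu_G=1.0, mu_G2=1.0):
    # Phi(z) = mu_G^2 z^2 tanh(mu_G2^4 z^2 / mu_G^2), Eq. (mixed-dilaton)
    return mu_G**2 * z**2 * np.tanh(mu_G2**4 * z**2 / mu_G**2)

# UV (z -> 0): tanh(u) ~ u, so Phi ~ mu_G2^4 z^4, dual to the dimension-4 operator Tr G^2
z_uv = 1e-3
print(dilaton(z_uv) / z_uv**4)   # ~ 1.0, i.e. mu_G2^4

# IR (z -> infinity): tanh -> 1, so Phi ~ mu_G^2 z^2, the quadratic soft-wall form
z_ir = 50.0
print(dilaton(z_ir) / z_ir**2)   # ~ 1.0, i.e. mu_G^2
```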
![The scalar glueball spectra for the dilaton field $\Phi(z)=\mu_G^2z^2\tanh(\mu_{G^2}^4z^2/\mu_G^2)$ with $\mu_G=\mu_{G^2}=1 {\rm GeV}$. The dots are lattice data taken from [@glueball-lattice].[]{data-label="z4-z2glueball"}](z4-z2glueball.eps)
We assume that the glueball can be excited from the QCD vacuum described by the quenched dynamical holographic model, and the 5D action for the scalar glueball $\mathscr{G}(x,z)$ in the string frame takes the form $$\begin{aligned}
S_{\mathscr{G}}=\int d^5 x \sqrt{g_s}\frac{1}{2}e^{-\Phi}\big[ \partial_M \mathscr{G}\partial^M
\mathscr{G}+M_{\mathscr{G},5}^2 \mathscr{G}^2\big].\end{aligned}$$
The equation of motion for $\mathscr{G}$ takes the form $$\begin{aligned}
-e^{-(3A_s-\Phi)}\partial_z(e^{3A_s-\Phi}\partial_z\mathscr{G}_n)=m_{\mathscr{G},n}^2 \mathscr{G}_n.\end{aligned}$$ After the transformation $\mathscr{G}_n \rightarrow e^{-\frac{1}{2}(3A_s-\Phi)}\mathscr{G}_n$, we get the Schrödinger-like equation of motion for the scalar glueball $$\begin{aligned}
-\mathscr{G}_n^{''}+V_{\mathscr{G}} \mathscr{G}_n=
m_{\mathscr{G},n}^2 \mathscr{G}_n,
\label{EOM-glueball}\end{aligned}$$ with the 5D effective Schrödinger potential $$V_{\mathscr{G}}=\frac{3A_s^{''}-\Phi^{''}}{2}+\frac{(3A_s^{'}-\Phi^{'})^2}{4}.
\label{potential-glueball}$$
Solving Eq. (\[EOM-glueball\]) yields the scalar glueball spectrum shown in Fig.\[z4-z2glueball\]. Remarkably, once the metric background is solved self-consistently under the dynamical dilaton field, the model reproduces the correct ground state and, at the same time, the correct Regge slope.
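As an illustrative cross-check of Eqs. (\[EOM-glueball\])–(\[potential-glueball\]) (not of the full self-consistent background, which requires solving the Einstein equations as in [@DhQCD]), one can diagonalize the Schrödinger problem in the pure soft-wall limit $A_s=-\ln z$, $\Phi=\mu_G^2 z^2$ with $\mu_G=1$; there the potential reduces to $V_{\mathscr{G}}=15/(4z^2)+z^2+2$ and the exact spectrum is $m_n^2=4n+8$:

```python
import numpy as np

# Finite-difference diagonalization of -G'' + V G = m^2 G on (0, zmax)
# in the soft-wall limit A_s = -ln z, Phi = z^2 (units mu_G = 1), where
# V(z) = 15/(4 z^2) + z^2 + 2 and exactly m_n^2 = 4 n + 8.
N, zmax = 800, 8.0
dz = zmax / N
z = dz * np.arange(1, N)                  # interior grid, Dirichlet boundaries
V = 15.0 / (4.0 * z**2) + z**2 + 2.0
H = (np.diag(2.0 / dz**2 + V)
     + np.diag(-np.ones(N - 2) / dz**2, 1)
     + np.diag(-np.ones(N - 2) / dz**2, -1))
m2 = np.sort(np.linalg.eigvalsh(H))[:3]
print(m2)   # close to [8, 12, 16]: equal spacing, i.e. a linear Regge trajectory
```

In the full dynamical model the self-consistently deformed metric modifies these levels; Fig.\[z4-z2glueball\] shows the resulting spectrum against the lattice data.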
![Left: the metric factor $b_s$ for different dilaton backgrounds. Right: the heavy quark potential $V_{Q\bar Q}$.[]{data-label="quenched-bs"}](quenched-bs)
Following the standard procedure, the heavy quark potential $V_{Q\bar Q}$ and the interquark distance $R_{Q\bar Q}$ can be worked out. We also find the necessary condition for a linear quark potential: there exists a point $z_c$ at which $b_s^{'}(z_c)\rightarrow 0$ and $b_s(z_c)\rightarrow const$; then one obtains the string tension $$\begin{aligned}
\sigma_s \propto \frac{V_{Q\bar Q}(z_0)}{R_{Q\bar Q}(z_0)}\overset{z_0\rightarrow z_c}{\longrightarrow} \frac{L^2}{2\pi \alpha_p} b_s^2(z_c), \label{stringtension-g}\end{aligned}$$ where $\alpha_p$ is the 5D string tension. From the left-hand figure in Fig.\[quenched-bs\], we can see that only for the positive dilaton backgrounds $\Phi=\mu_G^2z^2$ and $\Phi=\mu_G^2z^2\tanh(\mu_{G^2}^4z^2/\mu_G^2)$ does the metric have a minimum point $z_c$. Correspondingly, the quark-antiquark potential indeed shows a linear part for the positive quadratic dilaton background $\Phi=\mu_G^2z^2$ and for $\Phi=\mu_G^2z^2\tanh(\mu_{G^2}^4z^2/\mu_G^2)$, as shown in the right-hand figure in Fig.\[quenched-bs\]. For the pure ${\rm AdS}_5$ case, as well as for the dynamical soft-wall model with the negative dilaton background field $\Phi=-\mu_G^2z^2$, there does not exist a $z_c$ at which $b_s^{'}(z_c)\rightarrow 0$, and correspondingly the heavy quark potential does not show a linear behavior at large $z$.
Dynamical holographic QCD model for meson spectra {#sec:3}
=================================================
We then add light flavors in terms of meson fields on the gluodynamical background. The total 5D action for the graviton-dilaton-scalar system takes the following form: $$\begin{aligned}
S=S_G + \frac{N_f}{N_c} S_{KKSS},\end{aligned}$$ with $$\begin{aligned}
S_G=&&\frac{1}{16\pi G_5}\int
d^5x\sqrt{g_s}e^{-2\Phi}\big(R+4\partial_M\Phi\partial^M\Phi-V_G(\Phi)\big), \\
S_{KKSS}=&&-\int d^5x
\sqrt{g_s}e^{-\Phi}Tr(|DX|^2+V_X(X^+X, \Phi)+\frac{1}{4g_5^2}(F_L^2+F_R^2)).\end{aligned}$$
![Meson spectra in Mod A (left) and Mod B (right).[]{data-label="allmassespic"}](allmassespic)
![Pion form factor in Mod A (left) and Mod B (right).[]{data-label="formfactors"}](formfactors)
In the vacuum, it is assumed that there are both a gluon condensate and a chiral condensate. The dilaton background field $\Phi$ is supposed to be dual to some kind of gluodynamics in the QCD vacuum. We take the dilaton background field $\Phi(z)=\mu_G^2z^2\tanh(\mu_{G^2}^4z^2/\mu_G^2)$. The scalar field $X(z)$ is dual to the dimension-3 quark-antiquark operator, and $\chi(z)$ is the vacuum expectation value (VEV) of the scalar field $X(z)$. For a detailed analysis, please refer to [@DhQCD]. The equations of motion of the vector, axial-vector, scalar and pseudo-scalar mesons take the form $$\begin{aligned}
-\rho_n^{''}+V_{\rho} \rho_n&=&m_n^2 \rho_n, \\
-a_n^{''}+ V_a a_n&=& m_n^2 a_n, \\
-s_n^{''}+V_s s_n&=&m_n^2 s_n, \\
-\pi_n''+V_{\pi,\varphi} \pi_n & = & m_n^2(\pi_n-e^{A_s}\chi\varphi_n), \nonumber\\
-\varphi_n''+V_{\varphi} \varphi_n & = & g_5^2 e^{A_s}\chi(\pi_n-e^{A_s}\chi\varphi_n).\end{aligned}$$ with the Schrödinger-like potentials $$\begin{aligned}
V_{\rho}&=& \frac{A_s^{''}-\Phi^{''}}{2}+\frac{(A_s^{'}-\Phi^{'})^2}{4}, \\
V_a&=& \frac{A_s^{''}-\Phi^{''}}{2}+\frac{(A_s^{'}-\Phi^{'})^2}{4}+g_5^2 e^{2A_s} \chi^{2},\\
V_s&=& \frac{3A_s^{''}-\Phi^{''}}{2}+\frac{(3A_s^{'}-\Phi^{'})^2}{4}
+e^{2A_s}V_{C,\chi\chi}(\chi,\Phi), \\
V_{\pi,\varphi}&=& \frac{3A_s^{''}-\Phi^{''}+2\chi^{''}/\chi-2\chi^{'2}/\chi^2}{2}
+\frac{(3A_s^{'}-\Phi^{'}+2\chi^{'}/\chi)^2}{4}, \\
V_{\varphi}&=& \frac{A_s^{''}-\Phi^{''}}{2}+\frac{(A_s^{'}-\Phi^{'})^2}{4}.\end{aligned}$$
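In the same illustrative soft-wall limit ($A_s=-\ln z$, $\Phi=z^2$, chiral field switched off — assumptions for this sketch only, not the full dynamical background), $V_\rho$ reduces to $3/(4z^2)+z^2$ with the exactly linear vector tower $m_n^2=4(n+1)$, which a quick diagonalization reproduces:

```python
import numpy as np

# Vector-meson Schrodinger problem -rho'' + V_rho rho = m^2 rho in the
# soft-wall limit, where V_rho(z) = 3/(4 z^2) + z^2 and exactly m_n^2 = 4 (n + 1).
N, zmax = 800, 8.0
dz = zmax / N
z = dz * np.arange(1, N)
V = 3.0 / (4.0 * z**2) + z**2
H = (np.diag(2.0 / dz**2 + V)
     + np.diag(-np.ones(N - 2) / dz**2, 1)
     + np.diag(-np.ones(N - 2) / dz**2, -1))
m2 = np.sort(np.linalg.eigvalsh(H))[:3]
print(m2)   # close to [4, 8, 12]
```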
For our numerical calculations, we take the two sets of parameters in Table \[parameters\]. The parameter set of Mod A has a smaller chiral condensate, which gives a smaller pion decay constant $f_{\pi}=65.7 {\rm MeV}$, while that of Mod B has a larger chiral condensate, which gives a reasonable pion decay constant $f_{\pi}=87.4 {\rm MeV}$.
          $G_5/L^3$   $m_q$ (MeV)   $\sigma^{1/3}$ (MeV)   $\mu_G=\mu_{G^2}$ (GeV)
------- ----------- ------------- ---------------------- -------------------------
Mod A   0.75        8.4           165                    0.43
Mod B   0.75        6.2           226                    0.43

: Two sets of parameters.[]{data-label="parameters"}
The meson spectra and pion form factor are shown in Fig.\[allmassespic\] and Fig.\[formfactors\]. It is observed from Fig.\[allmassespic\] that in our graviton-dilaton-scalar system, with both sets of parameters, the generated meson spectra agree well with experimental data. For the pion form factor, it is found that with parameter set A, which has a smaller chiral condensate, the produced pion form factor matches the experimental data much better; however, the produced pion decay constant is much smaller than the experimental value. With parameter set B, corresponding to a larger chiral condensate, one obtains a better result for the pion decay constant, but a worse result for the pion form factor.
Discussion and summary {#sec-summary}
======================
In this work, we construct a quenched dynamical holographic QCD (hQCD) model in the graviton-dilaton framework for the pure gluon system, and develop a dynamical hQCD model for the two-flavor system in the graviton-dilaton-scalar framework by adding light flavors on the gluodynamical background. The dynamical holographic model resembles the renormalization group from ultraviolet (UV) to infrared (IR). The dilaton background field $\Phi$ and scalar field $X$ are responsible for the gluodynamics and chiral dynamics, respectively. At the UV boundary, the dilaton field is dual to the dimension-4 gluon operator, and the scalar field is dual to the dimension-3 quark-antiquark operator. The metric structure at IR is automatically deformed by the nonperturbative gluon condensation and chiral condensation in the vacuum. The produced scalar glueball spectra in the graviton-dilaton framework agree well with lattice data, and the light-flavor meson spectra generated in the graviton-dilaton-scalar framework are in good agreement with experimental data. Both chiral symmetry breaking and linear confinement are realized in the dynamical holographic QCD model.
We also give a necessary condition for the existence of linear quark potential from the metric structure, and we show that in the graviton-dilaton framework, a negative quadratic dilaton background field cannot produce the linear quark potential.
The pion form factor is also investigated in the dynamical hQCD model. It is found that with a smaller chiral condensate, the produced pion form factor matches the experimental data much better; however, the produced pion decay constant is much smaller than the experimental value. With a larger chiral condensate, one can produce a better result for the pion decay constant, but a worse result for the pion form factor.
This work is supported by the NSFC under Grant Nos. 11175251 and 11275213, DFG and NSFC (CRC 110), CAS key project KJCX2-EW-N01, K.C.Wong Education Foundation, and Youth Innovation Promotion Association of CAS.
[99]{} Maldacena J. M., Adv. Theor. Math. Phys. [**2**]{}, 231 (1998). Gubser S. S., Klebanov I. R. and Polyakov A. M., Phys. Lett. B [**428**]{}, 105 (1998). Witten E., Adv. Theor. Math. Phys. [**2**]{}, 253 (1998). A. Karch, E. Katz, D. T. Son and M. A. Stephanov, Phys. Rev. D [**74**]{} (2006) 015005. C. Csaki and M. Reece, JHEP [**0705**]{}, 062 (2007). Gherghetta T., Kapusta J. I. and Kelley T. M., Phys. Rev. D [**79**]{} (2009) 076003. Sui Y. -Q., Wu Y. -L., Xie Z. -F. and Yang Y. -B., Phys. Rev. D [**81**]{} (2010) 014024; Sui Y. -Q., Wu Y. -L. and Yang Y. -B., Phys. Rev. D [**83**]{} (2011) 065030. Adams A., Carr L. D., Schaefer T., Steinberg P. and Thomas J. E., New J. Phys. [**14**]{}, 115009 (2012). Li D. N., Huang M. and Yan Q. -S. , Eur.Phys.J.C(2013) 73:2615; Li D.N, Huang M., arXiv:1303.6929\[hep-ph\], to appear in JHEP.
Meyer H. B., hep-lat/0508002; Lucini B. and Teper M. , JHEP [**0106**]{} (2001) 050; Morningstar C. J. and Peardon M. J., Phys. Rev. D [**60**]{} (1999) 034509; Chen Y., Alexandru A., Dong S. J., Draper T., [*et al.*]{}, Phys. Rev. D [**73**]{} (2006) 014516.
Kwee H. J. and Lebed R. F., JHEP [**0801**]{} (2008) 027; H. J. Kwee and R. F. Lebed, Phys. Rev. D [**77**]{} (2008) 115007.
---
abstract: 'We present a comprehensive view of the relations among several privacy notions: differential privacy (DP) [@Dwork2006], Bayesian differential privacy (BDP) [@yang2015bayesian], semantic privacy (SP) [@kasiviswanathan2014semantics], and membership privacy (MP) [@Li-CCS2013]. The results are organized into two parts. In part one, we extend the notion of semantic privacy (SP) to Bayesian semantic privacy (BSP) and show its essential equivalence with Bayesian differential privacy (BDP) in the quantitative sense. We prove the relations between BDP, BSP, and SP as follows: , and . In addition, we obtain a minor result , which improves the result of Kasiviswanathan and Smith [@kasiviswanathan2014semantics] stating $\epsilon$-DP $\Longleftarrow$ for $\epsilon \leq 1.35$. In part two, we establish the relations between BDP and MP. First, $\epsilon$-BDP $\Longrightarrow$ $\epsilon$-MP. Second, for a family of distributions that are downward scalable in the sense of Li *et al*. [@Li-CCS2013], it is shown that'
author:
-
title: Relations Among Different Privacy Notions
---
Differential privacy, Bayesian differential privacy, membership privacy.
Introduction
============
**Differential privacy (DP).** Differential privacy by Dwork *et al.* [@Dwork2006; @dwork2006calibrating] is a robust privacy standard that has been successfully applied to a range of data analysis tasks, since it provides a rigorous foundation for defining and preserving privacy. Differential privacy has received considerable attention in the literature [@andres2013geo; @zhang2016privtree; @blocki2016differentially; @xiao2015protecting; @loucost; @shokri2015privacy; @KamalikaChaudhuriAllerton17; @KamalikaChaudhuriSIGMOD17; @shokri2017membership; @phan2017adaptive]. Apple has incorporated differential privacy into its mobile operating system iOS 10 [@tang2017privacy]. Google has implemented a differentially private tool called RAPPOR in the Chrome browser to collect information about clients [@erlingsson2014rappor]. A randomized algorithm $Y$ satisfies $\epsilon$-differential privacy if for all adjacent databases $x$, $x'$ and any event $E$, it holds that $${\mathbb{P}}[Y(x) \in E] \leq e^{\epsilon}\, {\mathbb{P}}[Y(x') \in E],$$ where ${\mathbb{P}}[\cdot]$ denotes the probability throughout this paper. Intuitively, under differential privacy, an adversary given access to the output does not have much confidence in determining whether it was sampled from the probability distribution generated by the algorithm when the database is $x$ or when the database is $x'$.
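As a concrete illustration (a standard example, not a construction from this paper), the Laplace mechanism on a counting query of sensitivity $1$ satisfies $\epsilon$-DP; the following sketch checks the defining inequality empirically on a pair of adjacent databases for one event $E$:

```python
import numpy as np

# Laplace mechanism: release f(x) + Lap(1/eps) for the counting query f(x) = |x|,
# which has sensitivity 1, so the mechanism satisfies eps-differential privacy.
eps = 0.5
rng = np.random.default_rng(0)
x, xp = [1, 2, 3], [1, 2]        # adjacent databases: they differ in one record
n = 200_000
y_x  = len(x)  + rng.laplace(scale=1.0 / eps, size=n)
y_xp = len(xp) + rng.laplace(scale=1.0 / eps, size=n)

# Event E = {y >= 2.5}: estimate P[Y(x) in E] and P[Y(x') in E]
p, pp = np.mean(y_x >= 2.5), np.mean(y_xp >= 2.5)
print(p, pp, p <= np.exp(eps) * pp)   # the DP inequality holds for this event
```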
**Bayesian differential privacy (BDP).** Yang *et al.* [@yang2015bayesian] introduce the notion of Bayesian differential privacy, which broadens the application scenarios of differential privacy to data records with dependencies. For a database $x$ with $n$ tuples, let $i \in \{1,2,\ldots,n\}$ be a tuple index in the database and $\mathcal{S} \subseteq \{1,2,\ldots,n\}\setminus \{i\}$ be a tuple index set. An adversary denoted by $A(i, \mathcal{S})$ knows the values of all tuples in $\mathcal{S}$ (denoted by $x_\mathcal{S}$) and attempts to infer the value of tuple $i$ (denoted by $x_i$). For a randomized perturbation mechanism $Y = {\mathbb{P}}[y \in \mathcal{Y} {\boldsymbol{\mid}}x]$ on database $x$, the Bayesian differential privacy leakage (BDPL) of $Y$ with respect to the adversary $A(i, \mathcal{S})$ is $\texttt{BDPL}_A(Y) = \text{sup}_{x_i, x_i', x_\mathcal{S}, \mathcal{Y}} \ln \frac{{\mathbb{P}}[y \in \mathcal{Y} {\boldsymbol{\mid}}x_i, x_{\mathcal{S}}]}{{\mathbb{P}}[y \in \mathcal{Y} {\boldsymbol{\mid}}x_i', x_\mathcal{S}]} $. The mechanism $Y$ satisfies $\epsilon$-Bayesian differential privacy if $\text{sup}_A \texttt{BDPL}_A(Y) \leq \epsilon$.
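To see why dependencies matter, consider a toy example of our own (not from [@yang2015bayesian]): two tuples with $x_2 = x_1$ almost surely, and the mechanism $Y = x_1 + x_2 + \mathrm{Lap}(1/\epsilon)$, which is $\epsilon$-DP since changing a single tuple shifts the sum by at most $1$. For the adversary $A(1, \emptyset)$, marginalizing over the correlated tuple doubles the leakage:

```python
import numpy as np

def lap_logpdf(y, mu, b):
    # log-density of the Laplace(mu, b) distribution
    return -np.abs(y - mu) / b - np.log(2.0 * b)

eps = 0.5
b = 1.0 / eps
ys = np.linspace(-10.0, 10.0, 2001)

# With x2 = x1 (perfect correlation), P[y | x1] is Laplace centered at 2*x1.
# The worst-case log-likelihood ratio between x1 = 1 and x1 = 0 is the BDPL:
bdpl = np.max(np.abs(lap_logpdf(ys, 2.0, b) - lap_logpdf(ys, 0.0, b)))
print(bdpl)   # -> 2*eps, twice the DP guarantee of Y
```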
**Semantic privacy (SP).** Kasiviswanathan and Smith [@kasiviswanathan2014semantics] propose a Bayesian formulation of semantic privacy, inspired by the following interpretation of differential privacy explained in [@Dwork2006]: *Regardless of external knowledge, an adversary with access to the sanitized database draws the same conclusions whether or not any individual data is included in the original database*. The phrases “external knowledge” and “drawing conclusions” are formulated as follows in [@kasiviswanathan2014semantics]. The external knowledge is modeled by a prior probability distribution $b$ on $\mathcal{D}^n$, where $b$ is short for “belief”, and databases are assumed to be vectors in $\mathcal{D}^n$ for some domain $\mathcal{D}$. Conclusions are captured via the corresponding posterior distribution: given a transcript $y$, the adversary updates his belief $b$ about the database $x$ using Bayes’ rule to obtain a posterior $\overline{b}$: $
\overline{b}[x|y] =\frac{{{\mathbb{P}}\left[{Y(x)=y}\right]}b[x]}{\sum_{z}{{\mathbb{P}}\left[{Y(z)=y}\right]}b[z]}.$
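A small hypothetical numeric example of this update (our illustration): a one-bit database $x \in \{0,1\}$ released through randomized response with $e^{\epsilon} = 3$, i.e. ${\mathbb{P}}[Y(x)=x] = 3/4$:

```python
import numpy as np

# lik[x][y] = P[Y(x) = y] for randomized response reporting the true bit w.p. 3/4
p_keep = 0.75                        # = e^eps / (1 + e^eps) with e^eps = 3
lik = [[p_keep, 1.0 - p_keep],
       [1.0 - p_keep, p_keep]]

b = [0.9, 0.1]                       # adversary's prior belief b[x]
y = 1                                # observed transcript
post = np.array([lik[xx][y] * b[xx] for xx in (0, 1)])
post /= post.sum()                   # Bayes' rule, as in the display above
print(post)                          # [0.75, 0.25]
```

The posterior odds move from $9{:}1$ to $3{:}1$, i.e. by exactly a factor $e^{\epsilon}=3$, which is the Bayesian reading of the differential privacy guarantee.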
For the database $x$, Kasiviswanathan and Smith [@kasiviswanathan2014semantics] further define $x_{-i}$ to be the same vector except that the record at position $i$ has been replaced by some fixed, default value $\perp$ in $\mathcal{D}$.
Kasiviswanathan and Smith [@kasiviswanathan2014semantics] define $n + 1$ related games, numbered $0$ through $n$. In Game $0$, the adversary interacts with $Y(x)$. This is the interaction that actually takes place between the adversary and the randomized mechanism $Y$. Hence, the distribution $\overline{b}_0$ is just the distribution $\overline{b}$ as defined in (\[bbar-uncorrelated\]); i.e., $
\overline{b}_0[x|y] =\overline{b}[x|y] =\frac{{{\mathbb{P}}\left[{Y(x)=y}\right]}b[x]}{\sum_{z}{{\mathbb{P}}\left[{Y(z)=y}\right]}b[z]}.$
In Game $i$ (for $1 \leq i \leq n$), the adversary interacts with $Y(x_{-i})$. Game $i$ describes the hypothetical scenario where person $i$’s record is not used. In Game $i$ (for $1 \leq i \leq n$), given a transcript $y$, the adversary updates his belief $b$ about database $x$ again using Bayes’ rule to obtain a posterior $\overline{b}_i$ as follows: $
\overline{b}_i[x|y] =\frac{{{\mathbb{P}}\left[{Y(x_{-i})=y}\right]}b[x]}{\sum_{z}{{\mathbb{P}}\left[{Y(z_{-i})=y}\right]}b[z]}.$
Given a transcript $y$, Kasiviswanathan and Smith [@kasiviswanathan2014semantics] say that privacy has been breached if the adversary would draw different conclusions about the world and, in particular, about a person $i$, depending on whether or not $i$’s data was used. To this end, Kasiviswanathan and Smith [@kasiviswanathan2014semantics] formally define $\epsilon$-semantic privacy below, where the statistical difference $\texttt{SD}(X,Y)$ between random variables $X$ and $Y$ on the same discrete space $D$ is defined by $\texttt{SD}(X,Y)=\max_{S\subseteq D}\big|{{\mathbb{P}}\left[{X\in S}\right]}-{{\mathbb{P}}\left[{Y\in S}\right]}\big|.$ A randomized mechanism $Y$ is said to be $\epsilon$-semantically private if for all belief distributions $b$ on $\mathcal{D}^n$, for all possible transcripts $y$, and for all $i = 1, \ldots, n$, it holds that $
\textnormal{\texttt{SD}}(\overline{b}_0[\cdot|y],
\,\overline{b}_i[\cdot|y]) \leq \epsilon.$
**Membership privacy (MP).** Li *et al.* [@Li-CCS2013] propose membership privacy (MP) in consideration of the adversary’s prior beliefs. Let the adversary’s prior beliefs about the dataset be captured by a distribution $\mathcal{D}$. From the adversary’s point of view, the dataset is a random variable drawn according to the distribution $\mathcal{D}$. With $\overline{x_i}$ denoting the event that record $x_i$ is not in the database, Li *et al*. [@Li-CCS2013] define membership privacy as follows. A mechanism ${Y}$ achieves ${\epsilon}$-membership privacy under a family $\mathbb{D}$ of distributions, i.e., $\langle\mathbb{D},\epsilon\rangle$-MP, if and only if for any distribution $\mathcal{D}\in\mathbb{D}$ and for any record $x_i$, any possible set $\mathcal{Y}$ for the output, we have[^1] $ \mathbb{P}_{\mathcal{D},Y}[x_i{\boldsymbol{\mid}}\mathcal{Y}] \leq e^{\epsilon}\mathbb{P}_{\mathcal{D}}[x_i] $ and $ \mathbb{P}_{\mathcal{D},Y}[\overline{x_i}{\boldsymbol{\mid}}\mathcal{Y}] \geq e^{-\epsilon}\mathbb{P}_{\mathcal{D}}[\overline{x_i}]. $
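As a numerical illustration (a sketch under assumed settings, not a construction from [@Li-CCS2013]), the following Python code checks both membership-privacy inequalities for a single membership bit released through randomized response, across a range of prior beliefs $\mathbb{P}_{\mathcal{D}}[x_i]$:

```python
import math

EPS = 0.8
KEEP = math.exp(EPS) / (1.0 + math.exp(EPS))  # randomized response on the membership bit

def posterior_present(q, y):
    """P[record present | output y] when the prior belief P[present] is q."""
    like_present = KEEP if y == 1 else 1.0 - KEEP
    like_absent = 1.0 - KEEP if y == 1 else KEEP
    return like_present * q / (like_present * q + like_absent * (1.0 - q))

def satisfies_mp(q, tol=1e-12):
    """Check P[x_i | Y] <= e^eps * P[x_i] and
    P[not x_i | Y] >= e^-eps * P[not x_i] for every output."""
    for y in (0, 1):
        post = posterior_present(q, y)
        if post > math.exp(EPS) * q + tol:
            return False
        if 1.0 - post < math.exp(-EPS) * (1.0 - q) - tol:
            return False
    return True
```

Randomized response with keep probability $e^{\epsilon}/(1+e^{\epsilon})$ passes the check for every tested prior; the posterior-to-prior ratio is largest as the prior tends to $0$ or $1$, where the two inequalities become tight.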
The rest of the paper is organized as follows. Section \[sec:main:res\] presents the results on the relations among several privacy notions: differential privacy (DP), Bayesian differential privacy (BDP), semantic privacy (SP), and membership privacy (MP). We present the proofs in Section \[secprofco\]. Section \[related\] surveys related work, and Section \[sec:Conclusion\] concludes the paper.
The Results {#sec:main:res}
===========
Kasiviswanathan and Smith [@kasiviswanathan2014semantics] introduce semantic privacy (SP) and show its essential equivalence with differential privacy (DP) in the quantitative sense (the notion of essential equivalence means $\epsilon$-DP $\Longleftarrow$ $f(\epsilon)$-SP and $\epsilon$-DP $\Longrightarrow$ $g(\epsilon)$-SP for some functions $f$ and $g$). We extend their notion to Bayesian semantic privacy (BSP) and show its essential equivalence with Bayesian differential privacy (BDP) also in the quantitative sense. We prove the relations between BDP, BSP, and SP as follows:
- $\epsilon$-BDP $\Longleftarrow$ $\big(\frac{1}{2}-\frac{1}{e^{\epsilon}+1}\big)$-BSP.
- $\epsilon$-BDP $\Longrightarrow$ $(e^{2\epsilon}-1)$-BSP $\Longrightarrow$ $(e^{2\epsilon}-1)$-SP.
We prove results (i) and (ii) in Section \[sec1\], where we also obtain a minor result that $\epsilon$-DP is implied by $\big(\frac{1}{2}-\frac{1}{e^{\epsilon}+1}\big)$-SP, which improves the result of Kasiviswanathan and Smith [@kasiviswanathan2014semantics] stating that $\epsilon$-DP is implied by $(\epsilon/6)$-SP for $\epsilon \leq 1.35$.
Li *et al*. [@Li-CCS2013] propose membership privacy (MP), which is applicable to Bayesian data, in contrast to DP. However, no general algorithm has been proposed for this framework. We present the following relations between BDP and MP:
- $\epsilon$-BDP $\Longrightarrow$ $\epsilon$-MP.
- For a family of distributions that are downward scalable in the sense of Li *et al*. [@Li-CCS2013], $\epsilon$-BDP $\Longleftarrow$ $\epsilon$-MP.
We prove results (iii) and (iv) in Section \[sec2\].
Proofs {#secprofco}
======
Relations between our Bayesian differential privacy and Kasiviswanathan and Smith’s semantic privacy [@kasiviswanathan2014semantics] {#sec1}
------------------------------------------------------------------------------------------------------------------------------------
We extend the work of Kasiviswanathan and Smith [@kasiviswanathan2014semantics] on semantic privacy to tackle the case of correlated tuples. Specifically, we will present [Bayesian]{} semantic privacy and prove that the notions of [Bayesian]{} differential privacy and [Bayesian]{} semantic privacy are essentially (i.e., quantitatively) equivalent (see Theorem \[thmmain\] below). Our result resembles [@kasiviswanathan2014semantics Theorem 2.2], which shows that differential privacy and semantic privacy are essentially equivalent.
\[thmmain\] $\epsilon$-Bayesian differential privacy implies $(e^{2\epsilon}-1)$-Bayesian semantic privacy, and is implied by $\big(\frac{1}{2}-\frac{1}{e^{\epsilon}+1}\big)$-Bayesian semantic privacy.
\[thmmain1\] $\epsilon$-Differential privacy implies $(e^{2\epsilon}-1)$-semantic privacy, and is implied by $\big(\frac{1}{2}-\frac{1}{e^{\epsilon}+1}\big)$-semantic privacy.
Theorem \[thmmain\] is one of our novel results. The first part of Theorem \[thmmain1\] is obtained by Kasiviswanathan and Smith [@kasiviswanathan2014semantics]. The second part of Theorem \[thmmain1\] improves the corresponding result of Kasiviswanathan and Smith [@kasiviswanathan2014semantics], which states that $\epsilon$-differential privacy is implied by $\epsilon/6$-semantic privacy for $\epsilon \leq 1.35$. The improvement can be seen from $\frac{1}{2}-\frac{1}{e^{\epsilon}+1} > \epsilon/6$ for $\epsilon \leq 1.35$.
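This comparison is easy to confirm numerically; the sketch below evaluates both multipliers on a grid over $(0, 1.35]$:

```python
import math

def our_bound(eps):
    # the semantic-privacy level 1/2 - 1/(e^eps + 1) shown to imply eps-DP
    return 0.5 - 1.0 / (math.exp(eps) + 1.0)

def ks_bound(eps):
    # the eps/6 level of Kasiviswanathan and Smith, stated for eps <= 1.35
    return eps / 6.0

grid = [i / 100.0 for i in range(1, 136)]  # eps in (0, 1.35]
gaps = [our_bound(e) - ks_bound(e) for e in grid]
```

The gap is strictly positive throughout the range, so the new bound tolerates more semantic-privacy slack for the same differential-privacy guarantee.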
The rest of the discussion is organized as follows. We review semantic privacy and define Bayesian semantic privacy in Section \[spdsp\]. In Section \[recddp\], we recall Bayesian differential privacy. Finally, we prove the above Theorem \[thmmain\] in Section \[secrel\]. The proof of Theorem \[thmmain1\] is similar to that of Theorem \[thmmain\].
### Reviewing semantic privacy and defining Bayesian semantic privacy {#spdsp}
In this section, we first review semantic privacy from Kasiviswanathan and Smith [@kasiviswanathan2014semantics], before presenting Bayesian semantic privacy, which extends the notion of semantic privacy to address correlated tuples.
**A review of Kasiviswanathan and Smith [@kasiviswanathan2014semantics] for semantic privacy:**
Kasiviswanathan and Smith [@kasiviswanathan2014semantics] propose a Bayesian formulation of semantic privacy, inspired by the following interpretation of differential privacy explained in [@Dwork2006]: *Regardless of external knowledge, an adversary with access to the sanitized database draws the same conclusions whether or not any individual data is included in the original database*. The phrases “external knowledge” and “drawing conclusions” are formulated as follows in [@kasiviswanathan2014semantics]. The external knowledge is modeled by a prior probability distribution $b$ on $\mathcal{D}^n$, where $b$ is short for “belief,” and databases are assumed to be vectors in $\mathcal{D}^n$ for some domain $\mathcal{D}$. Conclusions are captured via the corresponding posterior distribution: given a transcript $y$, the adversary updates his belief $b$ about the database $x$ using Bayes’ rule to obtain a posterior $\overline{b}$:[^2] $$\begin{aligned}
\overline{b}[x|y] & =\frac{{{\mathbb{P}}\left[{Y(x)=y}\right]}b[x]}{\sum_{z}{{\mathbb{P}}\left[{Y(z)=y}\right]}b[z]}.
\label{bbar-uncorrelated}\end{aligned}$$
For the database $x$, Kasiviswanathan and Smith [@kasiviswanathan2014semantics] further define $x_{-i}$ to be the same vector except that position $i$ has been replaced by some fixed, default value in $\mathcal{D}$. Any valid value in $\mathcal{D}$ will do for the default value. In addition, the default value can be understood as a special value $\perp$ (e.g., “no data”); see [@kasiviswanathan2014semantics Page 3–Footnote 2] for details. We will use $\perp$ whenever it is necessary to explicitly write out the default value.
Kasiviswanathan and Smith [@kasiviswanathan2014semantics] define $n + 1$ related games, numbered $0$ through $n$. In Game $0$, the adversary interacts with $Y(x)$. This is the interaction that actually takes place between the adversary and the randomized mechanism $Y$. Hence, the distribution $\overline{b}_0$ is just the distribution $\overline{b}$ as defined in (\[bbar-uncorrelated\]); i.e., $$\begin{aligned}
\overline{b}_0[x|y] & =\overline{b}[x|y] =\frac{{{\mathbb{P}}\left[{Y(x)=y}\right]}b[x]}{\sum_{z}{{\mathbb{P}}\left[{Y(z)=y}\right]}b[z]}.
\label{bbar0-uncorrelated}\end{aligned}$$
In Game $i$ (for $1 \leq i \leq n$), the adversary interacts with $Y(x_{-i})$. Game $i$ describes the hypothetical scenario where person $i$’s record is not used. In Game $i$ (for $1 \leq i \leq n$), given a transcript $y$, the adversary updates his belief $b$ about database $x$ again using Bayes’ rule to obtain a posterior $\overline{b}_i$ as follows: $$\begin{aligned}
\overline{b}_i[x|y] & =\frac{{{\mathbb{P}}\left[{Y(x_{-i})=y}\right]}b[x]}{\sum_{z}{{\mathbb{P}}\left[{Y(z_{-i})=y}\right]}b[z]}.
\label{bbari-uncorrelated}\end{aligned}$$
Given a transcript $y$, Kasiviswanathan and Smith [@kasiviswanathan2014semantics] say that privacy has been breached if the adversary would draw different conclusions about the world and, in particular, about a person $i$, depending on whether or not $i$’s data was used. To this end, Kasiviswanathan and Smith [@kasiviswanathan2014semantics] formally define $\epsilon$-semantic privacy below, where the statistical difference $\texttt{SD}(X,Y)$ between probability distributions (or random variables) $X$ and $Y$ on a discrete space $D$ is defined by $$\texttt{SD}(X,Y)=\max_{S\subseteq D}\big|{{\mathbb{P}}\left[{X\in S}\right]}-{{\mathbb{P}}\left[{Y\in S}\right]}\big|.$$
\[defSP\] A randomized mechanism $Y$ is said to be $\epsilon$-semantically private if for all belief distributions $b$ on $\mathcal{D}^n$, for all possible transcripts $y$, and for all $i = 1, \ldots, n$: $$\begin{aligned}
\textnormal{\texttt{SD}}(\overline{b}_0[\cdot|y],
\,\overline{b}_i[\cdot|y]) \leq \epsilon.\label{bbar0i-uncorrelated}\end{aligned}$$
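The two games can be made concrete numerically. The sketch below (the two-record binary database, the uniform belief, and the per-record randomized-response mechanism are illustrative assumptions) performs the Bayes updates of Game $0$ and Game $i$ and evaluates the statistical difference between the resulting posteriors:

```python
import itertools
import math

EPS = 0.2
FLIP = 1.0 / (1.0 + math.exp(EPS))
DEFAULT = 0  # stand-in for the default value ("no data")

def p_report(y, x):
    """P[Y(x) = y] for independent per-record randomized response."""
    p = 1.0
    for yi, xi in zip(y, x):
        p *= FLIP if yi != xi else 1.0 - FLIP
    return p

def posterior(belief, y, drop=None):
    """Bayes update of the belief given transcript y.
    drop=None plays Game 0 (mechanism sees x); drop=i plays Game i
    (record i replaced by the default value before the mechanism runs)."""
    def used(x):
        if drop is None:
            return x
        x = list(x)
        x[drop] = DEFAULT
        return tuple(x)
    weights = {x: p_report(y, used(x)) * b for x, b in belief.items()}
    total = sum(weights.values())
    return {x: w / total for x, w in weights.items()}

def sd(p, q):
    """Statistical difference (total variation) between distributions on the same support."""
    return 0.5 * sum(abs(p[x] - q[x]) for x in p)

belief = {x: 0.25 for x in itertools.product((0, 1), repeat=2)}
worst = max(sd(posterior(belief, y), posterior(belief, y, drop=0))
            for y in itertools.product((0, 1), repeat=2))
```

For this mechanism the worst-case statistical difference stays below $e^{2\epsilon}-1$, consistent with the quantitative relation between differential privacy and semantic privacy (Theorem \[thmmain1\]).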
From (\[bbari-uncorrelated\]) and (\[bbar0i-uncorrelated\]), the above definition of $\epsilon$-semantic privacy relies on $x_{-i}$, which is obtained by replacing position $i$ of $x$ with the default value $\perp$. If the tuples are correlated, changing position $i$ of $x$ might also entail changes at other positions of $x$. Hence, $\epsilon$-semantic privacy may not work well under correlated tuples. Given this, we next extend $\epsilon$-semantic privacy to address correlated tuples and present $\epsilon$-Bayesian semantic privacy.
**Extending semantic privacy to Bayesian semantic privacy to address correlated tuples:**
As will become clear, our extension of semantic privacy to Bayesian semantic privacy is similar to the extension of differential privacy to Bayesian differential privacy.
We let a statistical database be $[X_1,X_2, \ldots, X_n]$, where $X_j$ for each $j\in \{1,2,\ldots,n\}$ is a *random variable*. We also let $\mathcal{N}$ be $\{1,2,\ldots,n\}$. Then we consider the databases $x$ and $z$ used in (\[bbar-uncorrelated\])–(\[bbari-uncorrelated\]) above to be $$\begin{aligned}
x & = [X_1=x_1,X_2=x_2,
\ldots,X_n=x_n] = [X_j=x_{j}:j \in \mathcal{N}], \label{xfull}\end{aligned}$$ and $$\begin{aligned}
z & = [X_1=z_1,X_2=z_2,
\ldots,X_n=z_n] = [X_j=z_{j}:j \in \mathcal{N}]. \label{zfull}\end{aligned}$$
When the data tuples are correlated, the adversary may gain more advantage in inferring $x_i$ by using the instantiations $x_j|_{j\in \mathcal{S}}$ of the random variables $X_j|_{j\in \mathcal{S}}$ together with the (uninstantiated) random variables $X_j|_{j\in \mathcal{N}\setminus\{i\}\setminus\mathcal{S}}$, instead of using the instantiations $x_j|_{j\in \mathcal{N}\setminus\{i\}}$ only, where $\mathcal{S}$ can be an arbitrary subset of $\mathcal{N}\setminus\{i\}$. For notational convenience, we define $x_{i+\mathcal{S}}$ and $z_{i+\mathcal{S}}$ by $$\begin{aligned}
x_{i+\mathcal{S}} & = [X_i=x_i ,~X_j=x_{j}:j \in \mathcal{S},~X_j:j \in \mathcal{N}\setminus\{i\}\setminus\mathcal{S}], \label{xiplusS}\end{aligned}$$ and $$\begin{aligned}
z_{i+\mathcal{S}} & = [X_i=z_i ,~X_j=z_{j}:j \in \mathcal{S},~X_j:j \in \mathcal{N}\setminus\{i\}\setminus\mathcal{S}].\label{ziplusS}\end{aligned}$$ From (\[xfull\])–(\[ziplusS\]), if $\mathcal{S} = \mathcal{N}\setminus\{i\}$, then $x_{i+\mathcal{S}}$ and $z_{i+\mathcal{S}}$ reduce to databases $x$ and $z$, respectively.
Similar to the previous subsection, here we also let the adversary play $n + 1$ related games with the randomized mechanism $Y$, and define $\overline{b},\overline{b}_0,\overline{b}_i|_{i=1,\ldots,n}$ as detailed below. In Game $0$, the adversary interacts with $Y(x_{i+\mathcal{S}})$. We generalize $x$ and $z$ in (\[bbar0-uncorrelated\]) to $x_{i+\mathcal{S}}$ and $z_{i+\mathcal{S}}$, so that (\[bbar0-uncorrelated\]) becomes $$\begin{aligned}
\overline{b}_0[x_{i+\mathcal{S}}|y] & =\overline{b}[x_{i+\mathcal{S}}|y] =\frac{{{\mathbb{P}}\left[{Y(x_{i+\mathcal{S}})=y}\right]}b[x_{i+\mathcal{S}}]}{\sum_{z_{i+\mathcal{S}}}{{\mathbb{P}}\left[{Y(z_{i+\mathcal{S}})=y}\right]}b[z_{i+\mathcal{S}}]}.
\label{bbar0-correlated}\end{aligned}$$ For clarity, we explain the beliefs in (\[bbar0-correlated\]). From (\[xiplusS\]) and (\[ziplusS\]), $b[x_{i+\mathcal{S}}]$ and $b[z_{i+\mathcal{S}}]$ in (\[bbar0-correlated\]) are given by $$\begin{aligned}
b[x_{i+\mathcal{S}}] & = b[X_i=x_i ,~X_j=x_{j}:j \in \mathcal{S},~X_j:j \in \mathcal{N}\setminus\{i\}\setminus\mathcal{S}] \nonumber \\ & = b[X_i=x_i ,~X_j=x_{j}:j \in \mathcal{S}], \label{bxiplusS}\end{aligned}$$ and $$\begin{aligned}
b[z_{i+\mathcal{S}}] & = b[X_i=z_i ,~X_j=z_{j}:j \in \mathcal{S},~X_j:j \in \mathcal{N}\setminus\{i\}\setminus\mathcal{S}] \nonumber \\ & =b [X_i=z_i ,~X_j=z_{j}:j \in \mathcal{S}].\label{bziplusS}\end{aligned}$$ Similar to (\[bxiplusS\]), from (\[xiplusS\]), $\overline{b}_0[x_{i+\mathcal{S}}|y]$ is given by $$\begin{aligned}
& \overline{b}_0[x_{i+\mathcal{S}}|y] \nonumber \\& = \overline{b}_0[X_i=x_i ,~X_j=x_{j}:j \in \mathcal{S},~X_j:j \in \mathcal{N}\setminus\{i\}\setminus\mathcal{S}|y] \nonumber \\ & = \overline{b}_0[X_i=x_i ,~X_j=x_{j}:j \in \mathcal{S}|y]. \label{b0xiplusS}\end{aligned}$$
In Game $i$ (for $1 \leq i \leq n$), we change position $i$ at $x_{i+\mathcal{S}}$ to the default value $\perp$ to obtain $x_{-i+\mathcal{S}}$ defined below; specifically, recalling $x_{i+\mathcal{S}}$ given by (\[xiplusS\]), we set $x_{-i+\mathcal{S}}$ by $$\begin{aligned}
x_{-i+\mathcal{S}} & = [X_i=\perp ,~X_j=x_{j}:j \in \mathcal{S},~X_j:j \in \mathcal{N}\setminus\{i\}\setminus\mathcal{S}].\end{aligned}$$ Similarly, we change position $i$ at $z_{i+\mathcal{S}}$ to the default value $\perp$ to obtain $z_{-i+\mathcal{S}}$ defined below; specifically, recalling $z_{i+\mathcal{S}}$ given by (\[ziplusS\]), we set $z_{-i+\mathcal{S}}$ by $$\begin{aligned}
z_{-i+\mathcal{S}} & = [X_i=\perp ,~X_j=z_{j}:j \in \mathcal{S},~X_j:j \in \mathcal{N}\setminus\{i\}\setminus\mathcal{S}].\end{aligned}$$ As $x_{i+\mathcal{S}}$ and $z_{i+\mathcal{S}}$ generalize $x$ and $z$ in (\[bbari-uncorrelated\]), clearly $x_{-i+\mathcal{S}}$ and $z_{-i+\mathcal{S}}$ also generalize $x_{-i}$ and $z_{-i}$ in (\[bbari-uncorrelated\]). In Game $i$ (for $1 \leq i \leq n$), the adversary interacts with $Y(x_{-i+\mathcal{S}})$. Then replacing $x$, $z$, $x_{-i}$ and $z_{-i}$ in (\[bbari-uncorrelated\]) by $x_{i+\mathcal{S}}$, $z_{i+\mathcal{S}}$, $x_{-i+\mathcal{S}}$ and $z_{-i+\mathcal{S}}$, respectively, we obtain $$\begin{aligned}
\overline{b}_i[x_{i+\mathcal{S}}|y] & =\frac{{{\mathbb{P}}\left[{Y(x_{-i+\mathcal{S}})=y}\right]}b[x_{i+\mathcal{S}}]}{\sum_{z_{i+\mathcal{S}}}{{\mathbb{P}}\left[{Y(z_{-i+\mathcal{S}})=y}\right]}b[z_{i+\mathcal{S}}]}.
\label{bi}\end{aligned}$$ The beliefs $b[x_{i+\mathcal{S}}]$ and $b[z_{i+\mathcal{S}}]$ in (\[bi\]) are already interpreted as (\[bxiplusS\]) and (\[bziplusS\]). For clarity, we further explain $\overline{b}_i[x_{i+\mathcal{S}}|y]$ in (\[bi\]). Similar to (\[b0xiplusS\]), from (\[xiplusS\]), $\overline{b}_i[x_{i+\mathcal{S}}|y]$ is given by $$\begin{aligned}
&\overline{b}_i[x_{i+\mathcal{S}}|y] \nonumber \\ & = \overline{b}_i[X_i=x_i ,~X_j=x_{j}:j \in \mathcal{S},~X_j:j \in \mathcal{N}\setminus\{i\}\setminus\mathcal{S}|y] \nonumber \\ & = \overline{b}_i[X_i=x_i ,~X_j=x_{j}:j \in \mathcal{S}|y]. \label{bixiplusS}\end{aligned}$$
With the above notation, we define $\epsilon$-Bayesian semantic privacy below, in a way similar to that of $\epsilon$-semantic privacy in Definition \[defSP\].
\[defBSP\] A randomized mechanism $Y$ is said to have $\epsilon$-Bayesian semantic privacy if for all belief distributions $b$ on $\mathcal{D}^n$, for all possible transcripts $y$, for all $i = 1, \ldots, n$, and for all $x_{i+\mathcal{S}}$ and $z_{i+\mathcal{S}}$ defined in (\[xiplusS\]) and (\[ziplusS\]) with $\mathcal{S}\subseteq \mathcal{N}\setminus\{i\}$: $$\begin{aligned}
\textnormal{\texttt{SD}}(\overline{b}_0[x_{i+\mathcal{S}}|y],
\,\overline{b}_i[x_{i+\mathcal{S}}|y]) \leq \epsilon. \label{bbar0i-correlated}\end{aligned}$$
To understand the beliefs $\overline{b}_0[x_{i+\mathcal{S}}|y]$ and $\overline{b}_i[x_{i+\mathcal{S}}|y]$ in (\[bbar0i-correlated\]), we use their interpretations in (\[b0xiplusS\]) and (\[bixiplusS\]). In Definition \[defBSP\] for $\epsilon$-Bayesian semantic privacy, we consider all possible $\mathcal{S}\subseteq \mathcal{N}\setminus\{i\}$. If $\mathcal{S}$ were restricted to the single choice $\mathcal{N}\setminus\{i\}$, Definition \[defBSP\] would reduce to Definition \[defSP\] for $\epsilon$-semantic privacy.
### Recalling Bayesian differential privacy {#recddp}
In this section, we recall Bayesian differential privacy and express its definition using some new notation.
With $x_{i+\mathcal{S}}$ defined in (\[xiplusS\]) (i.e., $x_{i+\mathcal{S}} = [X_i=x_i ,~X_j=x_{j}:j \in \mathcal{S},~X_j:j \in \mathcal{N}\setminus\{i\}\setminus\mathcal{S}]$), for notation convenience, we further define $x_{i+\mathcal{S}}'$ by $$\begin{aligned}
x_{i+\mathcal{S}}' = [X_i=x_i' ,~X_j=x_{j}:j \in \mathcal{S},~X_j:j \in \mathcal{N}\setminus\{i\}\setminus\mathcal{S}]. \label{xiplusSpr}\end{aligned}$$ Note that the only difference between $x_{i+\mathcal{S}}$ and $x_{i+\mathcal{S}}'$ is that the former has $X_i=x_i$, while the latter enforces $X_i=x_i'$. Then $\epsilon$-Bayesian differential privacy means that for all $i$, $\mathcal{S}$, $x_{i+\mathcal{S}}$, $x_{i+\mathcal{S}}'$, and all outputs $y$, $$\begin{aligned}
\frac{{{\mathbb{P}}\left[{Y(x_{i+\mathcal{S}})=y}\right]}}{{{\mathbb{P}}\left[{Y(x_{i+\mathcal{S}}')=y}\right]}}
\leq e^{\epsilon}. \label{ddprecall}\end{aligned}$$
### Proving Theorem \[thmmain\] on the relations between Bayesian differential privacy and Bayesian semantic privacy {#secrel}
Our Theorem \[thmmain\], restated below, presents the relations between Bayesian differential privacy and Bayesian semantic privacy.\
**Theorem \[thmmain\] (Restated). ** [*$\epsilon$-Bayesian differential privacy implies $(e^{2\epsilon}-1)$-Bayesian semantic privacy, and is implied by $\big(\frac{1}{2}-\frac{1}{e^{\epsilon}+1}\big)$-Bayesian semantic privacy.\
*]{}
Theorem \[thmmain\] shows that the notions of Bayesian differential privacy and Bayesian semantic privacy are essentially equivalent (with the parameters set appropriately). The proof of Theorem \[thmmain\] below extends the reasoning of Kasiviswanathan and Smith [@kasiviswanathan2014semantics].\
**Proof of Theorem \[thmmain\]. ** We show Theorem \[thmmain\] in two parts below. We will use the following definition of point-wise $(\epsilon, 0)$-indistinguishability from [@kasiviswanathan2014semantics Definition 3.2]: Two discrete random variables $X$ and $Y$ are point-wise $(\epsilon, 0)$-indistinguishable if for every value $a$ in the support of $X$ or $Y$, it holds that $e^{-\epsilon}{{\mathbb{P}}\left[{Y=a}\right]}\leq {{\mathbb{P}}\left[{X=a}\right]} \leq e^{\epsilon}{{\mathbb{P}}\left[{Y=a}\right]}$.
*Proving $\epsilon$-Bayesian differential privacy $\Longrightarrow$ $(e^{2\epsilon}-1)$-Bayesian semantic privacy: * To prove this part, we consider any database $x \in \mathcal{D}^n$. Let $Y$ be an $\epsilon$-Bayesian differentially private algorithm. Consider any belief distribution $b$. Let the posterior distributions $\overline{b}_0[x_{i+\mathcal{S}}|y]$ and $\overline{b}_i[x_{i+\mathcal{S}}|y]$ for some fixed $i$, $\mathcal{S}$ and $y$ be as defined in (\[bbar0-correlated\]) and (\[bi\]). From (\[ddprecall\]), $\epsilon$-Bayesian differential privacy implies that for every $z_{i+\mathcal{S}}$, $$\begin{aligned}
e^{-\epsilon} {{\mathbb{P}}\left[{Y(z_{-i+\mathcal{S}})=y}\right]}\leq {{\mathbb{P}}\left[{Y(z_{i+\mathcal{S}})=y}\right]} \leq e^{\epsilon} {{\mathbb{P}}\left[{Y(z_{-i+\mathcal{S}})=y}\right]}.\nonumber\end{aligned}$$ These inequalities imply that the ratio of $\overline{b}_0[x_{i+\mathcal{S}}|y]$ and $\overline{b}_i[x_{i+\mathcal{S}}|y]$ (defined in (\[bbar0-correlated\]) and (\[bi\])) is within $e^{\pm2\epsilon}$. Since these inequalities hold for every $x_{i+\mathcal{S}}$, we get: $$\begin{aligned}
e^{-2\epsilon}\overline{b}_i[x_{i+\mathcal{S}}|y]\leq \overline{b}_0[x_{i+\mathcal{S}}|y] \leq e^{2\epsilon}\overline{b}_i[x_{i+\mathcal{S}}|y],
~\forall x_{i+\mathcal{S}}.\nonumber\end{aligned}$$ This implies that the distributions $\overline{b}_0[\cdot|y]$ and $\overline{b}_i[\cdot|y]$ are point-wise $(2\epsilon, 0)$-indistinguishable. Applying [@kasiviswanathan2014semantics Lemma 3.3-Property 5], we obtain $\textrm{\texttt{SD}}(\overline{b}_0[x_{i+\mathcal{S}}|y],
\,\overline{b}_i[x_{i+\mathcal{S}}|y]) \leq (e^{2\epsilon}-1)$. Repeating the above arguments for every belief distribution $b$, every $i$, every $\mathcal{S}$, and every transcript $y$, we conclude that the mechanism $Y$ satisfies $(e^{2\epsilon}-1)$-Bayesian semantic privacy.
*Proving $\big(\frac{1}{2}-\frac{1}{e^{\epsilon}+1}\big)$-Bayesian semantic privacy $\Longrightarrow$ $\epsilon$-Bayesian differential privacy: *To prove this part, we consider a belief distribution $b$ which is uniform over $$x_{i+\mathcal{S}} = [X_i=x_i ,~X_j=x_{j}:j \in \mathcal{S},~X_j:j \in \mathcal{N}\setminus\{i\}\setminus\mathcal{S}]$$ and $$x_{i+\mathcal{S}}' = [X_i=x_i' ,~X_j=x_{j}:j \in \mathcal{S},~X_j:j \in \mathcal{N}\setminus\{i\}\setminus\mathcal{S}];$$ i.e., $$\begin{aligned}
b[x_{i+\mathcal{S}}] & = b[X_i=x_i ,~X_j=x_{j}:j \in \mathcal{S}] = \frac{1}{2}\nonumber\end{aligned}$$ and $$\begin{aligned}
b[x_{i+\mathcal{S}}'] & = b[X_i=x_i' ,~X_j=x_{j}:j \in \mathcal{S}] = \frac{1}{2}.\nonumber\end{aligned}$$ Fix a transcript $y$. The distribution $\overline{b}_i[\cdot|y]$ will be uniform over $x_{i+\mathcal{S}}$ and $x_{i+\mathcal{S}}'$ since they induce the same distribution on transcripts in Game $i$. This means that $\overline{b}_0[\cdot|y]$ will assign probabilities in the interval $[\frac{1}{2}-\big(\frac{1}{2}-\frac{1}{e^{\epsilon}+1}\big),\frac{1}{2}+\big(\frac{1}{2}-\frac{1}{e^{\epsilon}+1}\big)]$ to each of $x_{i+\mathcal{S}}$ and $x_{i+\mathcal{S}}'$ (by Definition \[defBSP\]). Working through Bayes’ rule shows that (note that $b[x_{i+\mathcal{S}}] = b[x_{i+\mathcal{S}}']$) $$\begin{aligned}
&\frac{{{\mathbb{P}}\left[{Y(x_{i+\mathcal{S}})=y}\right]}}{{{\mathbb{P}}\left[{Y(x_{i+\mathcal{S}}')=y}\right]}}
\nonumber \\ &=\frac{\overline{b}_0[x_{i+\mathcal{S}}|y]}{\overline{b}_0[x_{i+\mathcal{S}}'|y]}
\leq \frac{\frac{1}{2}+\big(\frac{1}{2}-\frac{1}{e^{\epsilon}+1}\big)}{\frac{1}{2}-\big(\frac{1}{2}-\frac{1}{e^{\epsilon}+1}\big)} = e^{\epsilon}. \label{boundd}\end{aligned}$$ Since the bound in (\[boundd\]) holds for every $y$, $Y(x_{i+\mathcal{S}})$ and $Y(x_{i+\mathcal{S}}')$ are point-wise $(\epsilon, 0)$-indistinguishable. From [@kasiviswanathan2014semantics Lemma 3.3-Property 5], $Y(x_{i+\mathcal{S}})$ and $Y(x_{i+\mathcal{S}}')$ are $(\epsilon, 0)$-indistinguishable. Since this relation holds for every pair of $x_{i+\mathcal{S}}$ and $x_{i+\mathcal{S}}'$, the mechanism $Y$ is $\epsilon$-Bayesian differentially private.
Relations between Bayesian differential privacy and membership privacy {#sec2}
----------------------------------------------------------------------
The adversary may have prior beliefs about what the dataset is; this is captured by a distribution $\mathcal{D}$. From the adversary’s point of view, the dataset is a random variable drawn according to the distribution $\mathcal{D}$. With $\overline{x_i}$ denoting the event that record $x_i$ is not in the database, Li *et al*. [@Li-CCS2013] define membership privacy as follows, where we reuse some notation of Li *et al*. [@Li-CCS2013].
\[MPdef\] A mechanism ${Y}$ achieves ${\epsilon}$-membership privacy under a family $\mathbb{D}$ of distributions, i.e., $\langle\mathbb{D},\epsilon\rangle$-MP, if and only if for any distribution $\mathcal{D}\in\mathbb{D}$ and for any record $x_i$, any possible set $\mathcal{Y}$ for the output, we have[^3] $$\begin{aligned}
\mathbb{P}_{\mathcal{D},Y}[x_i{\boldsymbol{\mid}}\mathcal{Y}] & \leq e^{\epsilon}\mathbb{P}_{\mathcal{D}}[x_i] \label{MembershipPrivacy1}\end{aligned}$$ and $$\begin{aligned}
\mathbb{P}_{\mathcal{D},Y}[\overline{x_i}{\boldsymbol{\mid}}\mathcal{Y}] & \geq e^{-\epsilon}\mathbb{P}_{\mathcal{D}}[\overline{x_i}]. \label{MembershipPrivacy2}\end{aligned}$$
We discuss the adversary model considered here. Let $\mathcal{D}_{i,K}$ denote a distribution where ${{\mathbb{P}}\left[{x_i,x_K}\right]}=p$ and ${{\mathbb{P}}\left[{x_i',x_K}\right]}=1-p$ for some $p$. Define $\mathbb{D}_*{\stackrel{\text{def}}{=}}\cup_{\begin{subarray}{l}i\in \{1,2,\ldots,n\},\\K\subseteq \{1,2,\ldots,n\}\setminus\{i\}\end{subarray}}\mathcal{D}_{i,K}$. The adversary model will be captured by the family $\mathbb{D}_*$ of distributions. For simplicity, we will refer to $\langle\mathbb{D}_*,\epsilon\rangle$-membership privacy as $\epsilon$-membership privacy.
### From Bayesian differential privacy to membership privacy
\[lemBDPtoMP\] $\epsilon$-Bayesian differential privacy implies $\epsilon$-membership privacy.
\[lemfinegrained\]
A mechanism ${Y}$ achieves ${\epsilon}$-membership privacy under a family $\mathbb{D}$ of distributions, i.e., $\langle\mathbb{D},\epsilon\rangle$-MP, if and only if it holds for any distribution $\mathcal{D}\in\mathbb{D}$, any record $x_i$, and any possible set $\mathcal{Y}$ for the output that[^4] $$\begin{aligned}
\frac{\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i]}{\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]} &\leq \frac{1-\mathbb{P}_{\mathcal{D}}[x_i]}{e^{-\epsilon}-\mathbb{P}_{\mathcal{D}}[x_i]},&~\textrm{if $0 \leq \mathbb{P}_{\mathcal{D}}[x_i] \leq \frac{1}{1+e^{\epsilon}}$,} \label{functionfexp1}\\
\frac{\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i]}{\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]} &\leq \frac{e^{\epsilon}-1+\mathbb{P}_{\mathcal{D}}[x_i]}{\mathbb{P}_{\mathcal{D}}[x_i]},&~\textrm{if $\frac{1}{1+e^{\epsilon}} < \mathbb{P}_{\mathcal{D}}[x_i] \leq 1$.} \label{functionfexp2}\end{aligned}$$
We will explain that Lemma \[lemfinegrained\] implies the following corollary, which will be used to show Theorem \[lemBDPtoMP\].
\[corfinegrained\]
A mechanism ${Y}$ achieves ${\epsilon}$-membership privacy under a family $\mathbb{D}$ of distributions, i.e., $\langle\mathbb{D},\epsilon\rangle$-MP, if it holds for any distribution $\mathcal{D}\in\mathbb{D}$, any record $x_i$, and any possible set $\mathcal{Y}$ for the output that $\frac{\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i]}{\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]} \leq e^{\epsilon}$.
**Proof of Theorem \[lemBDPtoMP\] using Corollary \[corfinegrained\].** Under distribution $\mathcal{D}_{i,K}$ where ${{\mathbb{P}}\left[{x_i,x_K}\right]}=p$ and ${{\mathbb{P}}\left[{x_i',x_K}\right]}=1-p$ for some $p$, we have $$\begin{aligned}
\mathbb{P}_{\mathcal{D}_{i,K},Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i] & = {{\mathbb{P}}\big[{Y(x_i,x_K,X_{\overline{K}})\in \mathcal{Y}}\big]}, \label{eqpdyxi1}\end{aligned}$$ and $$\begin{aligned}
\mathbb{P}_{\mathcal{D}_{i,K},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}] & = {{\mathbb{P}}\big[{Y(x_i',x_K,X_{\overline{K}})\in \mathcal{Y}}\big]}.\label{eqpdyxi2}\end{aligned}$$ Under $\epsilon$-Bayesian differential privacy, we have $\frac{{{\mathbb{P}}\big[{Y(x_i,x_K,X_{\overline{K}})\in \mathcal{Y}}\big]}}{{{\mathbb{P}}\big[{Y(x_i',x_K,X_{\overline{K}})\in \mathcal{Y}}\big]}} \leq e^{\epsilon}$, which along with (\[eqpdyxi1\]) and (\[eqpdyxi2\]) yields $\frac{\mathbb{P}_{\mathcal{D}_{i,K},Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i]}{\mathbb{P}_{\mathcal{D}_{i,K},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]} \leq e^{\epsilon}$. By Corollary \[corfinegrained\], $\langle\mathbb{D}_*,\epsilon\rangle$-membership privacy (i.e., $\epsilon$-membership privacy) then follows for $\mathbb{D}_*{\stackrel{\text{def}}{=}}\cup_{\begin{subarray}{l}i\in \{1,2,\ldots,n\},\\K\subseteq \{1,2,\ldots,n\}\setminus\{i\}\end{subarray}}\mathcal{D}_{i,K}$.
**Proof of Corollary \[corfinegrained\] using Lemma \[lemfinegrained\].** Note that (\[functionfexp1\]) and (\[functionfexp2\]) in Lemma \[lemfinegrained\] can be written as $\frac{\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i]}{\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]} \leq g\big(\mathbb{P}_{\mathcal{D}}[x_i]\big)$, where $g(b)$ is a function defined as follows: $$\begin{aligned}
g(b) & {\stackrel{\text{def}}{=}}\begin{cases}
\frac{1-b}{e^{-\epsilon}-b},&\textrm{if $0 \leq b \leq \frac{1}{1+e^{\epsilon}}$,} \vspace{5pt}\\
\frac{e^{\epsilon}-1+b}{b},&\textrm{if $\frac{1}{1+e^{\epsilon}} < b \leq 1$.} \label{functiongexpr}
\end{cases}\end{aligned}$$ The function $g(b)$ increases as $b$ increases for $0 \leq b \leq \frac{1}{1+e^{\epsilon}}$ and decreases as $b$ increases for $\frac{1}{1+e^{\epsilon}} < b \leq 1$. Hence, at $b = 0$ or $b=1$, $g(b)$ takes its minimum $g(0)=g(1)=e^{\epsilon}$. Then $\frac{\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i]}{\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]} \leq e^{\epsilon}$ implies $\frac{\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i]}{\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]} \leq g\big(\mathbb{P}_{\mathcal{D}}[x_i]\big)$ for any $\mathbb{P}_{\mathcal{D}}[x_i]$. In view of this, we obtain Corollary \[corfinegrained\] from Lemma \[lemfinegrained\].
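The stated shape of $g$ can be confirmed numerically; the sketch below evaluates $g$ on a grid and checks that its minimum $e^{\epsilon}$ is attained at the endpoints $b=0$ and $b=1$:

```python
import math

EPS = 0.7
THRESH = 1.0 / (1.0 + math.exp(EPS))

def g(b):
    """The piecewise bound g(b) used in deriving Corollary [corfinegrained]."""
    if b <= THRESH:
        # denominator is positive here since 1/(1+e^eps) < e^-eps
        return (1.0 - b) / (math.exp(-EPS) - b)
    return (math.exp(EPS) - 1.0 + b) / b

grid = [i / 1000.0 for i in range(0, 1001)]
vals = [g(b) for b in grid]
```

Both branches meet at the threshold $b=\frac{1}{1+e^{\epsilon}}$ with common value $e^{2\epsilon}$, and $g$ never drops below $e^{\epsilon}$; this is why the single condition with the uniform level $e^{\epsilon}$ suffices for the corollary.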
**Proof of Lemma \[lemfinegrained\].** For simplicity, we define $$\begin{aligned}
A &{\stackrel{\text{def}}{=}}\frac{{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}\boldsymbol{\mid}
x_i}\right]}}{{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}\boldsymbol{\mid}
\overline{x_i}}\right]}}. \label{equiva2defmn}\end{aligned}$$
Then the goal of Lemma \[lemfinegrained\] is to show that the combination of (\[MembershipPrivacy1\]) and (\[MembershipPrivacy2\]) is equivalent to $A \leq g\big(\mathbb{P}_{\mathcal{D}}[x_i]\big)$. Hence, we will establish Lemma \[lemfinegrained\] once we prove the following three results: $$\begin{aligned}
(\ref{MembershipPrivacy1})& \Longleftrightarrow \Big\{ 1- \mathbb{P}_{\mathcal{D}}[x_i] \geq A \big(e^{-\epsilon} - \mathbb{P}_{\mathcal{D}}[x_i]\big) \Big\}, \label{equiva1}\\
(\ref{MembershipPrivacy2})& \Longleftrightarrow \Big\{A \times \mathbb{P}_{\mathcal{D}}[x_i] + 1 - \mathbb{P}_{\mathcal{D}}[x_i] \leq e^{\epsilon}\Big\}, \label{equiva2}\end{aligned}$$ and $$\begin{aligned}
&\left.\begin{aligned}
1- \mathbb{P}_{\mathcal{D}}[x_i] \geq A \big(e^{-\epsilon} - \mathbb{P}_{\mathcal{D}}[x_i]\big), \\
\text{and }A \times \mathbb{P}_{\mathcal{D}}[x_i] + 1 - \mathbb{P}_{\mathcal{D}}[x_i] \leq e^{\epsilon}
\end{aligned}\right\rbrace \Longleftrightarrow A \leq g\big(\mathbb{P}_{\mathcal{D}}[x_i]\big).
\label{equiva3}\end{aligned}$$ Below we demonstrate (\[equiva1\]), (\[equiva2\]), and (\[equiva3\]) in turn.
**Proving (\[equiva1\]):**
By Bayes’ theorem, it holds that $$\begin{aligned}
&
{{\mathbb{P}_{\mathcal{D},Y}}\left[{ x_i\boldsymbol{\mid}
\mathcal{Y}}\right]} = \frac{{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}\boldsymbol{\mid}
x_i}\right]}\mathbb{P}_{\mathcal{D}}[x_i]}{{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}}\right]}}.\label{psq1Bayesianprivacya}\end{aligned}$$ Given (\[psq1Bayesianprivacya\]), we have $$\begin{aligned}
&(\ref{MembershipPrivacy1}) \Longleftrightarrow {{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}\boldsymbol{\mid}
x_i}\right]} } \leq e^{\epsilon}\times {{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}}\right]}}
.\label{psq1Bayesianprivacyb}\end{aligned}$$ To prove (\[psq1Bayesianprivacyb\]), we express ${{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}}\right]}}$ by the law of total probability, and find $$\begin{aligned}
&{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y} }\right]}
= {{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}\boldsymbol{\mid}
x_i}\right]}\mathbb{P}_{\mathcal{D}}[x_i] +
{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}\boldsymbol{\mid}
\overline{x_i}}\right]}\mathbb{P}_{\mathcal{D}}[\overline{x_i}] .\label{psq1Bayesianprivacyc}\end{aligned}$$ Applying (\[equiva2defmn\]) to (\[psq1Bayesianprivacyc\]), we obtain $$\begin{aligned}
&{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y} }\right]}= {{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}\boldsymbol{\mid}
x_i}\right]} \times \Big\{\mathbb{P}_{\mathcal{D}}[x_i]
+ A^{-1}\times
\mathbb{P}_{\mathcal{D}}[\overline{x_i}]\Big\} .
\label{psq1Bayesianprivacy3alignz} \end{aligned}$$ Then it follows from (\[psq1Bayesianprivacyb\]) and (\[psq1Bayesianprivacy3alignz\]) that $$\begin{aligned}
(\ref{MembershipPrivacy1}) &\Longleftrightarrow \mathbb{P}_{\mathcal{D}}[x_i]
+ A^{-1}\times
\mathbb{P}_{\mathcal{D}}[\overline{x_i}] \geq e^{-\epsilon}
\nonumber \\ & \Longleftrightarrow
1- \mathbb{P}_{\mathcal{D}}[x_i] \geq A \big(e^{-\epsilon} - \mathbb{P}_{\mathcal{D}}[x_i]\big); \nonumber\end{aligned}$$ i.e., (\[equiva1\]) is established.
**Proving (\[equiva2\]):**
By Bayes’ theorem, it holds that $$\begin{aligned}
&
{{\mathbb{P}_{\mathcal{D},Y}}\left[{ \overline{x_i}\boldsymbol{\mid}
\mathcal{Y}}\right]} = \frac{{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}\boldsymbol{\mid}
\overline{x_i}}\right]}\mathbb{P}_{\mathcal{D}}[\overline{x_i}]}{{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}}\right]}}.\label{part2psq1Bayesianprivacya}\end{aligned}$$ Given (\[part2psq1Bayesianprivacya\]), we have $$\begin{aligned}
(\ref{MembershipPrivacy2}) \Longleftrightarrow&{{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}\boldsymbol{\mid}
\overline{x_i}}\right]} } \geq e^{-\epsilon}\times {{{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}}\right]}}
.\label{part2psq1Bayesianprivacybs}\end{aligned}$$ We recall (\[psq1Bayesianprivacy3alignz\]). Applying (\[equiva2defmn\]) to (\[psq1Bayesianprivacy3alignz\]), we obtain $$\begin{aligned}
& {{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y} }\right]}\nonumber \\ & = A \times {{\mathbb{P}_{\mathcal{D},Y}}\left[{\mathcal{Y}\boldsymbol{\mid}
\overline{x_i}}\right]} \times \Big\{\mathbb{P}_{\mathcal{D}}[x_i]
+ A^{-1}\times
\mathbb{P}_{\mathcal{D}}[\overline{x_i}]\Big\} .
\label{psq1Bayesianprivacy3alignzpt2} \end{aligned}$$ Then it follows from (\[part2psq1Bayesianprivacybs\]) and (\[psq1Bayesianprivacy3alignzpt2\]) that $$\begin{aligned}
(\ref{MembershipPrivacy2}) &\Longleftrightarrow A \times \Big\{\mathbb{P}_{\mathcal{D}}[x_i]
+ A^{-1}\times
{{\mathbb{P}_{\mathcal{D}}}\left[{\overline{x_i}}\right]}\Big\} \leq e^{\epsilon}
\nonumber \\ & \Longleftrightarrow
A \times\mathbb{P}_{\mathcal{D}}[x_i] + 1 - \mathbb{P}_{\mathcal{D}}[x_i] \leq e^{\epsilon} ; \nonumber\end{aligned}$$ i.e., (\[equiva2\]) is established.
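Both Bayes-rule reformulations (\[equiva1\]) and (\[equiva2\]) can be sanity-checked numerically by instantiating the conditional probabilities directly. The snippet below (an illustrative randomized check with arbitrary parameter values, not part of the proof) samples $\mathbb{P}_{\mathcal{D}}[x_i]$, $\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i]$, and $\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]$ and verifies the two equivalences, skipping samples too close to the decision boundary to avoid floating-point ties:

```python
import math
import random

random.seed(0)
eps = 0.4
e_eps, e_neg = math.exp(eps), math.exp(-eps)

for _ in range(10000):
    p = random.uniform(0.01, 0.99)          # P[x_i]
    py_in = random.uniform(0.01, 1.0)       # P[Y | x_i]
    py_out = random.uniform(0.01, 1.0)      # P[Y | not x_i]
    A = py_in / py_out
    py = py_in * p + py_out * (1.0 - p)     # law of total probability
    # (MembershipPrivacy1):  P[x_i | Y] <= e^eps * P[x_i]
    mp1 = (py_in * p / py) <= e_eps * p
    # (MembershipPrivacy2):  P[not x_i | Y] >= e^-eps * P[not x_i]
    mp2 = (py_out * (1.0 - p) / py) >= e_neg * (1.0 - p)
    # right-hand sides of (equiva1) and (equiva2), as signed margins
    m1 = 1.0 - p - A * (e_neg - p)
    m2 = e_eps - (A * p + 1.0 - p)
    if abs(m1) < 1e-9 or abs(m2) < 1e-9:
        continue                            # skip near-boundary float ties
    assert mp1 == (m1 >= 0.0)
    assert mp2 == (m2 >= 0.0)
```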
**Proving (\[equiva3\]):**
With $\mathbb{P}_{\mathcal{D}}[x_i]$ replaced by real $x \in [0,1]$, (\[equiva3\]) will follow once we show for $x \in [0,1]$ that $$\begin{aligned}
\left.\begin{aligned}
1- x \geq A \big(e^{-\epsilon} - x\big), \\
\text{and }A \times x + 1 - x \leq e^{\epsilon}
\end{aligned}\right\rbrace & \Longleftrightarrow A \leq g(x).
\label{equiva3x}\end{aligned}$$
We first prove the “$\Longrightarrow$” part in (\[equiva3x\]). If $0 \leq x < e^{-\epsilon}$, we obtain from $1- x \geq A \big(e^{-\epsilon} - x\big)$ that $A \leq \frac{1-x}{e^{-\epsilon}-x}$. If $0<x \leq 1$, we obtain from $A \times x + 1 - x \leq e^{\epsilon}$ that $A \leq \frac{e^{\epsilon}-1+x}{x}$. With $g_1(x)$ denoting $\frac{1-x}{e^{-\epsilon}-x}$ for $0 \leq x < e^{-\epsilon}$ and $g_2(x)$ denoting $\frac{e^{\epsilon}-1+x}{x}$ for $0<x \leq 1$, we see that $g(x)$ equals $g_1(x)$ if $0 \leq x \leq \frac{1}{1+e^{\epsilon}}$, and equals $g_2(x)$ if $\frac{1}{1+e^{\epsilon}} < x \leq 1$. Given the above, if $0 \leq x \leq \frac{1}{1+e^{\epsilon}}$, we have $A \leq g_1(x) = g(x)$, and if $\frac{1}{1+e^{\epsilon}} < x \leq 1$, we have $A \leq g_2(x) = g(x)$. Hence, the “$\Longrightarrow$” part in (\[equiva3x\]) immediately follows.
We then prove the “$\Longleftarrow$” part in (\[equiva3x\]). For any $x \in [0,1]$, we will establish i) $1- x \geq A \big(e^{-\epsilon} - x\big)$, and ii) $A \times x + 1 - x \leq e^{\epsilon}$, respectively. We still use $g_1(x)$ and $g_2(x)$ defined above. Note that $g_1(x)$ is only defined for $0 \leq x < e^{-\epsilon}$ and $g_2(x)$ is only defined for $0<x \leq 1$. It is straightforward to show $g_1(x)\leq g_2(x)$ if $0 < x \leq \frac{1}{1+e^{\epsilon}}$, and $g_1(x)\geq g_2(x)$ if $\frac{1}{1+e^{\epsilon}} < x < e^{-\epsilon}$.
- If $0 \leq x \leq \frac{1}{1+e^{\epsilon}}$, we obtain from $A \leq g(x) = g_1(x)$ that $A \leq \frac{1-x}{e^{-\epsilon}-x}$, implying $1- x \geq A \big(e^{-\epsilon} - x\big)$. If $\frac{1}{1+e^{\epsilon}} < x < e^{-\epsilon} $, we obtain from $A \leq g(x) = g_2(x)\leq g_1(x)$ that $A \leq \frac{1-x}{e^{-\epsilon}-x}$, yielding $1- x \geq A \big(e^{-\epsilon} - x\big)$. If $e^{-\epsilon} \leq x \leq 1$, it holds that $1- x \geq 0 \geq A \big(e^{-\epsilon} - x\big)$. To summarize, for any $x \in [0,1]$, it follows that $1- x \geq A \big(e^{-\epsilon} - x\big)$.
- If $\frac{1}{1+e^{\epsilon}} < x \leq 1$, we obtain from $A \leq g(x) = g_2(x)$ that $A \leq \frac{e^{\epsilon}-1+x}{x}$, implying $A \times x + 1 - x \leq e^{\epsilon}$. If $0 < x \leq \frac{1}{1+e^{\epsilon}}$, we obtain from $A \leq g(x) = g_1(x)\leq g_2(x)$ that $A \leq \frac{e^{\epsilon}-1+x}{x}$, yielding $A \times x + 1 - x \leq e^{\epsilon}$. If $x=0$, we have $A \times x + 1 - x =1 \leq e^{\epsilon}$. To summarize, for any $x \in [0,1]$, it follows that $A \times x + 1 - x \leq e^{\epsilon}$.
(\[equiva3x\]) is proved since its “$\Longrightarrow$” and “$\Longleftarrow$” both hold.
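The equivalence (\[equiva3x\]) can also be checked by brute force. The following Python snippet (an illustrative numerical check over a grid, not part of the proof; $\epsilon$ and the grid resolution are arbitrary choices) verifies that the two inequalities hold exactly when $A \leq g(x)$, skipping grid points too close to the boundary to avoid floating-point ties:

```python
import math

eps = 0.7
e_eps = math.exp(eps)

def g(x):
    """The function g from the statement of the lemma."""
    if x <= 1.0 / (1.0 + e_eps):
        return (1.0 - x) / (math.exp(-eps) - x)
    return (e_eps - 1.0 + x) / x

def both_inequalities(A, x):
    """The two conditions on the left-hand side of (equiva3x)."""
    return (1.0 - x >= A * (math.exp(-eps) - x)) and (A * x + 1.0 - x <= e_eps)

for i in range(101):
    x = i / 100.0
    for j in range(1, 401):
        A = j / 50.0                  # A sweeps (0, 8]
        if abs(A - g(x)) < 1e-9:
            continue                  # skip the exact boundary
        assert both_inequalities(A, x) == (A < g(x))
```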
### From membership privacy to Bayesian differential privacy
\[lemMPtoBDP\] For a family of distributions that are downward scalable in the sense of Li *et al*. [@Li-CCS2013], $\epsilon$-membership privacy implies $\epsilon$-Bayesian differential privacy.
**Proof of Theorem \[lemMPtoBDP\].** The proof is similar to that of [@Li-CCS2013 Theorem 3.6]; for completeness, we present the details below.
Assume, for the sake of contradiction, that mechanism ${Y}$ achieves ${\epsilon}$-membership privacy yet does not satisfy $\epsilon$-Bayesian differential privacy. Then there exists a distribution $\mathcal{D}$ and entity $x_i$ such that $0 < \mathbb{P}_{\mathcal{D}}[x_i] < 1$ and $\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i]>e^{\epsilon}\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]$. We discuss two cases below.
Case one: $ \mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]= 0$ and $\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i] > 0$. Since $\mathbb{D}$ is downward scalable, by definition $\mathbb{D}$ contains some $\mathcal{D}'$ which is $x_i$-scaled from $\mathcal{D}$ such that $\mathbb{P}_{\mathcal{D}'}[x_i]<e^{-\epsilon}$. From [@Li-CCS2013 Lemma 3.4], we have $\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}] = \mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}] $, which, together with the case condition $ \mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}] = 0$, means $\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}] = 0$, further yielding $\mathbb{P}_{\mathcal{D}',Y}[x_i {\boldsymbol{\mid}}\mathcal{Y}] = 1$. Therefore, $\mathbb{P}_{\mathcal{D}',Y}[x_i {\boldsymbol{\mid}}\mathcal{Y}] = 1>e^{\epsilon}\mathbb{P}_{\mathcal{D}'}[x_i]$, which contradicts the fact that ${Y}$ achieves ${\epsilon}$-membership privacy.
Case two: $\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i]=\alpha\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]$, where $\alpha > e^{\epsilon}$. Since $\mathbb{D}$ is downward scalable, by definition $\mathbb{D}$ contains some $\mathcal{D}'$ which is $x_i$-scaled from $\mathcal{D}$ such that $\mathbb{P}_{\mathcal{D}'}[x_i]=q$ for an arbitrarily small $q$ (see [@Li-CCS2013] for the meaning of “\*-scaled”). From [@Li-CCS2013 Lemma 3.4], we have $\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y}{\boldsymbol{\mid}}{x_i}] = \mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}{x_i}] $ and $\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}] = \mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}] $. These, together with the case condition $\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i]=\alpha\mathbb{P}_{\mathcal{D},Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]$, give $\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y}{\boldsymbol{\mid}}x_i]=\alpha\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]$. Then, under $\mathcal{D}'$, we have $$\begin{aligned}
\frac{\mathbb{P}_{\mathcal{D}',Y}[x_i {\boldsymbol{\mid}}\mathcal{Y}]}{\mathbb{P}_{\mathcal{D}'}[x_i]} &=\frac{\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y} {\boldsymbol{\mid}}x_i]}{\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y}]} \nonumber \\ &=\frac{\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y} {\boldsymbol{\mid}}x_i]}{\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y} {\boldsymbol{\mid}}x_i]\mathbb{P}_{\mathcal{D}'}[x_i]+\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y} {\boldsymbol{\mid}}\overline{x_i}]\mathbb{P}_{\mathcal{D}'}[\overline{x_i}]} \nonumber \\ & =\frac{\alpha\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]}{\alpha\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y}{\boldsymbol{\mid}}\overline{x_i}]\cdot q+\mathbb{P}_{\mathcal{D}',Y}[\mathcal{Y} {\boldsymbol{\mid}}\overline{x_i}]\cdot (1-q)} \nonumber \\ &= \frac{\alpha}{\alpha q + 1-q}.\end{aligned}$$ The above ratio $\frac{\alpha}{\alpha q + 1-q}$ is greater than $e^{\epsilon}$ given $\alpha > e^{\epsilon}$, once we ensure $q<\frac{\alpha - e^{\epsilon}}{e^{\epsilon}(\alpha -1)}$. This will give $\mathbb{P}_{\mathcal{D}',Y}[x_i {\boldsymbol{\mid}}\mathcal{Y}] >e^{\epsilon}\mathbb{P}_{\mathcal{D}'}[x_i]$, which contradicts the fact that ${Y}$ achieves ${\epsilon}$-membership privacy.
Summarizing the above two cases, we have proved the desired result.
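The final step of case two is a small arithmetic fact that can be checked directly: the ratio $\frac{\alpha}{\alpha q + 1 - q}$ exceeds $e^{\epsilon}$ precisely when $q$ is below the stated bound, and equals $e^{\epsilon}$ exactly at the bound. The following Python snippet (illustrative values of $\epsilon$ and $\alpha$, not part of the proof) verifies this:

```python
import math

eps = 0.3
e_eps = math.exp(eps)

for alpha in [e_eps * 1.01, 2.0, 5.0, 50.0]:
    q_bound = (alpha - e_eps) / (e_eps * (alpha - 1.0))
    for frac in [0.1, 0.5, 0.9]:
        q = frac * q_bound
        ratio = alpha / (alpha * q + 1.0 - q)
        # strictly above e^eps whenever q < (alpha - e^eps) / (e^eps (alpha - 1))
        assert ratio > e_eps
    # at q = q_bound the ratio is exactly e^eps
    ratio_at_bound = alpha / (alpha * q_bound + 1.0 - q_bound)
    assert abs(ratio_at_bound - e_eps) < 1e-9
```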
Related Work {#related}
============
The notion of *differential privacy* (DP) [@Dwork2006; @dwork2006calibrating] provides a rigorous foundation for privacy protection. Intuitively, DP implies that changing one entry in the database does not significantly change the query output, so that an adversary, seeing the query output and knowing all records except the one to be inferred, draws almost the same conclusion on whether or not a record is in the database. Differential privacy has received considerable interest in the literature [@wang2017privsuper; @6686180; @zhang2015private; @tramer2015differential; @qin2016heavy; @Jiang:2013:PTD:2484838.2484846; @erlingsson2014rappor; @acs2017differentially; @zhao2017preserving; @mohasselsecureml]. Yang *et al.* [@yang2015bayesian] and Liu *et al.* [@Changchang2016] propose Bayesian differential privacy and dependent differential privacy respectively to generalize differential privacy for correlated data. Kasiviswanathan and Smith [@kasiviswanathan2014semantics] propose a Bayesian formulation of semantic privacy, inspired by the following interpretation of differential privacy explained in [@Dwork2006]: *Regardless of external knowledge, an adversary with access to the sanitized database draws the same conclusions whether or not any individual data is included in the original database*. To present the notion of semantic privacy, Kasiviswanathan and Smith model the external knowledge via a prior probability distribution, and model conclusions via the corresponding posterior distribution. Li *et al.* [@Li-CCS2013] introduce membership privacy (MP) in consideration of the adversary’s prior beliefs.
Dwork and Rothblum [@dwork2016concentrated] recently proposed the notion of *concentrated differential privacy*, a relaxation of differential privacy achieving better accuracy than differential privacy without compromising on cumulative privacy cost over multiple computations. Motivated by [@dwork2016concentrated], Bun and Steinke [@bun2016concentrated] suggest a relaxation of concentrated differential privacy. Instead of treating the privacy loss as a subgaussian random variable as [@dwork2016concentrated] does, Bun and Steinke [@bun2016concentrated] instead formulate the problem in terms of Renyi entropy, giving a relaxation of concentrated differential privacy. Jorgensen *et al.* [@jorgensen2015conservative] introduce a new privacy definition called personalized differential privacy, a generalization of differential privacy in which users specify a personal privacy level for their data. They show that by accepting that not all users demand the same level of privacy, a higher level of utility can often be obtained by not providing excess privacy budget to those who do not need it. They present a mechanism for achieving personalized differential privacy, inspired by the well-known exponential mechanism of differential privacy. Hall *et al.* [@hall2012random] introduce additional randomness to extend differential privacy to the notion of random differential privacy. Compared with differential privacy, Lee and Clifton [@lee2012differential] give an alternate formulation, differential identifiability, parameterized by the probability of individual identification. Their notion provides the strong privacy guarantees of differential privacy, while allowing policy makers to set parameters based on the privacy concept of individual identifiability.
Bohli and Andreas [@bohli2011relations] discuss the relations among several privacy definitions, but the discussion does not cover differential privacy. Li *et al.* [@li2012sampling] present the relation between $k$-anonymization and differential privacy, where the $k$-anonymity notion by [@sweeney2002k; @samarati2001protecting] means that when only quasi-identifiers are considered, each record in a $k$-anonymized dataset should appear at least $k$ times. Wang *et al.* [@wang2016relation] analyze the relation between differential privacy, mutual-information privacy, and identifiability. Mironov [@mironov2009computational] present several relaxations of differential privacy by requiring privacy guarantees to hold only against computationally bounded adversaries. They establish various relations among these notions, and show that the notions exhibit close connection with the theory of pseudodense sets [@reingold2008dense].
Conclusion {#sec:Conclusion}
==========
In this paper, we present a comprehensive view of the relations among different privacy notions: differential privacy (DP), Bayesian differential privacy (BDP), semantic privacy (SP), and membership privacy (MP). In particular, we extend the notion of semantic privacy (SP) to Bayesian semantic privacy (BSP) and prove its essential equivalence with Bayesian differential privacy (BDP) in the quantitative sense. Moreover, we establish the following relations between BDP and MP. First, $\epsilon$-Bayesian differential privacy implies $\epsilon$-membership privacy. Second, for a family of distributions that are downward scalable in the sense of Li *et al*. [@Li-CCS2013], $\epsilon$-membership privacy implies $\epsilon$-Bayesian differential privacy.
C. Dwork and A. Roth, “The algorithmic foundations of differential privacy,” *Foundations and Trends in Theoretical Computer Science*, vol. 9, no. 3–4, pp. 211–407, 2014.
B. Yang, I. Sato, and H. Nakagawa, “[Bayesian]{} differential privacy on correlated data,” in *ACM SIGMOD International Conference on Management of Data (SIGMOD)*, 2015, pp. 747–762.
S. Kasiviswanathan and A. Smith, “On the ‘semantics’ of differential privacy: [A]{} [Bayesian]{} formulation,” *Journal of Privacy and Confidentiality*, vol. 6, no. 1, pp. 1–16, 2014.
N. Li, W. Qardaji, D. Su, Y. Wu, and W. Yang, “Membership privacy: [A]{} unifying framework for privacy definitions,” in *ACM Conference on Computer and Communications Security (CCS)*, 2013, pp. 889–900.
C. Dwork, F. McSherry, K. Nissim, and A. Smith, “Calibrating noise to sensitivity in private data analysis,” in *Theory of Cryptography Conference (TCC)*, 2006, pp. 265–284.
M. E. Andr[é]{}s, N. E. Bordenabe, K. Chatzikokolakis, and C. Palamidessi, “Geo-indistinguishability: Differential privacy for location-based systems,” in *ACM Conference on Computer and Communications Security (CCS)*, 2013, pp. 901–914.
J. Zhang, X. Xiao, and X. Xie, “Priv[T]{}ree: [A]{} differentially private algorithm for hierarchical decompositions,” in *ACM SIGMOD International Conference on Management of Data (SIGMOD)*, 2016, pp. 155–170.
J. Blocki, A. Datta, and J. Bonneau, “Differentially private password frequency lists,” in *Network and Distributed System Security (NDSS) Symposium*, 2016.
Y. Xiao and L. Xiong, “Protecting locations with differential privacy under temporal correlations,” in *ACM Conference on Computer and Communications Security (CCS)*, 2015, pp. 1298–1309.
X. Lou, R. Tan, D. K. Yau, and P. Cheng, “Cost of differential privacy in demand reporting for smart grid economic dispatch,” in *IEEE Conference on Computer Communications (INFOCOM)*, 2017.
R. Shokri and V. Shmatikov, “Privacy-preserving deep learning,” in *ACM Conference on Computer and Communications Security (CCS)*, 2015, pp. 1310–1321.
S. Song and K. Chaudhuri, “Composition properties of inferential privacy for time-series data,” in *Allerton Conference on Communication, Control, and Computing*, October 2017.
S. Song, Y. Wang, and K. Chaudhuri, “Pufferfish privacy mechanisms for correlated data,” in *ACM SIGMOD International Conference on Management of Data (SIGMOD)*, 2017, pp. 1291–1306.
R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership inference attacks against machine learning models,” in *IEEE Symposium on Security and Privacy*, 2017, pp. 3–18.
N. Phan, X. Wu, H. Hu, and D. Dou, “Adaptive [Laplace]{} mechanism: [D]{}ifferential privacy preservation in deep learning,” in *IEEE International Conference on Data Mining series (ICDM)*, 2017.
J. Tang, A. Korolova, X. Bai, X. Wang, and X. Wang, “Privacy loss in [Apple’s]{} implementation of differential privacy on [MacOS]{} 10.12,” *arXiv preprint arXiv:1709.02753*, 2017.
. Erlingsson, V. Pihur, and A. Korolova, “[RAPPOR]{}: Randomized aggregatable privacy-preserving ordinal response,” in *ACM Conference on Computer and Communications Security (CCS)*, 2014, pp. 1054–1067.
N. Wang, X. Xiao, Y. Yang, Z. Zhang, Y. Gu, and G. Yu, “Priv[S]{}uper: [A]{} superset-first approach to frequent itemset mining under differential privacy,” in *IEEE International Conference on Data Engineering (ICDE)*, 2017, pp. 809–820.
R. Bassily, A. Groce, J. Katz, and A. Smith, “Coupled-worlds privacy: [E]{}xploiting adversarial uncertainty in statistical data privacy,” in *IEEE Symposium on Foundations of Computer Science (FOCS)*, 2013, pp. 439–448.
J. Zhang, G. Cormode, C. M. Procopiuc, D. Srivastava, and X. Xiao, “Private release of graph statistics using ladder functions,” in *ACM SIGMOD International Conference on Management of Data (SIGMOD)*, 2015, pp. 731–745.
F. Tram[è]{}r, Z. Huang, J.-P. Hubaux, and E. Ayday, “Differential privacy with bounded priors: Reconciling utility and privacy in genome-wide association studies,” in *ACM Conference on Computer and Communications Security (CCS)*, 2015, pp. 1286–1297.
Z. Qin, Y. Yang, T. Yu, I. Khalil, X. Xiao, and K. Ren, “Heavy hitter estimation over set-valued data with local differential privacy,” in *ACM Conference on Computer and Communications Security (CCS)*, 2016, pp. 192–203.
K. Jiang, D. Shao, S. Bressan, T. Kister, and K.-L. Tan, “Publishing trajectories with differential privacy guarantees,” in *International Conference on Scientific and Statistical Database Management (SSDBM)*, 2013, pp. 12:1–12:12.
G. Acs, L. Melis, C. Castelluccia, and E. De Cristofaro, “Differentially private mixture of generative neural networks,” in *IEEE International Conference on Data Mining series (ICDM)*, 2017.
J. Zhao and J. Zhang, “Preserving privacy enables “coexistence equilibrium” of competitive diffusion in social networks,” *IEEE Transactions on Signal and Information Processing over Networks*, vol. 3, no. 2, pp. 282–297, June 2017.
P. Mohassel and Y. Zhang, “[SecureML: A]{} system for scalable privacy-preserving machine learning,” in *IEEE Symposium on Security and Privacy*, 2017, pp. 19–38.
C. Liu, S. Chakraborty, and P. Mittal, “Dependence makes you vulnerable: [D]{}ifferential privacy under dependent tuples,” in *Network and Distributed System Security (NDSS) Symposium*, 2016.
C. Dwork and G. N. Rothblum, “Concentrated differential privacy,” *arXiv preprint arXiv:1603.01887*, 2016.
M. Bun and T. Steinke, “Concentrated differential privacy: Simplifications, extensions, and lower bounds,” in *Theory of Cryptography Conference*.1em plus 0.5em minus 0.4emSpringer, 2016, pp. 635–658.
Z. Jorgensen, T. Yu, and G. Cormode, “Conservative or liberal? [P]{}ersonalized differential privacy,” in *International Conference on Data Engineering (ICDE)*, 2015, pp. 1023–1034.
R. Hall, A. Rinaldo, and L. Wasserman, “Random differential privacy,” *Journal of Privacy and Confidentiality*, vol. 4, no. 2, pp. 43–59, 2012.
J. Lee and C. Clifton, “Differential identifiability,” in *ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD)*, 2012, pp. 1041–1049.
J.-M. Bohli and A. Pashalidis, “Relations among privacy notions,” *ACM Transactions on Information and System Security (now ACM Transactions on Privacy and Security)*, vol. 14, no. 1, p. 4, 2011.
N. Li, W. Qardaji, and D. Su, “On sampling, anonymization, and differential privacy or, k-anonymization meets differential privacy,” in *ACM Symposium on Information, Computer and Communications Security (AsiaCCS)*, 2012, pp. 32–33.
L. Sweeney, “$k$-[A]{}nonymity: [A]{} model for protecting privacy,” *International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems*, vol. 10, no. 05, pp. 557–570, 2002.
P. Samarati, “Protecting respondents identities in microdata release,” *IEEE Transactions on Knowledge and Data Engineering*, vol. 13, no. 6, pp. 1010–1027, 2001.
W. Wang, L. Ying, and J. Zhang, “On the relation between identifiability, differential privacy, and mutual-information privacy,” *IEEE Transactions on Information Theory*, vol. 62, no. 9, pp. 5018–5029, 2016.
I. Mironov, O. Pandey, O. Reingold, and S. Vadhan, “Computational differential privacy,” in *Advances in Cryptology (CRYPTO)*, 2009, pp. 126–142.
O. Reingold, L. Trevisan, M. Tulsiani, and S. Vadhan, “Dense subsets of pseudorandom sets,” in *IEEE Symposium on Foundations of Computer Science (FOCS)*, 2008, pp. 76–85.
[^1]: $\langle\mathbb{D},\epsilon\rangle$-membership privacy actually corresponds to $\langle\mathbb{D},e^\epsilon\rangle$-positive membership privacy in [@Li-CCS2013]. Li *et al*. [@Li-CCS2013] use $\gamma$ and $\gamma^{-1}$ instead of $e^{\epsilon}$ and $e^{-\epsilon}$ in (\[MembershipPrivacy1\]) and (\[MembershipPrivacy2\]) to define $\langle\mathbb{D},\gamma\rangle$-membership privacy. We use $e^{\epsilon}$ and $e^{-\epsilon}$ here for better comparison between membership privacy and Bayesian differential privacy. Also, by membership privacy, we mean positive membership privacy of [@Li-CCS2013]. We do not discuss negative membership privacy of [@Li-CCS2013].
[^2]: For simplicity, only discrete probability distributions are discussed. The results can be readily extended to the continuous case.
[^3]: $\langle\mathbb{D},\epsilon\rangle$-membership privacy actually corresponds to $\langle\mathbb{D},e^\epsilon\rangle$-membership privacy in [@Li-CCS2013]. Li *et al*. [@Li-CCS2013] use $\gamma$ and $\gamma^{-1}$ instead of $e^{\epsilon}$ and $e^{-\epsilon}$ in (\[MembershipPrivacy1\]) and (\[MembershipPrivacy2\]) to define $\langle\mathbb{D},\gamma\rangle$-membership privacy. We use $e^{\epsilon}$ and $e^{-\epsilon}$ here for better comparison between membership privacy and Bayesian differential privacy.
[^4]: We let $\frac{0}{0}=1$ and $\frac{\textrm{non-zero}}{0}=\infty$ to address the degenerate cases.
---
abstract: 'The d-inverse is a generalized notion of the inverse of a stochastic process having a certain tendency of increasing expectations. The scaling limit of the d-inverse of Brownian motion with functional drift is studied. Except for the degenerate case, the class of possible scaling limits is proved to consist of the d-inverses of three processes: Brownian motion without drift, one with explosion in finite time, and one with power drift.'
---
[**Scaling limit of d-inverse of Brownian motion with functional drift**]{}
Kouji Yano[^1][^2] Katsutoshi Yoshioka[^3]
[Keywords and phrases: d-inverse, domain of attraction, Brownian motion with drift, geometric Brownian motion, option price, Black-Scholes formula]{}\
[AMS 2010 subject classifications: Primary 60F05; secondary 60J65; 60J70. ]{}
Introduction
============
For a (general) stock price $ S=(S_t)_{t \ge 0} $, the European call option price with strike $ K $ and maturity $ t $ is given by $$\begin{aligned}
C(t) := E {\left[ \max {\left\{ S_t-K , 0 \right\}} \right]} .
\label{}\end{aligned}$$ Suppose that the stock price is given as the [*geometric Brownian motion*]{} with volatility $ \sigma > 0 $ and drift $ \mu \in {\ensuremath{\mathbb{R}}}$: $$\begin{aligned}
{{\rm d}}S_t = \sigma S_t {{\rm d}}B_t + \mu S_t {{\rm d}}t
, \quad S_0 = s_0\in (0,\infty) ,
\label{}\end{aligned}$$ where $ B=(B_t)_{t \ge 0} $ denotes a one-dimensional standard Brownian motion. Letting $ {\widetilde}{\mu} = \mu - \sigma^2/2 $, we have an explicit expression of $ S=S^{(\sigma,\mu)} $ as follows: $$\begin{aligned}
S^{(\sigma,\mu)}_t = s_0 \exp {\left( \sigma B_t + {\widetilde}{\mu} t \right)} .
\label{eq: gbm}\end{aligned}$$ If $ {\widetilde}{\mu} = - \sigma^2/2 $, then we may express $ C(t) $ explicitly, in terms of the cumulative distribution function $ {\ensuremath{\mathcal{N}}}(x) = \int_{-\infty }^x {{\rm e}}^{-y^2/2} {{\rm d}}y / \sqrt{2 \pi} $ of the standard Gaussian, as $$\begin{aligned}
C(t) =
s_0 {\ensuremath{\mathcal{N}}}{\left( - \frac{1}{\sigma \sqrt{t}} \log \frac{K}{s_0} + \frac{1}{2} \sigma \sqrt{t} \right)}
-
K {\ensuremath{\mathcal{N}}}{\left( - \frac{1}{\sigma \sqrt{t}} \log \frac{K}{s_0} - \frac{1}{2} \sigma \sqrt{t} \right)} ,
\label{}\end{aligned}$$ which is a special case of the well-known Black–Scholes formula. We may verify, by a direct computation, that $ C(t) $ is increasing in $ t>0 $; see Madan–Roynette–Yor [@MRY-put].
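The displayed formula can be evaluated directly. The following Python sketch (using `math.erf` for the Gaussian CDF; the parameter values $s_0$, $K$, $\sigma$ are illustrative) checks that $C(t)$ is increasing in $t$, and checks the at-the-money simplification $C(t) = s_0\,(2\mathcal{N}(\sigma\sqrt{t}/2)-1)$, which follows by setting $K = s_0$ in the formula:

```python
import math

def Phi(x):
    """Standard Gaussian cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_price(t, s0, K, sigma):
    """C(t) from the displayed formula, i.e., the Black-Scholes price
    for S_t = s0 * exp(sigma * B_t - sigma^2 t / 2)."""
    d = math.log(K / s0) / (sigma * math.sqrt(t))
    half = 0.5 * sigma * math.sqrt(t)
    return s0 * Phi(-d + half) - K * Phi(-d - half)

s0, K, sigma = 100.0, 110.0, 0.2
prices = [call_price(t, s0, K, sigma) for t in [0.25, 0.5, 1.0, 2.0, 4.0]]
# C(t) is increasing in t (Madan-Roynette-Yor)
assert all(a < b for a, b in zip(prices, prices[1:]))

# at the money (K = s0) the formula collapses to s0 * (2 * Phi(sigma*sqrt(t)/2) - 1)
t = 1.0
atm = call_price(t, s0, s0, sigma)
assert abs(atm - s0 * (2.0 * Phi(0.5 * sigma * math.sqrt(t)) - 1.0)) < 1e-12
```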
Note that $ S^{(\sigma,\mu)} $ is a submartingale if and only if $ \mu \ge 0 $. In this case, we can verify, without computing it explicitly, that $ C(t) $ is increasing in $ t > 0 $. (Throughout this paper, “increasing” means non-decreasing.) More generally, for any increasing convex function $ \varphi $, we may apply Jensen’s inequality to see that $$\begin{aligned}
E[\varphi(S_s)] \le E[\varphi(E[S_t|{\ensuremath{\mathcal{F}}}_s])] \le E[\varphi(S_t)]
, \quad 0 < s < t .
\label{}\end{aligned}$$ In this sense, the submartingale property may be considered a tendency of increasing expectations.
To characterize another tendency of increasing expectations, the following notion was introduced by Madan–Roynette–Yor [@MRY-opt] and developed by Profeta–Roynette–Yor [@MR2582990]:
Let $ R=(R_t)_{t \ge 0} $ denote a stochastic process taking values on $ [0,\infty ) $ defined on a measurable space equipped with a family of probability measures $ (P_x)_{x \ge 0} $. Suppose that $ R $ is a.s. continuous and such that $ P_x(R_0=x)=1 $ for all $ x \ge 0 $.
1. $ R $ is said [*to admit an increasing pseudo-inverse*]{} if $ P_x(R_t \ge y) $ is increasing in $ t \ge 0 $ for all $ y>x $ and if $ P_x(R_t \ge y) \to 1 $ as $ t \to \infty $ for all $ y>x $.
2. A family of random variables $ (Y_{x,y})_{y>x} $ defined on a probability space $ (\Omega,{\ensuremath{\mathcal{F}}},P) $ is called [*pseudo-inverse*]{} of $ R $ if for any $ y>x $ it holds that $$\begin{aligned}
P_x(R_t \ge y) = P(Y_{x,y} \le t) .
\label{eq: pseudo-inverse definition}\end{aligned}$$
We would like here to introduce the following alternative notion, which is a slight modification of the pseudo-inverse:
Let $ x_0 \in {\ensuremath{\mathbb{R}}}$. Let $ X=(X_t)_{t \ge 0} $ be a stochastic process taking values in $ [-\infty ,\infty ] $.
1. $ X $ is called [*d-increasing*]{} on $ [x_0,\infty ) $ if $ P(X_t \ge x) $ is increasing in $ t \in (0,\infty ) $ for all $ x \in [x_0,\infty ) $.
2. A family of random variables $ (Y_x)_{x \ge x_0} $ is called [*d-inverse*]{} of $ X $ on $ [x_0,\infty ) $ if the following assertions hold: for any $ x \in [x_0,\infty ) $, the $ Y_x $ is a random variable taking values in $ [0,\infty ] $; for any $ x \in [x_0,\infty ) $ and for a.e. $ t \in (0,\infty ) $, it holds that $$\begin{aligned}
P(X_t \ge x) = P(Y_x \le t) .
\label{eq: d-inverse definition}\end{aligned}$$
We note that $ X $ is d-increasing on $ [x_0,\infty ) $ if and only if $ X $ admits some d-inverse $ (Y_x)_{x \ge x_0} $. We also note that if $ P(X_t \ge x) $ is right-continuous in $ t \in (0,\infty ) $, then the identity (\[eq: d-inverse definition\]) holds for all $ t \in (0,\infty ) $.
If $ t \mapsto X_t $ is a.s. increasing, then $ X $ is d-increasing and its d-inverse is given by its inverse in the usual sense. The d-inverse may be a generalized notion of inverse in the sense of probability distribution.
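For a concrete illustration of d-increase, consider the special case of constant drift $ \rho(t) = \mu t $ with $ \mu > 0 $. Since $ B_t \sim N(0,t) $, we have $ P(B_t + \mu t \ge x) = \mathcal{N}\big((\mu t - x)/\sqrt{t}\big) $, and this tail probability is increasing in $ t $ for $ x \ge 0 $. The following Python snippet (illustrative parameter values; a sanity check, not a proof) verifies this on a grid:

```python
import math

def Phi(x):
    """Standard Gaussian cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tail(t, x, mu):
    """P(B_t + mu*t >= x) for a standard Brownian motion B."""
    return Phi((mu * t - x) / math.sqrt(t))

mu = 1.0
for x in [0.0, 0.5, 2.0]:
    probs = [tail(0.1 * k, x, mu) for k in range(1, 101)]
    # the tail probability increases in t, so B_t + mu*t is d-increasing on [0, inf)
    assert all(a <= b for a, b in zip(probs, probs[1:]))
    # and it tends to 1 as t -> infinity
    assert tail(1e6, x, mu) > 0.999
```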
Let $ S $ be a stochastic process such that $ P(S_t \ge x) $ is right-continuous in $ t \in (0,\infty ) $. We note that $ S $ is d-increasing on $ [x_0,\infty ) $ if and only if $ E[\varphi(S_t)] $ is increasing in $ t>0 $ for every increasing (possibly non-convex) function $ \varphi $ whose support is contained in $ [x_0,\infty ) $ and such that $ E[\varphi(S_t)] < \infty $ for all $ t>0 $. In fact, for the sufficiency, it holds that $$\begin{aligned}
E[\varphi(S_t)]
= \varphi(x_0)P(S_t \ge x_0) + \int_{x_0}^{\infty } P(S_t \ge x) {{\rm d}}\varphi(x)
, \quad t>0 ,
\label{}\end{aligned}$$ which shows that $ E[\varphi(S_t)] $ is increasing in $ t>0 $; the necessity is obvious since $$\begin{aligned}
E[1_{[x,\infty )}(S_t)] = P(S_t \ge x) .
\label{}\end{aligned}$$ In particular, if $ S $ is a non-negative process such that $ P(S_t \ge x) $ is right-continuous in $ t \in (0,\infty ) $, then the condition that $ S $ is d-increasing on $ [0,\infty ) $ is stronger than the condition that $ S $ has the same one-dimensional marginals as a submartingale; see Remark \[rem: one-submart\].
In this paper, we confine ourselves to the class of processes of the form $$\begin{aligned}
B^{(\rho)}_t = B_t + \rho(t)
\label{}\end{aligned}$$ for some increasing function $ \rho(t) $. We may call $ B^{(\rho)} $ [*Brownian motion with functional drift*]{}. This process appears in [*geometric Brownian motion with functional coefficients*]{} as follows. Let $ \sigma(t) $ and $ \mu(t) $ be positive functions on $ [0,\infty ) $ and define $$\begin{aligned}
{{\rm d}}S_t = \sigma(t) S_t {{\rm d}}B_t + \mu(t) S_t {{\rm d}}t
, \quad S_0 = s_0 > 0.
\label{}\end{aligned}$$ The resulting process $ S=S^{(\sigma,\mu)} $ is given in the explicit form as $$\begin{aligned}
S^{(\sigma,\mu)}_t = s_0 \exp {\left( \int_0^t \sigma(s) {{\rm d}}B_s + \int_0^t {\widetilde}{\mu}(s) {{\rm d}}s \right)} ,
\label{}\end{aligned}$$ where $ {\widetilde}{\mu}(t) = \mu(t) - \sigma^2(t)/2 $. If we set $ a(t) = \int_0^t \sigma(u)^2 {{\rm d}}u $, $ b(t) = \int_0^t {\widetilde}{\mu}(u) {{\rm d}}u $ and set $ \rho(t) = b(a^{-1}(t)) $, then we obtain $$\begin{aligned}
S^{(\sigma,\mu)}_{a^{-1}(t)} = s_0 \exp {\left( \beta_t + \rho(t) \right)} ,
\label{}\end{aligned}$$ where $ \beta = (\beta_t)_{t \ge 0} $ denotes a new Brownian motion.
The aim of this paper is to study scaling limits of the d-inverse on $ [0,\infty ) $ of $ B^{(\rho)} $ for positive drift $ \rho $. By a [*scaling limit*]{} of the d-inverse $ Y^{(\rho)} = (Y^{(\rho)}_x)_{x \ge 0} $ of $ B^{(\rho)} $ we mean a process $ Z=(Z_x)_{x \ge 0} $ such that $$\begin{aligned}
\frac{1}{\lambda} Y^{(\phi_1(\lambda) \rho)}_{\phi_2(\lambda) x}
{\mathrel{\mathop{\longrightarrow}\limits^{\rm d}_{\lambda \to 0+}}} Z_x
\quad \text{for all $ x \in [0,\infty) $}
\label{}\end{aligned}$$ for some scaling functions $ \phi_1 $ and $ \phi_2 $. We assume that the ratio $ \phi_2(\lambda) / \sqrt{\lambda} $ converges to a constant as $ \lambda \to 0+ $. We shall prove that the class of possible scaling limits consists, except for the degenerate case, of the d-inverses of the following processes:
1. Brownian motion without drift $ B_t $;
2. [*Brownian motion with explosion in finite time*]{}: $ B_t + \infty 1_{\{ t \ge t_0 \}} $, with $ t_0 \in (0,\infty ) $;
3. [*Brownian motion with power drift*]{}: $ B_t + c t^{\alpha } $, with $ c \in (0,\infty) $ and $ \alpha \ge 1/2 $.
Cases (i) and (ii) can be obtained from (iii) by taking limits; in fact, Case (i) can be obtained from (ii) as $ t_0 \to \infty $ and Case (ii) can be obtained from (iii) by setting $ c=t_0^{-\alpha } $ and letting $ \alpha \to \infty $.
Here we make several remarks.
Monotonicity of more general option prices for more general stock processes has been studied by Hobson ([@MR2563207], [@MR1620358]), Henderson–Hobson ([@MR1790134], [@MR1978894]), and Kijima [@MR1926239].
Let $ X^{(1)} $ and $ X^{(2)} $ be two random variables taking values in $ [-\infty ,\infty ] $ and let $ x_0 \in {\ensuremath{\mathbb{R}}}$. We write $$\begin{aligned}
X^{(1)} \le_{\rm st} X^{(2)}
\quad \text{on $ [x_0,\infty ) $}
\label{}\end{aligned}$$ if $$\begin{aligned}
P(X^{(1)} \ge x) \le P(X^{(2)} \ge x)
\quad \text{for all $ x \in [x_0,\infty ) $}.
\label{}\end{aligned}$$ The relation $ \le_{\rm st} $ on $ [x_0,\infty ) $ is a partial order on the class of random variables. It may be called [*usual stochastic order on $ [x_0,\infty ) $*]{} (see also Shaked–Shanthikumar [@MR2265633]). We point out that a process $ (X_t)_{t \ge 0} $ is d-increasing on $ [x_0,\infty ) $ if and only if $ t \mapsto X_t $ is increasing with respect to $ \le_{\rm st} $ on $ [x_0,\infty ) $.
\[rem: one-submart\] Let $ X^{(1)} $ and $ X^{(2)} $ be two random variables taking values in $ {\ensuremath{\mathbb{R}}}$. We write $$\begin{aligned}
X^{(1)} \le_{\rm icx} X^{(2)}
\label{}\end{aligned}$$ if $$\begin{aligned}
E[\varphi(X^{(1)})] \le E[\varphi(X^{(2)})]
\quad
\text{for all increasing convex function $ \varphi $}.
\label{}\end{aligned}$$ The relation $ \le_{\rm icx} $ is a partial order on the class of random variables, so that it is called [*increasing convex order*]{} (see Shaked–Shanthikumar [@MR2265633]). It is known (Kellerer [@MR0356250]) that a process $ (S_t)_{t \ge 0} $ is increasing in increasing convex order if and only if $ (S_t)_{t \ge 0} $ has the same one-dimensional marginals as a submartingale. Interested readers are referred to Rothschild–Stiglitz ([@MR0503565],[@MR0503567]), Baker–Yor [@MR2519530], and also Hirsch–Yor [@MR2571849].
Profeta–Roynette–Yor [@MR2582990] proved that a Bessel process admits pseudo-inverse if and only if the dimension is greater than one, and investigated several remarkable properties of its pseudo-inverse. See also Yen–Yor [@YY] for another related study of Bessel process.
This paper is organized as follows. In Section \[sec: dis\], we discuss d-inverses of several classes of processes and study scaling limit theorems of d-inverses. In Section \[sec: sca\], we study the inverse problem of scaling limits of d-inverses.
Discussions on d-increasing processes {#sec: dis}
=====================================
For two random variables $ X $ and $ Y $, we write $ X {\stackrel{{\rm d}}{=}}Y $ if $ P(X \le x) = P(Y \le x) $ for all $ x \in {\ensuremath{\mathbb{R}}}$. For a family of random variables $ (X^{(a)})_{a \in I} $ indexed by an interval $ I $ of $ {\ensuremath{\mathbb{R}}}$, we write $ X^{(a)} {\stackrel{{\rm d}}{\longrightarrow}}X $ as $ a \to b \in I $ for a random variable $ X $ if $ P(X^{(a)} \le x) \to P(X \le x) $ as $ a \to b $ for all $ x \in {\ensuremath{\mathbb{R}}}$ such that $ P(X=x)=0 $.
Transformations by increasing functions
---------------------------------------
For an increasing function $ f:I \to[-\infty ,\infty ] $ defined on a subinterval $ I $ of $ {\ensuremath{\mathbb{R}}}$, we denote its left-continuous inverse by $ f^{-1}:{\ensuremath{\mathbb{R}}}\to [-\infty ,\infty ] $, i.e.: $$\begin{aligned}
f^{-1}(y)
=& \inf \{ x \in I : f(x) \ge y \}
\label{} \\
=& \sup \{ x \in I : f(x) < y \} ,
\label{}\end{aligned}$$ where we adopt the usual convention that $ \inf \emptyset = \sup I $ and $ \sup \emptyset = \inf I $. By definition, we see that $$\begin{aligned}
f(x) \ge y \ \text{implies} \ x \ge f^{-1}(y) ,
\label{} \\
f(x) < y \ \text{implies} \ x \le f^{-1}(y) .
\label{}\end{aligned}$$
As a general remark, we give the following theorem.
\[thm1\] Let $ X=(X_t)_{t \ge 0} $ be a stochastic process such that $ X_t \in [x_0,\infty ) $ almost surely for all $ t \ge 0 $. Let $ f:[x_0,\infty ) \to {\ensuremath{\mathbb{R}}}$ and $ g:[0,\infty ) \to [0,\infty ) $ be continuous increasing functions. Suppose that $ X $ admits a d-inverse $ (Y_x)_{x \ge x_0} $. Then $ {\widehat}{X} = ({\widehat}{X}_t)_{t \ge 0} $ defined by $$\begin{aligned}
{\widehat}{X}_t = f {\left( X_{g(t)} \right)}
, \quad t \ge 0
\label{}\end{aligned}$$ admits a d-inverse $ {\left( g^{-1} {\left( Y_{f^{-1}(y)} \right)} \right)}_{y \ge f(x_0)} $.
Since $ f $ is continuous and increasing, we see that $ f(f^{-1}(y))=y $, and hence that $ f(x) \ge y $ if and only if $ x \ge f^{-1}(y) $. This proves that $$\begin{aligned}
P(f(X_{g(t)}) \ge y)
=& P(X_{g(t)} \ge f^{-1}(y))
\label{} \\
=& P(Y_{f^{-1}(y)} \le g(t))
\label{} \\
=& P {\left( g^{-1}(Y_{f^{-1}(y)}) \le t \right)} .
\label{}\end{aligned}$$ The proof is complete.
Brownian motion with functional drift
-------------------------------------
\[thm2\] Let $ \rho:[0,\infty ) \to {\ensuremath{\mathbb{R}}}$ be a right-continuous function. Then the process $ B^{(\rho)}_t = B_t + \rho(t) $ is d-increasing on $ [0,\infty ) $ if and only if the following condition is satisfied: $$\begin{aligned}
\text{\bf (A)} \quad
\frac{\rho(t)}{\sqrt{t}} \ \text{is increasing in $ t>0 $}.
\label{}\end{aligned}$$ In this case, the d-inverse $ (Y^{(\rho)}_x)_{x \ge 0} $ is given by $$\begin{aligned}
Y^{(\rho)}_x {\stackrel{{\rm d}}{=}}\eta_x^{-1}(B_1)
\quad \text{for all $ x \ge 0 $},
\label{}\end{aligned}$$ where $ \eta:(0,\infty ) \to {\ensuremath{\mathbb{R}}}$ is the increasing function defined by $$\begin{aligned}
\eta_x(t) = \frac{ \rho(t)-x }{\sqrt{t}}
, \quad t > 0 .
\label{eq: Fx}\end{aligned}$$
Since $ B_t {\stackrel{{\rm d}}{=}}- \sqrt{t} B_1 $, we have $$\begin{aligned}
P {\left( B^{(\rho)}_t \ge x \right)}
= P {\left( B_1 \le \eta_x(t) \right)} ,
\label{}\end{aligned}$$ where $ \eta_x $ is the function defined in the statement of the theorem. Now $ B^{(\rho)} $ is d-increasing if and only if $ \eta_x(t) $ is increasing in $ t>0 $ for all $ x \ge 0 $, which is equivalent to the condition [**(A)**]{}.
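For a concrete sanity check (ours), condition [**(A)**]{} can be tested numerically for the power drifts $ \rho(t)=t^{\alpha} $, where $ \rho(t)/\sqrt{t}=t^{\alpha -1/2} $ is increasing precisely when $ \alpha \ge 1/2 $:

```python
import numpy as np

# Sanity check of condition (A) for the power drifts rho(t) = t**alpha:
# rho(t)/sqrt(t) = t**(alpha - 1/2) is increasing iff alpha >= 1/2.
t = np.linspace(0.01, 10.0, 1000)
for alpha, is_d_increasing in [(0.3, False), (0.5, True), (2.0, True)]:
    ratio = t**alpha / np.sqrt(t)
    assert bool(np.all(np.diff(ratio) >= -1e-12)) == is_d_increasing
```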
In the remainder of this section, we discuss several particular classes of Brownian motion with functional drift.
Brownian motion with explosion
------------------------------
Using $ B_t {\stackrel{{\rm d}}{=}}\sqrt{t} B_1 $, we obtain the following: The Brownian motion without drift, $ B=B^{(0)} $, admits a d-inverse $ Y^{(0)}=(Y_x^{(0)})_{x \ge 0} $. In fact, we have $$\begin{aligned}
Y^{(0)}_x {\stackrel{{\rm d}}{=}}{\left( \frac{x}{B_1} \right)}^2 1_{\{ B_1>0 \}}
+ \infty 1_{\{ B_1 \le 0 \}}
, \quad x \ge 0 .
\label{}\end{aligned}$$
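The explicit formula for $ Y^{(0)}_x $ can be checked by simulation; the following Monte Carlo sketch (ours, not part of the original argument) compares the empirical distribution function of $ Y^{(0)}_x $ with $ P(B_t \ge x) $ at a sample point:

```python
import math
import numpy as np

# Monte Carlo sanity check that Y^(0)_x = (x/B_1)^2 on {B_1 > 0},
# +infinity otherwise, satisfies P(Y^(0)_x <= t) = P(B_t >= x).
rng = np.random.default_rng(0)
x, t = 1.3, 2.0
z = rng.standard_normal(400_000)         # samples of B_1

y = np.full_like(z, np.inf)
pos = z > 0
y[pos] = (x / z[pos]) ** 2

empirical = np.mean(y <= t)              # P(Y^(0)_x <= t), empirical
# P(B_t >= x) = P(B_1 >= x/sqrt(t)) = erfc(x / sqrt(2 t)) / 2
exact = 0.5 * math.erfc(x / math.sqrt(2 * t))
assert abs(empirical - exact) < 0.005
```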
For a constant $ t_0 \in (0,\infty ) $, the process $ X=(X_t)_{t \ge 0} $ taking values in $ (-\infty ,\infty ] $ defined by $$\begin{aligned}
X_t = B_t + \infty 1_{\{ t \ge t_0 \}}
, \quad t \ge 0
\label{}\end{aligned}$$ is called [*Brownian motion with explosion in finite time*]{}. It admits a d-inverse $ Y=(Y_x)_{x \ge 0} $ given by $$\begin{aligned}
Y_x {\stackrel{{\rm d}}{=}}\min {\left\{ Y^{(0)}_x , t_0 \right\}}
, \quad x \ge 0 .
\label{}\end{aligned}$$
Let $ \rho:[0,\infty ) \to (0,\infty ) $ be a right-continuous function satisfying the condition [**(A)**]{}. Let $ \phi_1,\phi_2:[0,\infty ) \to [0,\infty ) $ be two functions. Suppose that there exist constants $ t_0 \in (0,\infty ] $ and $ p \in [0,\infty) $ such that $$\begin{aligned}
\text{\bf (B)} \quad
\begin{cases}
\displaystyle \Bigg.
\frac{\phi_1(\lambda) \rho(\lambda t)}{\sqrt{\lambda t}}
{\mathrel{\mathop{\longrightarrow}\limits^{}_{\lambda \to 0+}}}
\begin{cases}
0 & \text{if $ 0<t<t_0 $}, \\
\infty & \text{if $ t>t_0 $},
\end{cases}
\\
\displaystyle \Bigg.
\frac{\phi_2(\lambda)}{\sqrt{\lambda }}
{\mathrel{\mathop{\longrightarrow}\limits^{}_{\lambda \to 0+}}} p .
\end{cases}
\label{}\end{aligned}$$ Then, for any $ x \ge 0 $, it holds that $$\begin{aligned}
\frac{1}{\lambda} Y^{(\phi_1(\lambda) \rho)}_{\phi_2(\lambda) x}
{\stackrel{{\rm d}}{\longrightarrow}}\min {\left\{ Y^{(0)}_{px} , t_0 \right\}}
\quad \text{as $ \lambda \to 0+ $}.
\label{eq: sc lim2}\end{aligned}$$ In particular, for any $ \lambda > 0 $, it holds that $$\begin{aligned}
\frac{1}{\lambda} \min {\left\{ Y^{(0)}_{\sqrt{\lambda} x} , t_0 \right\}}
{\stackrel{{\rm d}}{=}}\min {\left\{ Y^{(0)}_x , t_0 \right\}} .
\label{eq: sc inv2}\end{aligned}$$
Since $ B_{\lambda t} {\stackrel{{\rm d}}{=}}\sqrt{\lambda} B_t $, we have $$\begin{aligned}
P {\left( \frac{1}{\lambda}
Y^{(\phi_1(\lambda) \rho)}_{\phi_2(\lambda) x} \le t \right)}
=&
P {\left( B_{\lambda t} + \phi_1(\lambda) \rho(\lambda t) \ge \phi_2(\lambda) x \right)}
\label{} \\
=&
P {\left( B_t + \frac{\phi_1(\lambda) \rho(\lambda t)}{\sqrt{\lambda}}
\ge \frac{\phi_2(\lambda)}{\sqrt{\lambda}} x \right)} .
\label{}\end{aligned}$$ The last quantity converges as $ \lambda \to 0+ $ to $ P(B_t \ge px) $ if $ t<t_0 $ and to $ 1 $ if $ t > t_0 $. Since we have $$\begin{aligned}
P {\left( \min {\left\{ Y^{(0)}_{px} , t_0 \right\}} \le t \right)}
=
\begin{cases}
P(B_t \ge px) & \text{if $ t<t_0 $}, \\
1 & \text{if $ t \ge t_0 $},
\end{cases}
\label{}\end{aligned}$$ we obtain the claimed convergence. The scale invariance property is obvious. The proof is now complete.
Brownian motion with constant drift
-----------------------------------
By Theorem \[thm2\], we see that the Brownian motion with constant drift $ B^{(c \cdot)} = (B_t + ct)_{t \ge 0} $ admits a d-inverse $ Y^{(c \cdot)}=(Y_x^{(c \cdot)})_{x \ge 0} $ if and only if $ c \in [0,\infty) $. If $ c \in (0,\infty) $, i.e., except for the Brownian case, we obtain, for $ x \ge 0 $, $$\begin{aligned}
Y_x^{(c \cdot)} {\stackrel{{\rm d}}{=}}{\left( \frac{B_1 + \sqrt{B_1^2 + 4cx}}{2c} \right)}^2 .
\label{}\end{aligned}$$ We remark that, for any $ x \ge 0 $, $$\begin{aligned}
Y_x^{(c \cdot)} {\stackrel{{\rm d}}{\longrightarrow}}Y_x^{(0)}
\quad \text{as $ c \to 0+ $}.
\label{}\end{aligned}$$ We also remark the following: Using $ B_t {\stackrel{{\rm d}}{=}}- t B_{1/t} $, we can easily see that $$\begin{aligned}
Y^{(c \cdot)}_x {\stackrel{{\rm d}}{=}}\frac{1}{Y^{(x \cdot)}_c}
\quad \text{for all $ c \ge 0 $ and $ x \ge 0 $}.
\label{}\end{aligned}$$
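A direct pointwise check (ours) that the explicit constant-drift formula is consistent with Theorem \[thm2\]: the map $ z \mapsto ((z+\sqrt{z^2+4cx})/(2c))^2 $ inverts $ \eta_x(t)=(ct-x)/\sqrt{t} $.

```python
import math

# Check that the constant-drift d-inverse formula inverts
# eta_x(t) = (c*t - x)/sqrt(t) pointwise; c and x are sample values.
c, x = 0.7, 2.0

def Y(z):
    return ((z + math.sqrt(z * z + 4 * c * x)) / (2 * c)) ** 2

for z in [-2.0, -0.5, 0.0, 0.3, 1.7]:
    t = Y(z)
    assert abs((c * t - x) / math.sqrt(t) - z) < 1e-9
```

In particular $ Y(0) = x/c $, which matches $ \eta_x(t)=0 \iff \rho(t)=x $.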
The scaling property of Brownian motion with constant drift will be discussed in the next section in a more general setting.
The geometric Brownian motion $ S=S^{(\sigma,\mu)} $ with constant volatility $ \sigma > 0 $ and drift $ \mu \in {\ensuremath{\mathbb{R}}}$ defined above may be represented as $ S^{(\sigma,\mu)}_t = f(B^{(({\widetilde}{\mu}/\sigma) t)}_t) $ where $ f(x) = s_0 \exp(\sigma x) $. Hence we may apply Theorem \[thm1\] and obtain the following: $ S^{(\sigma,\mu)} $ admits a d-inverse $ (T^{(\sigma,\mu)}_s)_{s \ge s_0} $ if and only if $ {\widetilde}{\mu} = \mu - \sigma^2/2 \ge 0 $. In this case, we have $$\begin{aligned}
T^{(\sigma,\mu)}_s
{\stackrel{{\rm d}}{=}}Y^{(({\widetilde}{\mu}/\sigma) \cdot)}_{f^{-1}(s)}
\quad \text{for all $ s \ge s_0 $}.
\label{}\end{aligned}$$
Brownian motion with power drift
--------------------------------
For $ \alpha \in [0,\infty) $ and $ c \in [0,\infty ) $, we define $$\begin{aligned}
R^{(c,\alpha )}_t = B_t + ct^{\alpha }
, \quad t \ge 0
\label{}\end{aligned}$$ and we call $ R^{(c,\alpha )} = (R^{(c,\alpha )}_t)_{t \ge 0} $ a [*Brownian motion with power drift*]{}. By Theorem \[thm2\], we see that $ R^{(c,\alpha )} $ admits a d-inverse $ (Z^{(c,\alpha )}_x)_{x \ge 0} $ if and only if $ \alpha \ge 1/2 $.
The following theorem tells us that the d-inverses of Brownian motions with power drift arise as scaling limits and, consequently, satisfy a scale invariance property.
\[thm3\] Let $ \rho:[0,\infty ) \to (0,\infty ) $ be a right-continuous function satisfying the condition [**(A)**]{}. Let $ \phi_1,\phi_2:[0,\infty ) \to [0,\infty ) $ be two functions. Suppose there exist $ \alpha \ge 1/2 $, $ c\in (0,\infty) $ and $ p \in [0,\infty) $ such that $$\begin{aligned}
\text{\bf (RV)} \quad
\begin{cases}
\displaystyle \Bigg.
\frac{\rho(\lambda t)}{\rho(\lambda)} {\mathrel{\mathop{\longrightarrow}\limits^{}_{\lambda \to 0+}}} t^{\alpha } , \\
\displaystyle \Bigg.
\frac{\rho(\lambda)}{\sqrt{\lambda}} \phi_1(\lambda) {\mathrel{\mathop{\longrightarrow}\limits^{}_{\lambda \to 0+}}} c , \\
\displaystyle \Bigg.
\frac{1}{\sqrt{\lambda}} \phi_2(\lambda) {\mathrel{\mathop{\longrightarrow}\limits^{}_{\lambda \to 0+}}} p .
\end{cases}
\label{eq: rv}\end{aligned}$$ Then, for any $ x \ge 0 $, it holds that $$\begin{aligned}
\frac{1}{\lambda} Y^{(\phi_1(\lambda) \rho)}_{\phi_2(\lambda) x}
{\stackrel{{\rm d}}{\longrightarrow}}Z^{(c,\alpha )}_{px}
\quad \text{as $ \lambda \to 0+ $}.
\label{eq: sc lim}\end{aligned}$$ In particular, for any $ \lambda > 0 $, it holds that $$\begin{aligned}
\frac{1}{\lambda} Z^{{\left( c \lambda^{(1/2)-\alpha },\alpha \right)}}_{\sqrt{\lambda} x}
{\stackrel{{\rm d}}{=}}Z^{(c,\alpha )}_x .
\label{eq: sc inv}\end{aligned}$$
The condition [**(RV)**]{} asserts that the functions $ \rho $, $ \phi_1 $ and $ \phi_2 $ (if $ p\in (0,\infty) $) are regularly varying at $ 0+ $ of index $ \alpha $, $ (1/2)-\alpha $, and $ 1/2 $, respectively.
Since $ B_{\lambda t} {\stackrel{{\rm d}}{=}}\sqrt{\lambda} B_t $, we have $$\begin{aligned}
P {\left( \frac{1}{\lambda}
Y^{(\phi_1(\lambda) \rho)}_{\phi_2(\lambda) x} \le t \right)}
=&
P {\left( B_t + \frac{\rho(\lambda)}{\sqrt{\lambda}} \phi_1(\lambda)
\cdot \frac{\rho(\lambda t)}{\rho(\lambda)} \ge \frac{\phi_2(\lambda)}{\sqrt{\lambda}} x \right)}
\label{} \\
{\mathrel{\mathop{\longrightarrow}\limits^{}_{\lambda \to 0+}}}&
P(B_t + c t^{\alpha } \ge px)
\label{} \\
=& P(Z^{(c,\alpha )}_{px} \le t) .
\label{}\end{aligned}$$ Now we have obtained the claimed convergence. The scale invariance property is obvious. The proof is complete.
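The scale invariance can also be verified deterministically (our check, not part of the original proof): both sides of the identity have distribution function $ \Phi((ct^{\alpha}-x)/\sqrt{t}) $ at every $ t>0 $, since $ P(Z^{(c,\alpha)}_x \le t) = P(B_t + ct^{\alpha} \ge x) $.

```python
import math

# Both sides of the scale-invariance identity reduce to the same
# normal-CDF argument (c * t**alpha - x)/sqrt(t); sample parameters.
def cdf_arg(c, alpha, x, t):
    return (c * t**alpha - x) / math.sqrt(t)

c, alpha, x = 0.8, 1.5, 2.0
for lam in (0.1, 0.5, 2.0):
    for t in (0.3, 1.0, 4.0):
        # (1/lam) Z' <= t  <=>  Z' <= lam*t, with rescaled parameters:
        lhs = cdf_arg(c * lam ** (0.5 - alpha), alpha,
                      math.sqrt(lam) * x, lam * t)
        rhs = cdf_arg(c, alpha, x, t)
        assert abs(lhs - rhs) < 1e-9
```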
Scaling limits for the class of d-inverses {#sec: sca}
==========================================
In what follows, by [*measurable*]{} we mean Lebesgue measurable.
\[thm4\] Let $ \rho:[0,\infty ) \to [0,\infty ) $ be a right-continuous function satisfying the condition [**(A)**]{}. Suppose that, for some measurable functions $ \phi_1,\phi_2:(0,\infty ) \to (0,\infty ) $ and for some family $ Z=(Z_x)_{x \ge 0} $ of $ [0,\infty ] $-valued random variables, it holds that $$\begin{aligned}
\frac{1}{\lambda} Y^{{\left( \phi_1(\lambda) \rho \right)}}_{\phi_2(\lambda) x}
{\mathrel{\mathop{\longrightarrow}\limits^{\rm d}_{\lambda \to 0+}}}
Z_x
\quad \text{for all $ x \ge 0 $}.
\label{eq: ass}\end{aligned}$$ Suppose, moreover, that there exists a constant $ p \in [0,\infty) $ such that $$\begin{aligned}
\frac{\phi_2(\lambda)}{\sqrt{\lambda}} {\mathrel{\mathop{\longrightarrow}\limits^{}_{\lambda \to 0+}}} p .
\label{eq: ass2}\end{aligned}$$ Then one of the following four assertions holds:
1. $ \phi_1(\lambda) \rho(\lambda t)/\sqrt{\lambda} {\mathrel{\mathop{\longrightarrow}\limits^{}_{\lambda \to 0+}}} 0 $ for all $ t>0 $. In this case, $$\begin{aligned}
Z_x {\stackrel{{\rm d}}{=}}Y^{(0)}_{px}
\quad \text{for all $ x \ge 0 $}.
\label{eq: 0 infty limit3-}\end{aligned}$$
2. The condition [**(B)**]{} holds for some $ t_0 \in (0,\infty ) $. In this case, $$\begin{aligned}
Z_x {\stackrel{{\rm d}}{=}}\min {\left\{ Y^{(0)}_{px} , t_0 \right\}}
\quad \text{for all $ x \ge 0 $}.
\label{eq: 0 infty limit3}\end{aligned}$$
3. The condition [**(RV)**]{} holds for some $ \alpha \ge 1/2 $ and $ c\in (0,\infty) $. In this case, $$\begin{aligned}
Z_x {\stackrel{{\rm d}}{=}}Z^{(c,\alpha )}_{px}
\quad \text{for all $ x \ge 0 $}.
\label{}\end{aligned}$$
4. (Degenerate case.) $ P(Z_x=0)=1 $ for all $ x\in (0,\infty) $.
Let $ x \ge 0 $. Denote $ F_x(t) = P(Z_x \le t) $ for $ t \ge 0 $ and denote by $ C(F_x) $ the set of continuity points of $ F_x $. We note that $$\begin{aligned}
P {\left( \frac{1}{\lambda} Y^{{\left( \phi_1(\lambda) \rho \right)}}_{\phi_2(\lambda) x} \le t \right)}
=& P {\left( B_{\lambda t} + \phi_1(\lambda) \rho(\lambda t) \ge \phi_2(\lambda) x \right)}
\label{} \\
=& P {\left( B_1 + \phi_1(\lambda) \frac{\rho(\lambda t)}{\sqrt{\lambda t}}
\ge \frac{\phi_2(\lambda)}{\sqrt{\lambda t}} x \right)} .
\label{}\end{aligned}$$ By the assumption, we see that $$\begin{aligned}
\begin{split}
P {\left( B_1 + \phi_1(\lambda) \frac{\rho(\lambda t)}{\sqrt{\lambda t}}
- \frac{\phi_2(\lambda)}{\sqrt{\lambda t}} x \in [0,\infty) \right)}
{\mathrel{\mathop{\longrightarrow}\limits^{}_{\lambda \to 0+}}}
P(Z_x \le t)
\\
\quad \text{for all $ t \in C(F_x) \cap (0,\infty ) $}.
\end{split}
\label{eq: psi1 rho limit-}\end{aligned}$$ Hence there exists a function $ g_x: C(F_x) \cap (0,\infty ) \to [-\infty ,\infty ] $ such that $$\begin{aligned}
\phi_1(\lambda) \frac{\rho(\lambda t)}{\sqrt{\lambda t}}
- \frac{\phi_2(\lambda)}{\sqrt{\lambda t}} x
{\mathrel{\mathop{\longrightarrow}\limits^{}_{\lambda \to 0+}}}
g_x(t)
\quad \text{for all $ t \in C(F_x) \cap (0,\infty ) $}.
\label{eq: psi1 rho limit}\end{aligned}$$ Since $ \rho $ satisfies the condition [**(A)**]{} and since $ C(F_x) $ is dense in $ {\ensuremath{\mathbb{R}}}$, we see that $ g_x $ is increasing, and hence we may extend $ g_x $ to $ [0,\infty ) $ so that it is right-continuous. Now we obtain, for any $ x \ge 0 $, $$\begin{aligned}
Z_x {\stackrel{{\rm d}}{=}}g_x^{-1}(B_1) .
\label{eq: Zx dist}\end{aligned}$$
Let us write $ g $ simply for $ g_0 $. Noting that $ g $ is an increasing function taking values in $ [0,\infty ] $, we divide the proof into the following four distinct cases.
\(i) [*The case where $ g(t)=0 $ for all $ t>0 $.*]{}
Let $ x \ge 0 $ be fixed. By the assumption and the limit above, we obtain $$\begin{aligned}
g_x(t) = - px/\sqrt{t} , \quad t>0 .
\label{}\end{aligned}$$ From this and the distributional identity for $ Z_x $, we obtain $$\begin{aligned}
P(Z_x \le t) = P(Y^{(0)}_{px} \le t)
, \quad t>0 .
\label{}\end{aligned}$$ This proves . The proof of Claim (i) is now complete.
\(ii) [*The case where there exists a point $ t_0 \in (0,\infty ) $ such that $$\begin{aligned}
g(t)
\begin{cases}
= 0 & \text{if $ 0<t<t_0 $}, \\
= \infty & \text{if $ t>t_0 $}.
\end{cases}
\label{}\end{aligned}$$*]{}
Let $ x \ge 0 $. By the assumption and the limit above, we obtain $$\begin{aligned}
g_x(t) =
\begin{cases}
- px/\sqrt{t} & \text{if $ 0<t<t_0 $}, \\
\infty & \text{if $ t>t_0 $}.
\end{cases}
\label{}\end{aligned}$$ From this and the distributional identity for $ Z_x $, we obtain $$\begin{aligned}
P(Z_x \le t) =
\begin{cases}
P(Y^{(0)}_{px} \le t) & \text{if $ 0 \le t < t_0 $}, \\
1 & \text{if $ t \ge t_0 $}.
\end{cases}
\label{}\end{aligned}$$ This proves . The proof of Claim (ii) is now complete.
\(iii) [*The case where there are two points $ t_0 , t_1 \in C(F_0) \cap (0,\infty ) $ with $ t_0<t_1 $ such that $ 0 < g(t_0) \le g(t_1) < \infty $.*]{}
Since $ g $ is increasing, we see that $$\begin{aligned}
0<g(t)<\infty \quad \text{for all $ t \in C(F_0) \cap [t_0,t_1] $}.
\label{}\end{aligned}$$ By , we have, for any $ t \in C(F_0) \cap [t_0,t_1] $, $$\begin{aligned}
\frac{\rho(\lambda t)}{\rho(\lambda t_0)}
= \frac{\phi_1(\lambda) \frac{\rho(\lambda t)}{\sqrt{\lambda t}} }
{\phi_1(\lambda) \frac{\rho(\lambda t_0)}{\sqrt{\lambda t_0}} }
\cdot \frac{\sqrt{t}}{\sqrt{t_0}}
{\mathrel{\mathop{\longrightarrow}\limits^{}_{\lambda \to 0+}}}
\frac{g(t)}{g(t_0)} \cdot \frac{\sqrt{t}}{\sqrt{t_0}} \in (0,\infty ) .
\label{eq: rho ratio lim}\end{aligned}$$ Since $ C(F_0) \cap [t_0,t_1] $ has positive Lebesgue measure, we may apply the Characterisation Theorem ([@MR898871 Theorem 1.4.1]) to see that the convergence above, and consequently the limit defining $ g $, remain valid for all $ t \in (0,\infty ) $, and that $$\begin{aligned}
\frac{g(t)}{g(t_0)} \cdot \frac{\sqrt{t}}{\sqrt{t_0}} = t^{\alpha }
, \quad t \in (0,\infty )
\label{}\end{aligned}$$ for some $ \alpha \in {\ensuremath{\mathbb{R}}}$. Since $ g $ is increasing, we have $ \alpha \ge 1/2 $. We obtain $$\begin{aligned}
g(t) = c t^{\alpha -1/2}
, \quad t \in (0,\infty )
\label{}\end{aligned}$$ for some $ c \in (0,\infty ) $. Hence we obtain $$\begin{aligned}
\frac{\rho(\lambda t)}{\rho(\lambda)} {\mathrel{\mathop{\longrightarrow}\limits^{}_{\lambda \to 0+}}} t^{\alpha }
\quad \text{and} \quad
\frac{\rho(\lambda)}{\sqrt{\lambda}} \phi_1(\lambda) {\mathrel{\mathop{\longrightarrow}\limits^{}_{\lambda \to 0+}}} c .
\label{}\end{aligned}$$ Now we have seen that the condition [**(RV)**]{} is satisfied. The proof of Claim (iii) is now completed by Theorem \[thm3\].
\(iv) [*The case where $ g(t) = \infty $ for all $ t>0 $.*]{}
In this case, by the assumption and the limit above, we obtain $ g_x(t)=\infty $ for all $ t>0 $ and $ x \ge 0 $. Hence we obtain $ P(Z_x = 0) = 1 $ for all $ x \ge 0 $. The proof of Claim (iv) is now complete.
[10]{}
Baker, D. and Yor, M.: A [B]{}rownian sheet martingale with the same marginals as the arithmetic average of geometric [B]{}rownian motion, (2009) 1532–1540.
Bingham, N. H., Goldie, C. M. and Teugels, J. L.: , Encyclopedia of Mathematics and its Applications [**27**]{}, Cambridge University Press, Cambridge, 1987.
Henderson, V. and Hobson, D.: Local time, coupling and the passport option, (2000) 69–80.
Henderson, V. and Hobson, D.: Coupling and option price comparisons in a jump-diffusion model, (2003) 79–101.
Hirsch, F. and Yor, M.: A construction of processes with one dimensional martingale marginals, based upon path-space [O]{}rnstein-[U]{}hlenbeck processes and the [B]{}rownian sheet, (2009) 389–417.
Hobson, D.: Comparison results for stochastic volatility models via coupling, (2010) 129–152.
Hobson, D. G.: Volatility misspecification, option pricing and superreplication via coupling, (1998) 193–205.
Kellerer, H. G.: Markov-[K]{}omposition und eine [A]{}nwendung auf [M]{}artingale, (1972) 99–122.
Kijima, M.: Monotonicity and convexity of option prices revisited, (2002) 411–425.
Madan, D., Roynette, B. and Yor, M.: Option prices as probabilities, (2008) 79–87.
Madan, D., Roynette, B. and Yor, M.: Put option prices as joint distribution functions in strike and maturity: the [B]{}lack-[S]{}choles case, (2009) 1075–1090.
Profeta, C., Roynette, B. and Yor, M.: , Springer Finance, Springer-Verlag, Berlin, 2010.
Rothschild, M. and Stiglitz, J. E.: Increasing risk. [I]{}. [A]{} definition, (1970) 225–243.
Rothschild, M. and Stiglitz, J. E.: Increasing risk. [II]{}. [I]{}ts economic consequences, (1971) 66–84.
Shaked, M. and Shanthikumar, J. G.: , Springer Series in Statistics, Springer, New York, 2007.
Yen, J.-Y. and Yor, M.: Call option prices based on [B]{}essel processes, , to appear.
[^1]: Graduate School of Science, Kobe University, Kobe, JAPAN.
[^2]: The research of this author was supported by KAKENHI (20740060).
[^3]: Current affiliation: FUJITSU Kansai Systems Ltd.
---
abstract: |
The brain processes information about the environment via neural codes. The neural ideal was introduced recently as an algebraic object that can be used to better understand the combinatorial structure of neural codes. Every neural ideal has a particular generating set, called the canonical form, that directly encodes a minimal description of the receptive field structure intrinsic to the neural code. On the other hand, for a given monomial order, any polynomial ideal is also generated by its unique (reduced) Gröbner basis with respect to that monomial order. How are these two types of generating sets – canonical forms and Gröbner bases – related? Our main result states that if the canonical form of a neural ideal is a Gröbner basis, then it is the universal Gröbner basis (that is, the union of all reduced Gröbner bases). Furthermore, we prove that this situation – when the canonical form is a Gröbner basis – occurs precisely when the universal Gröbner basis contains only pseudo-monomials (certain generalizations of monomials). Our results motivate two questions: (1) When is the canonical form a Gröbner basis? (2) When the universal Gröbner basis of a neural ideal is [*not*]{} a canonical form, what can the non-pseudo-monomial elements in the basis tell us about the receptive fields of the code? We give partial answers to both questions. Along the way, we develop a representation of pseudo-monomials as hypercubes in a Boolean lattice.
[**Keywords:**]{} neural code, receptive field, canonical form, Gröbner basis, Boolean lattice
[**MSC classes:**]{} 92-04 (Primary), 13P25, 68W30 (Secondary)
address:
- 'Department of Mathematics and Statistics, Sam Houston State University, Huntsville, TX 77341-2206'
- 'Mathematics Department, Central College, Pella, IA 50219'
- 'Department of Mathematics, Bard College, Annandale, NY 12504'
- 'Department of Mathematics, Willamette University, Salem, OR 97301'
- 'Department of Mathematics, Rose-Hulman Institute of Technology, Terre Haute, IN 47803'
- 'Department of Mathematics, St. Edward’s University, Austin, Texas 78704-6489'
- 'Department of Mathematics, Texas A&M University, College Station, TX 77843'
author:
- Rebecca Garcia
- Luis David García Puente
- Ryan Kruse
- Jessica Liu
- Dane Miyata
- Ethan Petersen
- Kaitlyn Phillipson
- Anne Shiu
bibliography:
- 'neuro.bib'
title: Gröbner Bases of Neural Ideals
---
Introduction {#sec:intro}
============
The brain is tasked with many important functions, but one of the least understood is how it builds an understanding of the world. Stimuli in one’s environment are not experienced in isolation, but in relation to other stimuli. How does the brain represent this organization? Or, to quote from Curto, Itskov, Veliz-Cuba, and Youngs, “What can be inferred about the underlying stimulus space from neural activity alone?” [@neural_ring]. Curto [*et al.*]{} pursued this question for codes where each neuron has a region of stimulus space, called its [*receptive field*]{}, in which it fires at a high rate. They introduced algebraic objects that summarize neural-activity data, which are in the form of [*neural codes*]{} ($0/1$-vectors where $1$ means the corresponding neuron is active, and $0$ means silence) [@neural_ring]. The [*neural ideal*]{} of a neural code is an ideal that contains the full combinatorial data of the code. The [*canonical form*]{} of a neural ideal is a generating set that is a minimal description of the receptive-field structure. Hence, the questions posed above have been investigated via the neural ideal or the canonical form [@what-makes; @neural_ring; @neural-hom; @new-alg]. As a complement to algebraic approaches, combinatorial and topological arguments are employed in related works [@intersection-complete; @sparse; @LSW]. The aim of our work is to investigate, for the first time, how the canonical form is related to other generating sets of the neural ideal, namely, its Gröbner bases. This is a natural mathematical question, and additionally the answer could improve algorithms for computing the canonical form. Currently, there are two distinct methods to compute the canonical form of a neural ideal: the original method proposed in [@neural_ring] and an iterative method introduced in [@neural-ideal-sage]. The former method requires the computation of primary decomposition of pseudo-monomial ideals. As a result, this method is rather inefficient. 
Even in dimension 5, one can find codes for which this algorithm takes hundreds or even thousands of seconds to terminate, or halts due to lack of memory. The more recent iterative method relies entirely on basic polynomial arithmetic. This algorithm can efficiently compute canonical forms for codes in up to 10 dimensions; see [@neural-ideal-sage]. On the other hand, Gröbner basis computations are generally computationally expensive. Nevertheless, we take full advantage of tailored methods for Gröbner bases over Boolean rings [@polybori]. As we show later in Table \[runtimes\], for dimensions less than or equal to 8, Gröbner basis computations are faster than canonical-form computations. For larger dimensions, we have observed that Gröbner basis computations are in general faster, but the standard deviation of the computation time is much larger. In dimension 9, the average time to compute a Gröbner basis is around 3 seconds, but there are codes for which that computation takes close to 10 hours to finish.
Nevertheless, we believe that a thorough study of Gröbner bases of neural ideals is not only of theoretical interest, but can also lead to better procedures for performing computations in larger dimensions. Indeed, among small codes, surprisingly many have canonical forms that are also Gröbner bases. Moreover, the iterative nature of the newer canonical form algorithm hints at the possibility of computing canonical forms and Gröbner bases of neural codes in large dimensions by ‘gluing’ those of codes in small dimensions. Such decomposition results are a common theme in other areas of applied algebraic geometry [@AR08; @EKS14].
The outline of this paper is as follows. Section \[sec:background\] provides background on neural ideals, canonical forms, and Gröbner bases. In Section \[sec:main\], we prove our main result: if the canonical form of a neural ideal is a Gröbner basis, then it is the universal Gröbner basis (Theorem \[thm:main\]). We also prove a partial converse: if the universal Gröbner basis of a neural ideal contains only so-called pseudo-monomials, then it is the canonical form (Theorem \[thm:summary\]). Our results motivate other questions:
1. When is the canonical form a Gröbner basis?
2. If the universal Gröbner basis of a neural ideal is [*not*]{} a canonical form, what can the non-pseudo-monomial elements in the basis tell us about the receptive fields of the code?
Sections \[sec:gb\] and \[sec:new-RF\] provide some partial answers to these questions. Finally, a discussion is given in Section \[sec:discussion\].
Background {#sec:background}
==========
This section introduces neural ideals and related topics, which were first defined by Curto, Itskov, Veliz-Cuba, and Youngs [@neural_ring], and recalls some basics about Gröbner bases. We use the notation $[n]:=\{1,2,\dots, n\}$.
Neural codes and receptive fields {#sec:code}
---------------------------------
A [**neural code**]{} (also known as a **combinatorial code**) on $n$ neurons is a set of binary firing patterns $C\subset \{0,1\}^n$, that is, a set of binary strings of neural activity. Note that neither timing nor rate of neural activity are recorded in a neural code.
An element $c\in C$ of a neural code is a [**codeword**]{}. Equivalently, a codeword is determined by the set of neurons that fire: $${\operatorname{supp}}(c):=\{i\in [n] \mid c_i=1\}\subseteq [n]~.$$ Thus, the entire code is identified with a set of subsets of co-firing neurons: ${\operatorname{supp}}(C) =\{{\operatorname{supp}}(c) \mid c\in C\} \subseteq 2^{[n]}.$
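The passage from bit strings to supports is a one-line computation. Below is a minimal sketch in plain Python (codewords as tuples of bits; the function name `supp` is ours and simply mirrors the notation above):

```python
def supp(c):
    """Support of a codeword: the 1-based indices of the active neurons."""
    return {i + 1 for i, bit in enumerate(c) if bit == 1}

# A codeword on three neurons in which neurons 1 and 3 fire:
c = (1, 0, 1)
# supp(c) == {1, 3}; the all-zeros codeword has empty support.
```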
In many areas of the brain, neurons are associated with [**receptive fields**]{} in a [*stimulus space*]{}. Of particular interest are the receptive fields of *place cells*, which are neurons that fire in response to an animal’s location. More specifically, each place cell is associated with a [**place field**]{}, a convex region of the animal’s physical environment where the place cell has a high firing rate [@Oke1]. The discovery of place cells and related neurons (grid cells and head direction cells) won neuroscientists John O’Keefe, May-Britt Moser, and Edvard Moser the 2014 Nobel Prize in Physiology or Medicine.
Given a collection of sets $\mathcal{U} = \{U_1,...,U_n\}$ in a stimulus space $X$ (here $U_i$ is the receptive field of neuron $i$), the [**receptive field code**]{}, denoted by $C(\mathcal{U})$, is: $$C(\mathcal{U})~:=~
\left\{c\in \{0,1\}^n ~:~ \left( \bigcap_{i\in {\operatorname{supp}}(c)} U_i\right) \setminus \left( \bigcup_{j \notin {\operatorname{supp}}(c)} U_j \right) \neq \emptyset
\right\}~.$$ As mentioned earlier, we often identify this code with the corresponding set of subsets of $[n]$. Also, we use the following convention for the empty intersection: $\bigcap_{i \in \emptyset} U_i := X$.
\[ex:first-example\] Consider the sets $U_i$ in a stimulus space $X$ depicted in Figure \[fig:U\]. The corresponding receptive field code is $C(\mathcal{U})=\{\emptyset, 1,123, 13, 3 \}$.
[Figure \[fig:U\]: a rectangular stimulus space $X$ containing the sets $U_1$, $U_2$, and $U_3$, with $U_2$ contained in the overlap $U_1 \cap U_3$.]
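When the stimulus space is discretized into finitely many points, $C(\mathcal{U})$ can be computed directly from the definition. The sketch below is illustrative only: it uses a toy one-dimensional stimulus space of six points and sets $U_1, U_2, U_3$ chosen by us to reproduce the code of Example \[ex:first-example\] (they are our own assumption, not the sets of Figure \[fig:U\]):

```python
def receptive_field_code(X, U):
    """C(U): for each stimulus point x in X, record which receptive fields
    contain x.  X is an iterable of points; U is a list of sets, with U[i]
    the receptive field of neuron i+1.  Returns the code as a set of supports."""
    code = set()
    for x in X:
        code.add(frozenset(i + 1 for i, Ui in enumerate(U) if x in Ui))
    return code

# Assumed toy discretization with U2 inside U1 ∩ U3:
X = range(6)
U1, U2, U3 = {1, 2, 3}, {2}, {2, 3, 4}
C = receptive_field_code(X, [U1, U2, U3])
# C consists of the supports {}, {1}, {1,2,3}, {1,3}, {3},
# matching Example ex:first-example.
```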
The neural ideal and its canonical form {#sec:CF}
---------------------------------------
A [**pseudo-monomial**]{} in $\mathbb{F}_2[x_1,\dots, x_n]$ is a polynomial of the form $$f~=~ \prod_{i\in \sigma} x_i\prod_{j\in\tau}(1+x_j)~,$$ where $\sigma,\tau\subseteq[n]$ with $\sigma\cap\tau=\emptyset$. Every term in a pseudo-monomial $f= \prod_{i\in \sigma} x_i\prod_{j\in\tau}(1+x_j)$ divides its highest-degree term, $\prod_{i\in \sigma \cup \tau} x_i$. We will use this fact several times in this work.
Each $v\in {\{0,1\}^n}$ defines a pseudo-monomial $\rho_v$ as follows: $$\begin{aligned}
\rho_v~:=~\prod_{i=1}^n(1-v_i-x_i)=\prod_{\{i \mid v_i=1\}}x_i\prod_{\{j \mid v_j=0\}}(1+x_j)=\prod_{\{i\in {\operatorname{supp}}(v)\}}x_i\prod_{\{j\not\in{\operatorname{supp}}(v)\}}(1-x_j)~.\end{aligned}$$ Notice that $\rho_v$ is the [**characteristic function**]{} for $v$, that is, $\rho_v(x)=1$ if and only if $x=v$.
Let $C \subseteq \{0,1\}^n$ be a neural code. The **neural ideal** $J_C$ is the ideal in $\mathbb{F}_2[x_1,\dots, x_n]$ generated by all $\rho_v$ for $v\not\in C$: $$\begin{aligned}
J_C~:=~\langle \{\rho_v|v\not\in C\}\rangle~.
\end{aligned}$$
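The indicator property of $\rho_v$ can be confirmed by brute-force evaluation over all of $\{0,1\}^n$; the minimal plain-Python sketch below evaluates $\prod_i (1-v_i-x_i)$ and reduces mod 2:

```python
from itertools import product

def rho(v, x):
    """Evaluate the characteristic pseudo-monomial rho_v at the point x, mod 2."""
    val = 1
    for vi, xi in zip(v, x):
        val *= (1 - vi - xi)
    return val % 2

# rho_v(x) = 1 exactly when x = v:
v = (1, 0, 1)
table = {x: rho(v, x) for x in product((0, 1), repeat=3)}
# table[(1, 0, 1)] == 1, and every other entry is 0.
```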
It follows that the variety of the neural ideal is the code itself: $V(J_C)=C$. The following lemma provides the algebraic version of the previous statement:
\[lemma\_3.2\] Let $C \subseteq \{0,1\}^n$ be a neural code. Then $$I(C) ~=~ J_C + \langle x_i(1 + x_i) \mid i \in [n] \rangle~,$$ where $I(C)$ is the ideal of the subset $C \subseteq \{0,1\}^n$.
Note that the ideal generated by the Boolean relations $\langle x_i(1 + x_i):i\in [n]\rangle$ is contained in $I(C)$, regardless of the structure of $C$.
A pseudo-monomial $f$ in an ideal $J$ in ${\mathbb{F}_2[x_1,\dots,x_n]}$ is [**minimal**]{} if there does not exist another pseudo-monomial $g\in J$, with $g \neq f$, such that $f=gh$ for some $h\in {\mathbb{F}_2[x_1,\dots,x_n]}$.
The [**canonical form**]{} of a neural ideal $J_C$, denoted by ${\rm CF}(J_C)$, is the set of all minimal pseudo-monomials of $J_C$.
Algorithms for computing the canonical form were given in [@neural_ring; @neural-hom; @neural-ideal-sage]. In particular, [@neural-ideal-sage] describes an iterative method to compute the canonical form that is significantly more efficient than the original method presented in [@neural_ring].
The canonical form ${\rm CF}(J_C)$ is a particular generating set for the neural ideal $J_C$ [@neural_ring]. The main goal in this work is to compare ${\rm CF}(J_C)$ to other generating sets of $J_C$, namely, its Gröbner bases.
\[ex:first-example-part-2\] Returning to Example \[ex:first-example\], the codewords $v$ that are [*not*]{} in $C(\mathcal{U})= \{\emptyset, 1,123, 13, 3 \}$ are $2$, $12$, and $23$, so the neural ideal is $J_C=\langle\{ x_2(1+x_1)(1+x_3),~x_1x_2(1+x_3),~x_2x_3(1+x_1) \}\rangle $. The canonical form is ${\rm CF}(J_{C(\mathcal{U})} )=\{ x_2(1+x_1),~x_2(1+x_3) \}$. We will interpret these canonical-form polynomials in Example \[ex:first-example-part-3\] below.
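The ideal equality implicit in this example, $\langle \rho_{010},\rho_{110},\rho_{011}\rangle = \langle x_2(1+x_1),\ x_2(1+x_3)\rangle$, can be confirmed with a computer algebra system. The sketch below assumes SymPy is available and uses its Gröbner basis routines over $\mathbb{F}_2$ (the `modulus=2` option) to check mutual ideal membership; it is a verification, not the canonical form algorithm of [@neural-ideal-sage]:

```python
from sympy import symbols, groebner, expand

x1, x2, x3 = symbols('x1 x2 x3')

# Generators rho_v for the non-codewords 010, 110, 011:
rhos = [expand((1 + x1) * x2 * (1 + x3)),
        expand(x1 * x2 * (1 + x3)),
        expand((1 + x1) * x2 * x3)]
# Claimed canonical form of the neural ideal:
cf = [expand(x2 * (1 + x1)), expand(x2 * (1 + x3))]

G_rho = groebner(rhos, x1, x2, x3, modulus=2)
G_cf = groebner(cf, x1, x2, x3, modulus=2)

# Each ideal contains the other's generators, so the two ideals coincide:
same_ideal = (all(G_rho.contains(f) for f in cf)
              and all(G_cf.contains(r) for r in rhos))
```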
Receptive-field relationships {#sec:RF}
-----------------------------
It turns out that we can interpret pseudo-monomials in $J_C$ (and thus in the canonical form) in terms of relationships among receptive fields. First we need the following notation: for any $\sigma\subseteq [n]$, define: $$x_\sigma ~:=~\prod_{i\in\sigma}x_i \quad
\text{ and } \quad
U_\sigma ~:=~ {\bigcap}_{i\in\sigma} U_i~,$$ where, by convention, the empty intersection is the entire space $X$.
\[lem:receptivefields\] Let $X$ be a stimulus space, let $\mathcal{U}=\{U_i\}_{i=1}^n$ be a collection of sets in $X$, and consider the receptive field code $C=C(\mathcal{U})$. Then for any pair of subsets $\sigma,\tau\subseteq [n]$, $$x_\sigma\prod_{i\in\tau}(1+x_i)\in J_C \iff U_\sigma\subseteq{\bigcup}_{i\in\tau}U_i~.$$
Thus, three types of receptive-field relationships (RF relationships) can be read off from pseudo-monomials in a neural ideal (e.g., those in the canonical form) [@neural_ring]:
- $x_\sigma \in J_C \iff U_\sigma = \emptyset $ (where $\sigma \neq \emptyset$).
- $x_\sigma\prod_{i\in\tau}(1+ x_i)\in J_C \iff U_\sigma\subseteq{\bigcup}_{i\in\tau}U_i$ (where $\sigma, \tau \neq \emptyset$).
- $\prod_{i\in\tau}(1+x_i)\in J_C \iff X\subseteq{\bigcup}_{i\in\tau}U_i$ (where $\tau \neq \emptyset$), and thus $X = {\bigcup}_{i\in\tau}U_i$.
\[ex:first-example-part-3\] The canonical form in Example \[ex:first-example-part-2\], which is $\{ x_2(1+x_1),~x_2(1+x_3) \}$, encodes two Type 2 relationships: $U_2 \subseteq U_1$ and $U_2 \subseteq U_3$. Indeed, we can verify this in Figure \[fig:U\].
In this work, we reveal more types of RF relationships, which arise from non-pseudo-monomials. They often appear in Gröbner bases of neural ideals (see Section \[sec:new-RF\]).
Gröbner bases {#sec:GB}
-------------
Here we recall some basics about Gröbner bases [@gb-book; @CLO-ideals; @cocoa].
Fix a monomial ordering $<$ of a polynomial ring $R=k[x_1,\dots, x_n]$ over a field $k$, and let $I$ be an ideal in $R$. Let $LT_<(I)$ denote the ideal generated by all leading terms, with respect to the monomial ordering $<$, of elements in $I$.
A [**Gröbner basis**]{} of $I$, with respect to $<$, is a finite subset of $I$ whose leading terms generate $LT_<(I)$.
One useful property of a [Gröbner basis]{} is that, given a polynomial $f$ and a [Gröbner basis]{} $G$, the remainder of $f$ upon division by the elements of $G$ is uniquely determined. A [Gröbner basis]{} $G$ is [**reduced**]{} if (1) every $f \in G$ has leading coefficient 1, and (2) no term of any $f \in G$ is divisible by the leading term of any $g \in G$ for which $g\neq f$. For a given monomial ordering, the reduced [Gröbner basis]{} of an ideal is unique.
A **universal [Gröbner basis]{}** of an ideal $I$ is a set that is a [Gröbner basis]{} of $I$ with respect to [*every*]{} monomial ordering. **The universal [Gröbner basis]{}** of an ideal $I$ is the union of all the reduced [Gröbner bases]{} of $I$.
*The* universal [Gröbner basis]{} is an instance of *a* universal [Gröbner basis]{}, given that the set of all distinct reduced [Gröbner bases]{} of an ideal $I$ is finite [@gb-book p. 515]. This fact is actually the main result of the theory of *Gröbner fans*, first introduced in [@gb-fan].
Main Result {#sec:main}
===========
In this section, we give the main result of our paper: if the canonical form is a Gröbner basis, then it is the universal Gröbner basis (Theorem \[thm:main\]). Beyond being a natural expansion of some of Curto [*et al.*]{}’s results [@neural_ring], our theorem is also of mathematical interest since there are few classes of ideals whose universal Gröbner bases are known. Indeed, such characterizations in general are known to be computationally difficult.
\[thm:main\] If the canonical form of a neural ideal $J_C$ is a Gröbner basis of $J_C$ with respect to some monomial ordering, then it is the universal Gröbner basis of $J_C$.
The proof of Theorem \[thm:main\], which appears in Section \[sec:proof-main-theorem\], requires the following related results:
\[lem:pseudo\] For a pseudo-monomial $f = x_{\sigma} \prod_{j \in \tau} (1+x_j)$ in $\mathbb{F}_2[x_1, \dots, x_n]$, the leading term of $f$ with respect to [*any*]{} monomial ordering is its highest-degree term, $x_{\sigma\cup \tau}$.
This follows from the fact that every term of $f$ divides $x_{\sigma\cup \tau}$, together with two properties of a monomial ordering [@CLO-ideals]: it is a well-ordering (so $1<x_i$), and multiplication by a monomial preserves the order (so $x_{\alpha} < x_{\beta}$ implies $x_{\alpha\cup \gamma} < x_{\beta \cup \gamma}$ for $\gamma$ disjoint from $\alpha$ and $\beta$).
\[prop:a-universal\] If the canonical form of a neural ideal $J_C$ is a Gröbner basis of $J_C$ with respect to some monomial ordering, then it is a Gröbner basis of $J_C$ with respect to every monomial ordering, i.e., a universal Gröbner basis of $J_C$.
Let $G$ denote the canonical form, and assume that $G$ is a Gröbner basis with respect to some monomial ordering $<_1$. Let $<_2$ denote another monomial ordering. As always, we have the containment ${\mathrm{LT}}_{<_2}(G) \subseteq {\mathrm{LT}}_{<_2}(J_C)$, which we must prove is an equality. Accordingly, let $f \in J_C$. We must show that ${\mathrm{LT}}_{<_2}(f) \in {\mathrm{LT}}_{<_2}(G)$. With respect to $<_1$, the reduction of $f$ by $G$ is 0, so we can write $f$ as a polynomial combination of some of the $g_i\in G$ in the following form: $$\begin{aligned}
\label{eq:f-sum-1}
f~=~
\frac{{\mathrm{LT}}_{<_1}(f)}{{\mathrm{LT}}(g_{1})}g_{1}+\frac{{\mathrm{LT}}_{<_1}(r_1)}{{\mathrm{LT}}(g_{2})}g_{2}+\dots+
\frac{{\mathrm{LT}}_{<_1}(r_{t-1})}{{\mathrm{LT}}(g_{t})}g_{t}
~=~h_1+\dots+h_t~,\end{aligned}$$ where (for $i=1,\dots, t$) we have $g_i \in G$, $h_i := \frac{{\mathrm{LT}}_{<_1}(r_{i-1})}{{\mathrm{LT}}(g_{i})}g_{i}$, $r_0:= f$, and $r_i=f-h_1-\dots-h_i$ is the remainder after the $i$-th division of $f$ by $G$. Note that in equation \[eq:f-sum-1\], the polynomial $g_i$ may appear multiple times, but this does not affect our arguments. By Lemma \[lem:pseudo\], the leading term of $g_i$ does not depend on the monomial ordering. Moreover, each $h_i$ is the product of a monomial and a pseudo-monomial, $g_i$, so by a straightforward generalization of Lemma \[lem:pseudo\], the leading term of $h_i$ with respect to [*any*]{} monomial ordering is ${\mathrm{LT}}_{<_1}(h_i)$. Also note that when dividing by the Gröbner basis $G$, ${\mathrm{LT}}_{<_1}(r_i) <_1 {\mathrm{LT}}_{<_1}(r_{i-1})$, so the ${\mathrm{LT}}_{<_1}(r_{i})$ are distinct. This implies that the ${\mathrm{LT}}_{<_1}(h_{i})$ are distinct, since ${\mathrm{LT}}_{<_1}(h_{i}) = {\mathrm{LT}}_{<_1}(r_{i-1})$. Hence, among the list of monomials $\{{\mathrm{LT}}(h_{i})\}_{i=1}^t$, there is a unique largest monomial with respect to $<_2$, which we denote by ${\mathrm{LT}}(h_{i^*})$. Next, by examining the sum in equation \[eq:f-sum-1\], and noting that every term of $h_i$ divides the leading term of $h_i$, we see that ${\mathrm{LT}}_{<_2}(f)={\mathrm{LT}}(h_{i^*})$. Thus, because $g_{i^*}$ divides $h_{i^*}$, it follows that ${\mathrm{LT}}(g_{i^*})$ divides ${\mathrm{LT}}_{<_2}(f)$, and so, ${\mathrm{LT}}_{<_2}(f) \in {\mathrm{LT}}_{<_2}(G)$.
Thus, if the canonical form is a Gröbner basis with respect to [*some*]{} monomial ordering, then it is a Gröbner basis with respect to [*every*]{} monomial ordering.
Pseudo-monomials and hypercubes {#sec:hypercube}
-------------------------------
To prove our main result (Theorem \[thm:main\]), we need to develop the connection between pseudo-monomials and hypercubes in the Boolean lattice. The [**Boolean lattice**]{} on $[n]$ is the power set $P([n]):=2^{[n]}$, partially ordered by inclusion. Also, for $\sigma \subseteq [n]$, we let $P(\sigma)$ denote the power set of $\sigma$.
The [**support**]{} of a monomial $\prod_{i=1}^n x_i^{a_i}$ is the set $\{ i \in [n] \mid a_i>0 \}$.
\[def:hypercube\] Let $f= x_{\sigma} \prod_{j \in \tau} (1+x_j)$ be a pseudo-monomial in $\mathbb{F}_2[x_1, \dots, x_n]$. The [**hypercube of $f$**]{}, denoted by $H(f)$, is the sublattice of the Boolean lattice on $[n]$ formed by the support of each term of $f$.
\[rem:hypercube\] The hypercube of $f$ is the [*interval*]{} of the Boolean lattice from $\sigma$ to $\sigma \cup \tau$: $$H(f)=\{ \omega \mid \sigma \subseteq \omega \subseteq \sigma \cup \tau \} \subseteq P([n])~,$$ and thus its Hasse diagram is a hypercube (this justifies its name). This is because: $$f=x_{\sigma} \prod\limits_{j \in \tau} (1+x_j) = \sum\limits_{\{ \theta \mid \theta \subseteq \tau\} } x_{\sigma \cup \theta}~.$$
\[ex:hypercube\] Let $f=x_1x_2(1+x_3)(1+x_4)=x_1x_2x_3x_4+x_1x_2x_3+x_1x_2x_4+x_1x_2$. Figure \[fig:hypercube\] shows part of the Hasse diagram of $P([4])$, with the hypercube of $f$ indicated by circles and solid lines.
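Remark \[rem:hypercube\] can be checked by expanding the product directly. The plain-Python sketch below represents each term by its support (using a set is valid here because $\sigma \cap \tau = \emptyset$, so no cancellation occurs during expansion):

```python
from itertools import chain, combinations

def terms_of_pseudo_monomial(sigma, tau):
    """Supports of the terms of x_sigma * prod_{j in tau}(1 + x_j).
    Assumes sigma and tau are disjoint, so all terms stay distinct."""
    terms = {frozenset(sigma)}
    for j in tau:
        # Multiplying by (1 + x_j) maps each term t to the pair {t, t ∪ {j}}:
        terms = terms | {t | {j} for t in terms}
    return terms

def interval(sigma, tau):
    """The interval {omega : sigma ⊆ omega ⊆ sigma ∪ tau} in the Boolean lattice."""
    extra = list(tau)
    subsets = chain.from_iterable(combinations(extra, r) for r in range(len(extra) + 1))
    return {frozenset(set(sigma) | set(s)) for s in subsets}

# For f = x1 x2 (1+x3)(1+x4): H(f) is the interval from {1,2} to {1,2,3,4}.
H = terms_of_pseudo_monomial({1, 2}, {3, 4})
```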
[Figure \[fig:hypercube\]: part of the Hasse diagram of the Boolean lattice $P([4])$. The hypercube of $f=x_1x_2(1+x_3)(1+x_4)$, i.e., the interval from $\{1,2\}$ to $\{1,2,3,4\}$, is marked by circles and solid lines; the three squares “parallel” to it, namely $[\{1\},\{1,3,4\}]$, $[\{2\},\{2,3,4\}]$, and $[\emptyset,\{3,4\}]$, are marked by dashed lines; and $P(\{1,2\})$ is marked by a dotted line.]
Via hypercubes, divisibility of pseudo-monomials has a nice geometric interpretation:
\[lem:hypdiv\] For pseudo-monomials $f={x_{\sigma}\prod_{j\in \tau}(1+x_j)}$ and $g={x_{\alpha}\prod_{j\in \beta}(1+x_j)}$, the following are equivalent:
1. $g|f$,
2. $\alpha\subseteq \sigma$ and $\beta \subseteq \tau$,
3. $H(g) \subseteq P(\sigma\cup\tau)$ and $ H(g) \cap P(\sigma) = \{\alpha\}$, and
4. $H(g) \subseteq P(\sigma\cup\tau)$ and $ \left| H(g) \cap P(\sigma) \right| = 1 $.
The implication (1) $\Leftarrow$ (2) is clear, and (1) $\Rightarrow$ (2) follows from the fact that ${\mathbb{F}_2[x_1,\dots,x_n]}$ is a unique factorization domain. For (2) $\Rightarrow$ (3), assume that $\alpha\subseteq \sigma$ and $\beta \subseteq \tau$. Then $H(g) \subseteq P(\alpha \cup \beta) \subseteq P(\sigma \cup \tau)$. So, we need only show that $H(g) \cap P(\sigma) = \{ \alpha \}$. To see this, we first recall: $$\begin{aligned}
\label{eq:hypercube-g}
H(g) ~=~ \{ \alpha \cup \theta \mid \theta \subseteq \beta \}\end{aligned}$$ from Remark \[rem:hypercube\]. Thus, $$\begin{aligned}
H(g) \cap P(\sigma)
~=~
\{ \alpha \cup \theta \mid \theta \subseteq \beta \text{ and } \theta \subseteq \sigma \}
~=~
\{\alpha\}~,\end{aligned}$$ where the second equality follows from hypotheses: $\alpha \subseteq \sigma$ and $\sigma \cap \beta \subseteq \sigma \cap \tau = \emptyset$ (because $\beta \subseteq \tau$).
\(3) $\Rightarrow$ (4) is clear, so we need only show (2) $\Leftarrow$ (4). Accordingly, suppose $H(g) \subseteq P(\sigma \cup \tau)$ and $I:=H(g) \cap P(\sigma)$ consists of only one element. We claim that this element is $\alpha$. Indeed, let $\omega \in I$ (i.e., $\omega \in H(g)$ and $\omega \subseteq \sigma$); then, $\alpha $ also is in $I$ (because $\alpha \in H(g)$ and $\alpha \subseteq \omega \subseteq \sigma$). So, $\alpha = \omega \subseteq \sigma$.
To complete the proof, we must show that $\beta \subseteq \tau$. To this end, let $k \in \beta$. Then $\alpha \cup \{k\}$ is in $H(g)$, by equation \[eq:hypercube-g\], so it is [*not*]{} in $P(\sigma)$ (because $H(g) \cap P(\sigma)= \{\alpha\}$). So, $k \in ( \beta \setminus \sigma )$. Finally, $( \beta \setminus \sigma ) \subseteq \tau$, because $\alpha \cup \beta \subseteq \sigma \cup \tau$ follows from the hypothesis $H(g) \subseteq P(\sigma \cup \tau)$. So, $k \in \tau$.
\[ex:continued\] We return to the pseudo-monomial $f=x_1x_2(1+x_3)(1+x_4)$, which we rewrite as $f=x_{\sigma} \prod_{j \in \tau} (1+x_j)$, where $\sigma=\{1,2\}$ and $\tau=\{3,4\}$. In Figure \[fig:hypercube\], $P(\sigma)=P([2])$ is marked by the dotted line. According to Lemma \[lem:hypdiv\], a pseudo-monomial $h$ divides $f$ if and only if the hypercube of $h$ satisfies two conditions: it includes a vertex from $P(\sigma)$, and it is contained within either the hypercube of $f$ or one of the dashed-line squares “parallel" to the hypercube of $f$ in Figure \[fig:hypercube\].
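Criterion (2) of Lemma \[lem:hypdiv\] turns divisibility of pseudo-monomials into two set containments, which is easy to compare against actual polynomial division. The brute-force sketch below assumes SymPy is available; it checks all pairs of nonconstant pseudo-monomials on two variables, using the fact that $g \mid f$ exactly when $f$ lies in the principal ideal $\langle g \rangle$:

```python
from itertools import product
from sympy import symbols, groebner, expand, Mul

xs = symbols('x1 x2')

def pseudo_monomial(sigma, tau):
    """x_sigma * prod_{j in tau} (1 + x_j), expanded."""
    return expand(Mul(*([xs[i] for i in sigma] + [1 + xs[j] for j in tau])))

# All nonconstant pseudo-monomials on 2 variables, as (sigma, tau, polynomial):
pms = []
for lab in product((0, 1, 2), repeat=2):  # 0: in sigma, 1: in tau, 2: in neither
    sigma = {i for i, l in enumerate(lab) if l == 0}
    tau = {i for i, l in enumerate(lab) if l == 1}
    if sigma or tau:
        pms.append((sigma, tau, pseudo_monomial(sigma, tau)))

# Criterion (2): g | f  iff  alpha ⊆ sigma and beta ⊆ tau.
agree = all(
    ((s2 <= s1) and (t2 <= t1)) == groebner([g], *xs, modulus=2).contains(f)
    for (s1, t1, f) in pms
    for (s2, t2, g) in pms
)
```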
Multivariate division by pseudo-monomials {#sec:reduction}
-----------------------------------------
The following result concerns reducing a given pseudo-monomial by a set of pseudo-monomials.
\[thm:divbyset\] Consider a pseudo-monomial $f={x_{\sigma}\prod_{i\in \tau}(1+x_i)}\in {\mathbb{F}_2[x_1,\dots,x_n]}$, and let $G$ be a finite set of pseudo-monomials in ${\mathbb{F}_2[x_1,\dots,x_n]}$. If some remainder upon division of $f$ by $G$ is $0$ for some monomial ordering, then there exists $g \in G$ such that $g$ divides $f$.
Suppose that some remainder on division of $f$ by $G$ is 0: $$\begin{aligned}
\label{eq:f-sum}
f~=~
\frac{{\mathrm{LT}}(f)}{{\mathrm{LT}}(g_{1})}g_{1}+\frac{{\mathrm{LT}}(r_1)}{{\mathrm{LT}}(g_{2})}g_{2}+\dots+
\frac{{\mathrm{LT}}(r_{t-1})}{{\mathrm{LT}}(g_{t})}g_{t}
~=~h_1+\dots+h_t~,\end{aligned}$$ where, as in the proof of Proposition \[prop:a-universal\], for $i=1,\dots, t$, we have $g_i \in G$, $h_i:= \frac{{\mathrm{LT}}(r_{i-1})}{{\mathrm{LT}}(g_{i})}g_{i}$, and $r_i=f-h_1-\dots-h_i$ is the remainder after the $i$-th division (and $r_0:= f$). Also, each term of $h_i$ divides the leading term of $h_i$.
By construction, $g_i | h_i$. So, it suffices to show that there exists $i$ such that $h_i | f$.
We now claim that ${\mathrm{LT}}(h_i)|{\mathrm{LT}}(f)$ holds for all $i$. We prove this claim by induction on $i$. For the $i=1$ case, ${\mathrm{LT}}(h_1)={\mathrm{LT}}(f)$. If $i \geq 2$, then ${\mathrm{LT}}(h_i)$ is the leading term of: $$\begin{aligned}
\label{eq:r}
r_{i-1} = f-h_1-\dots - h_{i-1}~.\end{aligned}$$ We now examine the summands in equation \[eq:r\]. As $f$ is a pseudo-monomial, each term of $f$ divides ${\mathrm{LT}}(f)$, and the same holds for each summand $h_j$ with $j \leq i-1$: as noted above, its terms divide ${\mathrm{LT}}(h_j)$, which (by the induction hypothesis) divides ${\mathrm{LT}}(f)$. Hence every term of $r_{i-1}$ divides ${\mathrm{LT}}(f)$, and in particular ${\mathrm{LT}}(h_i)= {\mathrm{LT}}(r_{i-1})$ divides ${\mathrm{LT}}(f)$, proving our claim.
We now assert that $h_i$ is a pseudo-monomial. To see this, recall that $h_i$ is the product of a monomial and a pseudo-monomial (namely, $g_i$), so we just need to show that its leading term is square-free. Indeed, this follows from two facts: ${\mathrm{LT}}(h_i)| {\mathrm{LT}}(f)$ and $f$ is a pseudo-monomial.
Hence, $H(h_i) \subseteq P(\sigma \cup \tau)$ for every $i$, because every term in $h_i$ divides ${\mathrm{LT}}(h_i)$ which in turn divides $x_{\sigma \cup \tau}={\mathrm{LT}}(f)$. Thus, by Lemma \[lem:hypdiv\], it is enough to show that $\left| H(h_i) \cap P(\sigma) \right| = 1$ for some $i$ (because this would imply that $h_i|f$).
The sum in equation \[eq:f-sum\] is over $\mathbb{F}_2$, so the polynomials $f, h_1, \dots, h_t$ together must contain an even number of each term. We focus now on only those terms with support in $P(\sigma)$. The pseudo-monomial $f$ has only one such term (namely, $x_{\sigma}$). Thus, some $h_{i^*}$ has an odd number of terms in $P(\sigma)$, i.e., $\left| H(h_{i^*}) \cap P(\sigma) \right|$ is odd. On the other hand, both $H(h_{i^*})$ and $P(\sigma) $ are hypercubes in the Boolean lattice, so their intersection, if nonempty, also is a hypercube and thus has size $2^q$ for some $q \geq 0$. Hence, $q=0$, so $\left| H(h_{i^*}) \cap P(\sigma) \right|=1$. This completes our proof.
Proof of Theorem \[thm:main\] {#sec:proof-main-theorem}
-----------------------------
Theorem \[thm:divbyset\] allows us to prove that when a canonical form is a Gröbner basis, it is reduced:
\[thm:reduced\] If the canonical form of a neural ideal $J_C$ is a Gröbner basis of $J_C$, then it is a reduced Gröbner basis of $J_C$.
Suppose for contradiction that ${\mathrm{CF}}(J_C)$ is a Gröbner basis, but not a reduced Gröbner basis. Then there exist $f, g \in {\mathrm{CF}}(J_C)$, with $f \neq g$, such that ${\mathrm{LT}}(g)$ divides some term of $f$. Thus, ${\mathrm{LT}}(g)$ divides ${\mathrm{LT}}(f)$ (because every term in a pseudo-monomial divides the leading term). Thus, ${\mathrm{CF}}(J_C)$ and ${\mathrm{CF}}(J_C) \setminus \{f\}$ both generate the same ideal of leading terms, and hence ${\mathrm{CF}}(J_C) \setminus \{f\}$ is also a Gröbner basis of $J_C$. It follows that the remainder on division of $f$ by ${\mathrm{CF}}(J_C) \setminus \{f\}$ is 0, so by Theorem \[thm:divbyset\], there exists $h \in {\mathrm{CF}}(J_C) \setminus \{f\}$ such that $h | f$. Hence, $f$ is a non-minimal element of the canonical form, which is a contradiction.
Now we can prove Theorem \[thm:main\], which states that a canonical form that is a Gröbner basis is the universal Gröbner basis:
Follows from Propositions \[prop:a-universal\] and \[thm:reduced\].
Every pseudo-monomial in a reduced Gröbner basis is in the canonical form
-------------------------------------------------------------------------
In this subsection, we prove the following partial converse of Theorem \[thm:main\]: if the universal Gröbner basis of a neural ideal consists of only pseudo-monomials, then it equals the canonical form (Theorem \[thm:summary\]).
We first show that every pseudo-monomial in a reduced Gröbner basis is in the canonical form.
\[prop:pseudo-mon-in-reduced\] Let $J_C$ be a neural ideal.
1. Let $G$ be a reduced Gröbner basis of $J_C$. Then every pseudo-monomial in $G$ is in the canonical form of $J_C$.
2. Let $\widehat{G}$ be the universal Gröbner basis of $J_C$. Then every pseudo-monomial in $\widehat{G}$ is in the canonical form of $J_C$.
Let $f$ be a pseudo-monomial in $G$, and suppose for contradiction that $f$ is [*not*]{} a minimal pseudo-monomial in $J_C$: that is, $h \mid f$ for some pseudo-monomial $h\in J_C$ with $\deg(h)<\deg(f)$. Since $G$ is a Gröbner basis, ${\mathrm{LT}}(g)\mid{\mathrm{LT}}(h)$ for some $g\in G$. Hence, ${\mathrm{LT}}(g) \mid {\mathrm{LT}}(f)$ (because ${\mathrm{LT}}(h) \mid {\mathrm{LT}}(f)$) and also $g\neq f$ (because $\deg(g) \leq \deg(h) < \deg(f)$). This is a contradiction: $f$ and $g$ cannot both be in a reduced Gröbner basis. Thus, $f$ is minimal, and so $f$ is in the canonical form. Finally, (2) follows directly from (1), as the universal Gröbner basis is the union of the reduced Gröbner bases.
\[thm:summary\] Let $J_C$ be a neural ideal. The following are equivalent:
1. the canonical form of $J_C$ is a Gröbner basis of $J_C$,
2. the canonical form of $J_C$ is [the]{} universal Gröbner basis of $J_C$, and
3. [the]{} universal Gröbner basis of $J_C$ consists of pseudo-monomials.
The implication (1)$\Rightarrow$(2) is Theorem \[thm:main\], and both (1)$\Leftarrow$(2) and (2)$\Rightarrow$(3) are clear. For (3)$\Rightarrow$(1), assume that the universal Gröbner basis $\widehat{G}$ consists of pseudo-monomials. Then, by Proposition \[prop:pseudo-mon-in-reduced\](2), $\widehat{G}$ is contained in the canonical form of $J_C$. Thus, the canonical form contains a Gröbner basis of $J_C$ (namely, $\widehat{G}$) and hence is itself a Gröbner basis.
\[rem:universal\] Suppose we want to know whether a code’s canonical form is a Gröbner basis. Theorem \[thm:summary\] tells us how to do so [*without*]{} computing the canonical form: compute the universal Gröbner basis, and then check whether it contains only pseudo-monomials. See Example \[ex:universal-code\].
Under certain conditions (e.g., a small number of neurons), computing a Gröbner basis is more efficient than computing the canonical form. But is there some way to avoid computations entirely and still decide whether the canonical form is a Gröbner basis? In the next section, we give conditions under which this decision problem can be resolved quickly.
\[ex:universal-code\] Consider the neural code $C = \{0100,0101,0111\}$. The universal Gröbner basis of $J_C$ is $\widehat{G} = \{ x_3(x_4 + 1) ,~ x_2 + 1,~ x_1\}$, so it contains only pseudo-monomials. Thus, by Theorem \[thm:summary\], $\widehat{G}$ is the canonical form.
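This example can be probed computationally. Assuming SymPy is available, the sketch below builds $J_C$ from the characteristic pseudo-monomials of the 13 non-codewords and computes the reduced lex Gröbner basis under all $4!$ orderings of the variables; each one turns out to be the same set $\{x_1,\ x_2+1,\ x_3x_4+x_3\}$. (Sampling term orders is evidence consistent with the example, not a proof of universality.)

```python
from itertools import permutations, product
from sympy import symbols, groebner, expand, Mul

xs = symbols('x1 x2 x3 x4')
x1, x2, x3, x4 = xs

def rho(v):
    """Characteristic pseudo-monomial of the 0/1 vector v."""
    return expand(Mul(*[x if vi else 1 + x for vi, x in zip(v, xs)]))

C = {(0, 1, 0, 0), (0, 1, 0, 1), (0, 1, 1, 1)}
gens = [rho(v) for v in product((0, 1), repeat=4) if v not in C]

expected = {x1, x2 + 1, x3 * x4 + x3}
all_orders_agree = all(
    set(groebner(gens, *perm, order='lex', modulus=2).exprs) == expected
    for perm in permutations(xs)
)
```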
\[ex:not-universal-code\] Consider the neural code $C = \{0101,1100,1110\}$. The universal Gröbner basis of $J_C$ is $\widehat{G} = \{x_4 x_3,~ x_3 (x_1 + 1),~ x_1 + x_4 + 1,~ x_2 + 1\}$, which contains the non-pseudo-monomial $x_1+x_4+1$. Thus, by Theorem \[thm:summary\], the canonical form is not a universal Gröbner basis of $J_C$. Indeed, the canonical form is ${\rm CF}(J_C)=\{x_3 (x_1 + 1),~ x_2 + 1,~ (x_4 + 1) (x_1 + 1),~ x_4 x_1,~ x_4 x_3\}$, and, for a monomial ordering where $x_4>x_1$, the leading term of the non-pseudo-monomial $x_1+x_4+1$ is $x_4$, which is [*not*]{} divisible by any of the leading terms from the canonical form.
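Assuming SymPy is available, the appearance of the non-pseudo-monomial can be observed directly: the sketch below verifies that $x_1+x_4+1$ lies in $J_C$ and that it occurs in the reduced Gröbner basis for a lex order with $x_4 > x_1$:

```python
from itertools import product
from sympy import symbols, groebner, expand, Mul

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
xs = (x1, x2, x3, x4)

def rho(v):
    """Characteristic pseudo-monomial of the 0/1 vector v."""
    return expand(Mul(*[x if vi else 1 + x for vi, x in zip(v, xs)]))

C = {(0, 1, 0, 1), (1, 1, 0, 0), (1, 1, 1, 0)}
gens = [rho(v) for v in product((0, 1), repeat=4) if v not in C]

# Lex order with x4 > x3 > x2 > x1:
G = groebner(gens, x4, x3, x2, x1, order='lex', modulus=2)
in_ideal = G.contains(x1 + x4 + 1)
in_basis = (x1 + x4 + 1) in set(G.exprs)
```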
When is the canonical form a Gröbner basis? {#sec:gb}
===========================================
In this section we present some results that partially answer the question of when the canonical form is a Gröbner basis of the neural ideal. A complete answer to this question is not only of theoretical interest but perhaps also of practical relevance. Extensive computations suggest that, under certain conditions, Gröbner bases of neural ideals can be computed more efficiently than canonical forms; this is true for small neural codes. Moreover, the iterative nature of the newer canonical form algorithm hints at the possibility of computing canonical forms and Gröbner bases of neural codes in large dimensions by ‘gluing’ those of codes on small dimensions. Such decomposition results are a common theme in other areas of applied algebraic geometry, such as algebraic statistics and phylogenetic algebraic geometry [@AR08; @EKS14].
Table \[runtimes\] displays a runtime comparison between the iterative canonical form algorithm described in [@neural-ideal-sage] and a specialized Gröbner basis algorithm for Boolean rings implemented in SageMath based on the work in [@polybori]. We report the mean time (in seconds) over 100 randomly generated codes on $n$ neurons for $n = 4, \dots, 8$. More precisely, for each code, a number $m$ was chosen uniformly at random from $\{1,\dots, 2^n-1 \}$, and then $m$ codewords were chosen at random. These computations were performed on SageMath 7.2 running on a MacBook Pro with a 2.8 GHz Intel Core i7 processor and 16 GB of memory.
Dimension 4 5 6 7 8
---------------- --------- --------- --------- --------- ---------
Canonical form 0.0016 0.0076 0.108 0.621 1.964
Gröbner basis 0.00147 0.00202 0.00496 0.01604 0.16638
: Runtime comparison of canonical form versus Gröbner basis computations.[]{data-label="runtimes"}
For codes on a larger number of neurons, our computations indicate that in general Gröbner basis computations are still more efficient than canonical form computations. However, even in the case of $n=9$ neurons, we found codes whose Gröbner bases took over 6 hours to compute.
\[prop:size-of-C\] Let $C$ be a neural code on $n$ neurons. If $|C| = 1$ or $|C| = 2^n-1$, then the canonical form of $J_C$ is the universal Gröbner basis of $J_C$.
If $C = \{c\}$, then Lemma \[lemma\_3.2\] implies that $J_C = \langle x_1-c_1,~ x_2-c_2,~\dots,~ x_n-c_n\rangle$. When $|C| = 2^n-1$, then by definition $J_C = \langle \rho_v \rangle$ for the unique $v\notin C$. In either case, the indicated generating set is both the canonical form and the universal Gröbner basis of $J_C$.
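For instance, for the singleton code $C = \{101\}$ (an example we choose for illustration), a Gröbner basis computation recovers the generating set promised by the proposition. A sketch assuming SymPy is available:

```python
from itertools import product
from sympy import symbols, groebner, expand, Mul

x1, x2, x3 = symbols('x1 x2 x3')
xs = (x1, x2, x3)

def rho(v):
    """Characteristic pseudo-monomial of the 0/1 vector v."""
    return expand(Mul(*[x if vi else 1 + x for vi, x in zip(v, xs)]))

# J_C for the singleton code C = {101} is generated by the 7 rho_v with v != c:
c = (1, 0, 1)
gens = [rho(v) for v in product((0, 1), repeat=3) if v != c]
G = groebner(gens, *xs, modulus=2)
basis = set(G.exprs)  # the reduced basis {x1 + 1, x2, x3 + 1}
```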
A set of subsets $\Delta\subseteq 2^{[n]}$ is an (abstract) **simplicial complex** if $\sigma \in \Delta$ and $\tau \subseteq \sigma$ implies $\tau \in \Delta$. A neural code $C$ is a simplicial complex if its support $\mathrm{supp}(C)$ is a simplicial complex.
If $C$ is a simplicial complex, then the canonical form of $J_C$ is the universal Gröbner basis of $J_C$.
If $C$ is a simplicial complex, then $J_C$ is a monomial ideal generated by the minimal Type 1 relationships (indeed, it is the Stanley-Reisner ideal of the simplicial complex $\mathrm{supp}(C)$) [@neural_ring Lemma 4.4]. These minimal Type-1 relationships comprise the canonical form of $J_C$, and also form the universal Gröbner basis of $J_C$.
The next result gives conditions that guarantee that the canonical form is not a Gröbner basis.
Let $\mathcal{U}=\{U_i\}_{i=1}^n$ be a collection of sets in a stimulus space $X$, and let $C=C(\mathcal{U})$ denote the corresponding receptive field code. If one of the following conditions holds, then the canonical form of $J_C$ is [*not*]{} a Gröbner basis of $J_C$:
1. Two proper, nonempty receptive fields coincide: $\emptyset \neq U_i = U_j \subsetneq X$ for some $i\neq j \in [n]$.
2. Two nonempty receptive fields are complementary: $U_i = X \setminus U_j$ for some $i\neq j \in [n]$ with $U_i \neq \emptyset$ and $U_j \neq \emptyset$.
\(1) Suppose $U_i, U_j \in \mathcal{U}$ are two sets with $\emptyset \neq U_i = U_j \subsetneq X$. By Lemma \[lem:receptivefields\], both $f = x_i(x_j+1)$ and $g = x_j(x_i+1)$ are in $J_C$. In fact, $f$ and $g$ are minimal pseudo-monomials in $J_C$ (because $\emptyset \neq U_i=U_j \neq X$), so $f, g \in \mathrm{CF}(J_C)$. Under any monomial ordering, $\mathrm{LT}(f) = \mathrm{LT}(g) = x_ix_j$ (by Lemma \[lem:pseudo\]), so the set $\mathrm{CF}(J_C)$ is not reduced and thus cannot be a reduced Gröbner basis. Hence, by Proposition \[thm:reduced\], $\mathrm{CF}(J_C)$ cannot be a Gröbner basis.
\(2) Now assume that $U_i = X \setminus U_j$ for some $i\neq j \in [n]$, with $U_i \neq \emptyset$ and $U_j \neq \emptyset$. Thus, $U_i\cap U_j = \emptyset$ and $U_i \cup U_j = X$, so Lemma \[lem:receptivefields\] implies that $f = x_ix_j$ and $g = (x_i+1)(x_j+1)$ are in $J_C$. Now we proceed as in the previous paragraph: $f$ and $g$ are minimal pseudo-monomials in $\mathrm{CF}(J_C)$, and $\mathrm{LT}(f) = \mathrm{LT}(g) = x_ix_j$, so, by Proposition \[thm:reduced\], $\mathrm{CF}(J_C)$ cannot be a Gröbner basis.
The last result in this section concerns a class of codes that we call **complement-complete**.
The **complement** of $c \in \{0,1\}^n$ is the codeword $\overline{c} \in \{0,1\}^n$ defined by $\overline{c}_i = 1$ if and only if $c_i = 0$. A neural code $C$ is **complement-complete** if for every $c \in C$, the complement $\overline{c}$ is also in $C$.
\[ex:comp-compcode\] The complement of the codeword $c_1=1000$ is $\overline{c_1}=0111$, and the complement of $c_2=1010$ is $\overline{c_2}=0101$. Thus, the code $C=\{1000,0111,1010,0101\}$ is complement-complete.
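Complement-completeness is straightforward to test mechanically; a minimal plain-Python sketch with codewords as bit tuples:

```python
def complement(c):
    """Bitwise complement of a codeword given as a tuple of bits."""
    return tuple(1 - b for b in c)

def is_complement_complete(C):
    """A code is complement-complete if it is closed under complementation."""
    return all(complement(c) in C for c in C)

# The code from Example ex:comp-compcode:
C = {(1, 0, 0, 0), (0, 1, 1, 1), (1, 0, 1, 0), (0, 1, 0, 1)}
# is_complement_complete(C) is True; dropping any single codeword breaks it.
```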
The **complement** of a pseudo-monomial $f = x_{\sigma}\prod_{i\in \tau}(1+x_i)$ is the pseudo-monomial $\overline{f} = x_\tau\prod_{j\in \sigma}(1+x_j)$.
\[lem:bar\] Consider pseudo-monomials $f = x_{\sigma}\prod_{i\in \tau}(1+x_i)$ and $g = x_{\sigma'}\prod_{i\in \tau'}(1+x_i)$. If $f$ divides $g$, then $\overline{f}$ divides $\overline{g}$.
This follows from Lemma \[lem:hypdiv\]: $f\mid g$ if and only if $\sigma\subseteq \sigma'$ and $\tau \subseteq \tau'$, and complementation simply exchanges the roles of the two index sets, so the same containments give $\overline{f}\mid\overline{g}$.
\[prop:complement-complete\] Let $C$ be a code on $n$ neurons, with $C \subsetneq \{0,1\}^n$. If $C$ is complement-complete, then the canonical form of $J_C$ is [*not*]{} a Gröbner basis of $J_C$.
Note that since $C \neq \{0,1\}^n$, $J_C$ is not trivial. We make the following claim:\
[Claim:]{} If $h$ is a pseudo-monomial in $J_C$, then $\overline{h}$ is also in $ J_C$.\
To see this, let $S$ be the set of all degree-$n$ pseudo-monomials in $\mathbb{F}_2[x_1,\dots, x_n]$ that are multiples of $h$ (so, $S \subseteq J_C$). Degree-$n$ pseudo-monomials in $\mathbb{F}_2[x_1,\dots, x_n]$ are characteristic functions $\rho_v$, so, every element of $S$ is some $\rho_v$, where $v \notin C$. Thus, every element of $\overline{S}:=\{\overline f \mid f \in S \}$ has the form $ \overline{\rho_v} = \rho_{\overline{v}} $, where $v \notin C$, which is equivalent to $\overline{v} \notin C$, as $C$ is complement-complete. So, $\overline{S} \subseteq J_C$.
Next, let $s \in S$, that is, $s = hq$ for some pseudo-monomial $q$. Then $h\overline{q}$ is also in $S$. Since $\gcd(q,\overline{q}) = 1$, it follows that $h = \gcd(hq, h \overline{q})$, so $h = \gcd\{S\}$. Thus, $\overline{h} = \gcd\{\overline{S}\}$, so $\overline{h} \in J_C$ (because $\overline{S} \subseteq J_C$), which proves the claim.
Now let $f \in CF(J_C)$. By the claim, $\overline{f}$ is in $J_C$, and now we assert that, like $f$, the pseudo-monomial $\overline{f}$ is in $CF(J_C)$. Indeed, if a pseudo-monomial $d$ in $J_C$ divides $\overline{f}$, then by Lemma \[lem:bar\], the pseudo-monomial $\overline{d}$ divides $f$. Also, $\overline{d} \in J_C$ (by the claim), so $\overline{d}=f$ (because $f$ is minimal), and thus $d = \overline{f}$. Hence, $\overline{f}$ is minimal, and so $\overline{f}$ is also in $CF(J_C)$. Thus, $CF(J_C)$ contains two polynomials ($f$ and $\overline{f}$) with the same leading term, and so is not a reduced Gröbner basis, and thus (by Proposition \[thm:reduced\]) is not a Gröbner basis of $J_C$.
Consider again the complement-complete code $C=\{1000,0111,1010,0101\}$ from Example \[ex:comp-compcode\]. The canonical form is $CF(J_C)=\{(x_1+1)(x_2+1),~(x_1+1)(x_4+1),~x_1x_2,~x_2(x_4+1),~x_1x_4,~x_4(x_2+1)\}$. Note that $CF(J_C)$ is itself complement-complete; for example, $f=x_2(x_4+1)$ and $\overline{f}=x_4(x_2+1)$ are both in $CF(J_C)$. Also, we can show directly that $CF(J_C)$ is not a Gröbner basis, which is consistent with Proposition \[prop:complement-complete\]: with respect to any monomial ordering, the leading term of $f+\overline{f}=x_2+x_4$ is not divisible by any of the leading terms in $CF(J_C)$.
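The example can also be verified computationally. The following Python sketch encodes each pseudo-monomial $x_\sigma\prod_{i\in\tau}(1+x_i)$ as a pair of (0-indexed) index sets — this encoding is only our own bookkeeping — and confirms that the canonical form vanishes exactly on $C$:

```python
from itertools import product

# CF(J_C) for C = {1000, 0111, 1010, 0101}: each pseudo-monomial
# x_sigma * prod_{i in tau}(1 + x_i) is stored as the pair (sigma, tau).
CF = [
    ((), (0, 1)),      # (x1+1)(x2+1)
    ((), (0, 3)),      # (x1+1)(x4+1)
    ((0, 1), ()),      # x1*x2
    ((1,), (3,)),      # x2*(x4+1)
    ((0, 3), ()),      # x1*x4
    ((3,), (1,)),      # x4*(x2+1)
]

def evaluate(pm, v):
    # A pseudo-monomial takes value 1 at v iff v_i = 1 on sigma and v_i = 0 on tau
    sigma, tau = pm
    return int(all(v[i] == 1 for i in sigma) and all(v[i] == 0 for i in tau))

def variety(polys, n=4):
    # V: all points of F_2^n at which every polynomial vanishes
    return {v for v in product((0, 1), repeat=n)
            if not any(evaluate(f, v) for f in polys)}
```

Running `variety(CF)` returns exactly the four codewords of $C$, confirming $V(J_C)=C$.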
New receptive-field relationships {#sec:new-RF}
=================================
We saw earlier that if the universal Gröbner basis of a neural ideal consists of only pseudo-monomials, then it equals the canonical form (Theorem \[thm:summary\]). When this is not the case, there are non-pseudo-monomial elements in the universal Gröbner basis, so it is natural to ask what they tell us about the receptive fields of the code. In other words, what types of RF relationships, besides those of Types 1–3 (Lemma \[lem:receptivefields\]), appear in Gröbner bases? Here we give a partial answer:
\[thm:new-RF-relns\] Let $\mathcal{U}=\{U_i\}_{i=1}^n$ be a collection of sets in a stimulus space $X$. Let $C=C(\mathcal{U})$ denote the corresponding receptive field code, and let $J_C$ denote the neural ideal. Then for any subsets $\sigma_1,\sigma_2,\tau_1,\tau_2 \subseteq [n]$, and $m$ indices $1 \leq i_1 < i_2 < \dots < i_m \leq n$, with $m \geq 2$, we have RF relationships as follows:
- $x_{\sigma_1} \prod_{i \in \tau_1} (1 + x_i) + x_{\sigma_2} \prod_{j \in \tau_2} (1 + x_j) \in J_C$ $\Rightarrow$ $U_{\sigma_1}\cap \left( \bigcap_{i\in \tau_1}U_i^c \right) = U_{\sigma_2}\cap \left( \bigcap_{j\in \tau_2}U_j^c\right)$.
- $x_{i_1}+ \dots +x_{i_m} \in J_C$ $\Rightarrow$ $U_{i_k}\subseteq\bigcup_{j\in [m] \setminus \{k\} }U_{i_j}$ for all $k=1, \dots, m$, and if, additionally, $m$ is odd, then $\bigcap_{k=1}^m U_{i_k}=\emptyset$.
- $x_{i_1}+ \dots +x_{i_m}+1 \in J_C$ $\Rightarrow$ $\bigcup_{k=1}^m U_{i_k}=X$.
Throughout the proof, for $p \in X$, we let $c(p)$ denote the corresponding codeword in $C$.
[Type 4.]{} Let $f_1:=x_{\sigma_1} \prod_{i \in \tau_1} (1+x_i)$, and let $f_2:=x_{\sigma_2} \prod_{j \in \tau_2} (1+x_j)$. Also, let $W_1 := U_{\sigma_1} \cap \left( \bigcap_{i\in \tau_1}U_i^c \right)$, and let $W_2 := U_{\sigma_2} \cap \left( \bigcap_{j\in \tau_2}U_j^c \right)$. By symmetry, we need only show that $W_1 \subseteq W_2$. To this end, let $p \in W_1$ (so, $c(p) \in C$). First, because $f_1+f_2 \in J_C$ and $V(J_C)=C$, it follows that $f_1(c(p))=f_2(c(p))$. Next, for $i=1,2$, we have $p \in W_i$ if and only if $f_i(c(p))=1$. Thus, $p \in W_2$.

[Type 5.]{} Let $g:= x_{i_1}+\dotsm +x_{i_m}$. By symmetry, we need only show that $U_{i_1} \subseteq \bigcup_{l=2}^m U_{i_l}$. To this end, let $ p \in U_{i_1}$ (so, $c(p)_{i_1}=1$). Then $g \in J_C$ implies the following equality in $\mathbb{F}_2$: $$\begin{aligned}
\label{eq:g}
0 ~=~ g(c(p)) ~=~ c(p)_{i_1} + c(p)_{i_2} + \dots + c(p)_{i_m} ~=~ 1 + c(p)_{i_2} + \dots + c(p)_{i_m}~.\end{aligned}$$ Thus, for some $k \geq 2$, we have $c(p)_{i_k}=1$, i.e., $p \in U_{i_k}$. Hence, $p \in \bigcup_{l=2}^m U_{i_l}$.
Now assume, additionally, that $m$ is odd. Suppose, for contradiction, that there exists $q \in \bigcap_{k=1}^m U_{i_k}$. Then, as in the sum above, we have $0=g(c(q))=1+\dots+1$ ($m$ terms) in $\mathbb{F}_2$; since $m$ is odd, this sum equals $1$, a contradiction. So, $\bigcap_{k=1}^m U_{i_k} = \emptyset$.
[Type 6.]{} Let $h:=x_{i_1}+ \dots +x_{i_m}+1$. Let $p \in X$ (so, $c(p) \in C$). We must show that $p \in \bigcup_{k=1}^m U_{i_k}$. Because $h \in J_C$, we have $0 = h(c(p)) = c(p)_{i_1} + \dots + c(p)_{i_m} + 1$. Thus, for some $k \in [m]$, we have $c(p)_{i_k}=1$, i.e., $p \in U_{i_k}$. Hence, $p \in \bigcup_{k=1}^m U_{i_k}$.
\[rem:equality\] Like the earlier RF relationships (those of Types 1–3 from Lemma \[lem:receptivefields\]), some of our new ones (Types 4–6) are containments and some are equalities.
\[ex:type-6\] Recall the code $C = \{0101,1100,1110\}$, from Example \[ex:not-universal-code\], for which the universal Gröbner basis of $J_C$ is $\widehat{G} = \{x_4 x_3,~ x_3 (x_1 + 1),~ x_1 + x_4 + 1,~ x_2 + 1\}$. The polynomial $x_1+x_4+1$ encodes a Type 6 relationship: $U_1\cup U_4=X$. Also, the polynomial $x_2+1$ encodes a Type 3 relationship: $U_2=X$; together these give $U_1\cup U_4=U_2$. The canonical form also contains the polynomial $x_1x_4$, which encodes a Type 1 relationship: $U_1\cap U_4=\emptyset$. We conclude that $U_1 \dot \cup U_4=U_2$.
\[ex:types-4-5\] Consider the code $C=\{00,11\}$. The universal Gröbner basis of $J_C$ is $\widehat{G} = \{ x_1(1+x_1),~ x_1+x_2, ~x_2(1+x_2) \}$. The polynomial $x_1+x_2$ encodes a Type 4 relationship: $U_1=U_2$. (The polynomial $x_1+x_2$ also encodes Type 5 relationships.) This points to one of the advantages of our new RF relationships: we can read off some set equalities more quickly than from the canonical form. Indeed, the canonical form is ${\rm CF}(J_C)=\{x_1(1+x_2),~x_2(1+x_1) \}$, in which the Type 2 relationships are $U_1 \subseteq U_2$ and $U_2 \subseteq U_1$; only from there do we infer the equality $U_1=U_2$.
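Since $V(J_C)=C$, any polynomial of $J_C$ vanishes on every codeword, and for the linear polynomials of Types 5 and 6 this necessary condition is a one-line test. A small Python sketch (the helper name is ours), applied to the two examples above:

```python
def vanishes_on_code(indices, code, constant=0):
    # Evaluates x_{i_1} + ... + x_{i_m} (+ constant) over F_2 at each codeword;
    # indices are 0-based positions into each codeword tuple.
    return all((sum(c[i] for i in indices) + constant) % 2 == 0 for c in code)

# Type 4/5 example: x1 + x2 vanishes on C = {00, 11}, reflecting U_1 = U_2.
C1 = [(0, 0), (1, 1)]
print(vanishes_on_code([0, 1], C1))              # True

# Type 6 example: x1 + x4 + 1 vanishes on C = {0101, 1100, 1110},
# reflecting U_1 ∪ U_4 = X.
C2 = [(0, 1, 0, 1), (1, 1, 0, 0), (1, 1, 1, 0)]
print(vanishes_on_code([0, 3], C2, constant=1))  # True
```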
Discussion {#sec:discussion}
==========
In this work, we proved that if a code’s canonical form is a Gröbner basis of the neural ideal, then it is the universal Gröbner basis. Additionally, we gave conditions that guarantee or preclude this situation, and found three new types of receptive-field relationships that arise in neural ideals. Going forward, there are natural extensions to pursue:
1. Give a complete characterization of codes for which the canonical form is a Gröbner basis.
2. Beyond those of Types 1–6, what other receptive-field relationships can be read off from a Gröbner basis, and what do they tell us about a code?
Solutions to these problems would help us extract information about the receptive-field structure directly from the neural code.
Finally, we expect that our results can be used to improve canonical-form algorithms. Indeed, our experiments indicate that under certain conditions, Gröbner bases can be computed more efficiently than canonical forms. Moreover, every pseudo-monomial in the universal Gröbner basis of a neural ideal is in the canonical form, so that subset of the canonical form can be obtained directly from the universal Gröbner basis. In the case when the universal Gröbner basis contains only pseudo-monomials, we can conclude immediately that it is in fact the canonical form. We also hope to develop decomposition results to build the canonical forms and Gröbner bases of codes in large dimensions by ‘gluing’ those of codes in smaller dimensions.
Acknowledgments {#acknowledgments .unnumbered}
---------------
[ DM, RK, and EP conducted this research as part of the 2015 Pacific Undergraduate Research Experience in Mathematics Interns Program funded by the NSF (DMS-1045147 and DMS-1045082) and the NSA (H98230-14-1-0131), in which RG and LG served as mentors and KP was a GTA. JL conducted this research as part of the 2016 NSF-funded REU in the Department of Mathematics at Texas A&M University (DMS-1460766), in which AS served as mentor and KP was a GTA. The authors thank Ihmar Aldana, Carina Curto, Vladimir Itskov, and Ola Sobieska for helpful discussions. LG was supported by the Simons Foundation Collaboration grant 282241. AS was supported by the NSF (DMS-1312473/DMS-1513364). The authors thank an anonymous referee for helpful comments which improved this work. ]{}
---
abstract: |
Firewall configuration is critical, yet often conducted manually with inevitable errors, leaving networks vulnerable to cyber attack [@wool2004]. The impact of misconfigured firewalls can be catastrophic in networks that control the distributed assets of industrial systems, such as power generation and water distribution systems. Automation can make designing firewall configurations less tedious and their deployment more reliable.
In this paper, we propose *ForestFirewalls*, a high-level approach to configuring firewalls. Our goals are three-fold. We aim to: first, decouple implementation details from security-policy design by abstracting the former; second, simplify policy design; and third, provide automated checks, pre- and post-deployment, to guarantee configuration accuracy. We achieve these goals by automating the implementation of a policy on a network and by auto-validating each stage of the configuration process. We test our approach on a real SCADA network to demonstrate its effectiveness.
author:
-
bibliography:
- 'myBibliography.bib'
nocite:
- '[@byres2005]'
- '[@cheswick2003]'
- '[@stouffer2008]'
- '[@byres2012]'
- '[@isa2007]'
- '[@jamieson2009]'
- '[@mayer2000]'
- '[@bellovin2009]'
- '[@cisco2010]'
- '[@ranathunga2015]'
- '[@yuan2006]'
- '[@liu2008]'
- '[@pizzonia2008]'
- '[@howe1996]'
- '[@rubin1998]'
- '[@cisco2014]'
- '[@juniper2011]'
- '[@checkpoint2008]'
- '[@bartal1999s]'
- '[@pearce1998]'
- '[@knight2013]'
- '[@libes1995]'
- '[@soule2014]'
- '[@jackson2012]'
- '[@bbc2014]'
- '[@levin2013]'
- '[@gutz2012]'
- '[@anderson2014]'
- '[@foster2011]'
- '[@cisco2014b]'
- '[@ranathunga2015T]'
- '[@ranathunga2015V]'
- '[@ranathunga2015P]'
- '[@ranathunga2015W]'
- '[@patrick1995]'
- '[@prakash2015]'
title: 'ForestFirewalls: Getting Firewall Configuration Right in Critical Networks (Technical Report)'
---
firewall auto-configuration, SCADA network security, security policy, policy verification, Zone-Conduit model.
---
abstract: 'Network Functions Virtualization (NFV) and Network Coding (NC) have attracted much attention in recent years as key concepts for providing 5G networks with flexibility and differentiated reliability, respectively. In this paper, we present the integration of NC architectural design and NFV. To do so, we first describe what we call a virtualization process built upon our proposed architectural design of NC, which should help to offer the reliability functionality to a network. The process consists of identifying the required functional entities of NC and analyzing when the functionality should be activated with a view to complexity/energy efficiency. The relevance of our proposed NC function virtualization is its applicability to any underlying physical network, satellite or hybrid, thus enabling softwarization and rapid innovative deployment. Finally, we validate our framework on a study case of geo-control of network reliability, based on the devices’ geographical location and signal/network information.'
author:
- |
Tan Do-Duy, M. Angeles Vazquez Castro\
Dept. of Telecommunications and Systems Engineering\
Autonomous University of Barcelona, Spain\
Email: {tan.doduy, angeles.vazquez}@uab.es
bibliography:
- 'NCFV.bib'
title: Network Coding Function Virtualization
---
Introduction \[sec:intro\]
==========================
NC has recently emerged as a new approach for improving network performance in terms of throughput and reliability, especially in wireless networks. Coding operations are executed at the source and/or re-encoded at intermediate nodes so that receivers are able to recover from packet losses.
NFV has been proposed as a promising way for the telecommunications sector to facilitate the design, deployment, and management of networking services. Essentially, NFV separates software from physical hardware so that a network service can be implemented as a set of virtualized network functions through virtualization techniques and run on commodity standard hardware. NFV can be instantiated on demand at different network locations without requiring the installation of new equipment. This key idea helps network operators deploy new network services faster over the same physical hardware, which reduces capital investment and the time to market of a new service. It is now the turn of the telecommunications technologies to detach software from the hardware on which the telecommunications functions run (which otherwise makes equipment vendor-dependent and expensive). Such detachment demands that functionalities and infrastructures be designed under this new paradigm.
The integration of NC and virtualization opens up the applicability of NC functionalities in future networks (e.g., upcoming 5G networks) in both distributed (i.e., at each network device) and centralized (i.e., at servers or service providers) manners. The European Telecommunications Standards Institute (ETSI) has proposed some use cases for NFV in [@NFV001.2013], including the virtualization of the cellular base station, mobile core network, home network, and fixed access network. In particular, some efforts on the combination of NC virtualization and SDN technology are already available. For example, in [@Szabo2.2015], the authors investigate the usability of random linear NC as a function in SDNs to be deployed with virtual (software) OpenFlow switches. The NC functions, including encoder, re-encoder, and decoder, are run on a virtual machine (ClickOS). This work provided a prototype combining NC and SDN. It also includes the implementation of additional network functions via virtual machines (VMs) by sharing system resources or additional hardware (e.g., FPGA cards). In [@Hansen.2015], the integration of network coding running in a VM into Open vSwitch shows the relation between the coding software, the VM, and the host OS. The paper indicated that the integration of NC as a functionality of a software-defined network is possible. However, a unified theoretical framework for NC design in view of NFV is currently missing.
The contributions of this paper can be summarized as follows:
- An architectural design framework for NC is proposed, which includes a functional domain design that enables virtualization-based design.
- A virtualization process is presented by which NC can operate as a reliability functionality to any physical underlying network.
- We validate our framework for an illustrative study case of geo-controlled network reliability, in which the geographical location-based information of network devices is exploited in the definition of the required functional entities.
The rest of this paper is organized as follows. In Section \[sec:NCframework\], we propose the architectural design domains of NC. In Section \[sec:Virtualization\], we present the higher-level architecture and the virtualization process, identifying an instance of NC functionalities for the network. In Section \[sec:Reliability\], we show when the NC functionality should be activated considering complexity/energy efficiency, and we analyze the feasibility of these ideas with today’s technologies via a use case of NCFV for geo-controlled networks. Finally, in Section \[sec:CONC\], we present conclusions and further work.
NC Architectural Design Domains \[sec:NCframework\]
===================================================
Virtualization and NC are two different techniques that address different challenges in the design of upcoming network technologies. The combination of virtualization and NC brings forth a potential solution for the management and operation of future networks [@Hansen.2015]. We therefore propose distinct design domains, in order to distinguish the different abstraction processes: those related to NC and those related to NCFV.
Such domains are proposed as follows [@MAVazquez.2015]:
- **NC coding domain** - domain for the design of coding theoretical network codes, encoding/decoding algorithms, performance benchmarks, appropriate mathematical-to-logic maps, etc.
- **NC functional domain** - domain for the design of the functional properties of NC to match design requirements built upon abstractions of
- **Network**: by choosing a reference layer in the standardized protocol stacks and logical nodes for NC and re-encoding operations.
- **System**: by abstracting the underlying physical or functional system at the selected layer e.g. SDN and/or function virtualization.
- **NC protocol domain** - domain for the design of physical signaling/transporting of the information flow across the virtualized physical networks in one way or interactive protocols.
The domain relevant for NC as NFV is the NC functional domain, and we develop this domain in the next section by first interpreting NC as a network service for the control of reliability. It is noted that the NFV use cases mentioned above (e.g., [@NFV001.2013]) investigate the concentration of network functions in centralized architectures such as data centers or centralized locations proposed by network operators, service integrators, and providers. Nevertheless, our proposal is of a distributed nature, closer to Distributed-NFV (D-NFV) [@RAD.2014], which provides distribution of NFV throughout the network. Thus NCFV can be located at various parts of the network, including both centralized locations and distributed network nodes.
NC Function Virtualization \[sec:Virtualization\]
==================================================
NC as a Network Functionality \[sec:Service\]
---------------------------------------------
![(a) Layer-independent functional view of reliability layer and (b) graph presentation denotes higher-level architecture of NCFV in which NC is a service of the network.\[fig:NC-as-Service\]](GeneralFunctional.eps)
![(a) Layer-independent functional view of reliability layer and (b) graph presentation denotes higher-level architecture of NCFV in which NC is a service of the network.\[fig:NC-as-Service\]](Forwarding.eps)
In Fig. \[fig:NC-as-Service\](a), we illustrate the layer-independent functional view of a reliability layer at which the NC functionality would operate, which we identify in detail in Section \[sec:Process\]. Assume that the NC functionalities have been well designed. Then, in view of the higher-level architecture of NCFV, Fig. \[fig:NC-as-Service\](b) shows that the small boxes in green work according to the NC functionalities we have identified, while at the same time the overall design is compliant with existing proposals of NFV architectures such as [@NFV002.2013]. In terms of D-NFV, NCFV may reside anywhere in the network, e.g., data centers, central offices, and mobile devices.
The physical/network/system infrastructure, e.g., satellite networks or terrestrial wireless systems, represents the physical computing, storage, and network resources that provide processing, storage, and connectivity, respectively, to NCFV through the virtualization layer. The difference is that every virtual infrastructure will have its corresponding time/space scales and communication/computation resources.
In the architectural view, the virtualization layer ensures that the NCFV is separated from the hardware, so that the softwarized function can be deployed on different physical hardware resources. Typical solutions for the deployment of NFV are hypervisors and VMs. Furthermore, NFV can be realized as application software running on top of a non-virtualized server by means of an operating system (OS) [@NFV002.2013].
Virtualization Process \[sec:Process\]
--------------------------------------
Based on the key features of NFV design, we identify preliminary NC functionalities so that NC can provide reliability as a service to the network (see Fig. \[fig:NC-as-Service\](a)). In particular, we define the common functionalities that NC design needs for system-specific designs [@MAVazquez2.2015] as follows:
- **NC core functions**
- NC Logic: logical interpretation of coding use, coding scheme selection (intra-session/inter-session, coherent/incoherent, file-transfer/streaming, systematic/non-systematic), coding coefficients selection (random/deterministic), etc.
- NC Coding: elementary encoding/re-encoding/decoding operations, encapsulation/de-encapsulation, adding/removing headers, etc.
- **NC interoperable functions**
- NC Resource Allocation/Adaptation: optimal allocation of NC parameters.
- NC Congestion Control: controlling congestion, interoperable with other congestion-control algorithms.
- **NC console functions**
- NC Storage: interactions with physical memory.
- NC Feedback Manager: settings for feedback.
- NC Signaling: coordinating signaling parameters.
At the NC core blocks, interactions with other nodes bring about agreement on coding schemes, coefficient selection, etc. The NC coding block receives all inputs regarding the coding scheme, coefficients, coding parameters, packets from the storage block, and signaling, and performs the elementary encoding/re-encoding/decoding operations.
At the NC interoperable blocks, the NC adaptation block generates optimal coding parameters for the coding operation block. The NC interoperable functions also contain an NC geo-information block, which provides geographical location-based information and the required level of reliability to the resource-allocation process.
At the last stage, the NC console blocks connect directly to physical storage and to feedback from network nodes, providing NC packets and packet-loss information, respectively, to the upper stage. Depending on the specific design, the NC functional domain blocks allow the designer to adjust the NC functionalities according to the technical requirements and NC design objectives.
It is noted that in NFV, a virtualized NC function is a software package that implements such a network function. Our proposed functionalities allow for different (use-case-driven) implementations of such a package.
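To make the NC Coding entity concrete, the following Python sketch implements the elementary encode/decode operations of random linear NC over $GF(2)$, the simplest case; a deployed virtualized function would typically operate over $GF(2^q)$ and add the encapsulation, storage, and signaling entities listed above. All function names here are illustrative assumptions of ours, not part of any existing NFV package.

```python
import random

def rlnc_encode(sources, m, seed=None):
    # Emit m coded packets; each is the XOR of a random subset of the n source
    # packets, carried together with its GF(2) coefficient vector (the header).
    rng = random.Random(seed)
    n, length = len(sources), len(sources[0])
    coded = []
    for _ in range(m):
        coeffs = [rng.randint(0, 1) for _ in range(n)]
        if not any(coeffs):
            coeffs[rng.randrange(n)] = 1      # skip the useless all-zero combination
        payload = bytes(length)
        for c, src in zip(coeffs, sources):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, src))
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, n):
    # Online Gaussian elimination over GF(2); returns the n source packets,
    # or None while the received coefficient vectors are not yet full rank.
    basis = [None] * n                        # basis[i]: row whose leading 1 is at i
    for coeffs, payload in coded:
        coeffs, payload = list(coeffs), bytearray(payload)
        for i in range(n):
            if not coeffs[i]:
                continue
            if basis[i] is None:
                basis[i] = (coeffs, payload)
                break
            bc, bp = basis[i]
            coeffs = [a ^ b for a, b in zip(coeffs, bc)]
            payload = bytearray(a ^ b for a, b in zip(payload, bp))
    if any(row is None for row in basis):
        return None
    for i in reversed(range(n)):              # back-substitution to unit rows
        ci, pi = basis[i]
        for j in range(i + 1, n):
            if ci[j]:
                cj, pj = basis[j]
                ci = [a ^ b for a, b in zip(ci, cj)]
                pi = bytearray(a ^ b for a, b in zip(pi, pj))
        basis[i] = (ci, pi)
    return [bytes(p) for _, p in basis]
```

Losses can be modeled simply by dropping some coded packets before calling `rlnc_decode`; decoding succeeds once any $n$ linearly independent combinations survive.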
Case Study: Geo-controlled Network Reliability \[sec:Reliability\]
==================================================================
Scenario \[sec:Scenario\]
-------------------------
Fig. \[fig:Geo\] illustrates two cases of geo-control of reliability, called conventional and ultra-reliable communication, respectively. The former utilizes cooperation with neighboring devices as repeaters in line or multi-hop networks to support communication services beyond the cell coverage. The latter constitutes connections with surrounding devices to support communication links beyond the coverage of cellular networks, in which NCFV can be located at distributed network nodes, including user terminals, data centers, drones, and satellites, to offer the reliability functionality to the network.
![Network abstraction showing geo-control of reliability functionality.\[fig:Geo\]](Geo-NCFV.eps "fig:")\[fig:Geo-NCFV\]
In the following, we validate our proposed framework; the actual implementation over VMs is out of the scope of this paper.
Optimal strategy of virtual reliability function\[sec:optimized\]
-----------------------------------------------------------------
In this section we are interested in validating NCFV operation as a (virtual) functionality.
### Analytical model
Let $\delta$ be the vector of per-hop erasure rates, e.g., $\delta=(\delta_{0},\delta_{1})$ for 2 hops, let $R=n/m$ be the coding rate, and let $P_{L}^{NC}(R,\delta)$ be the packet loss rate after decoding. The packet success rate after decoding is defined as $P_{R}^{NC}(R,\delta)=1-P_{L}^{NC}(R,\delta)$. Let $L$ and $s=L/q$ denote the packet length in bits and in symbols, respectively. Let $N^{mult}=(m-n)\cdot n\cdot s$ and $N^{add}=(m-n)\cdot(n-1)\cdot s$ denote the complexity in terms of the number of multiplications and additions, respectively, required for the encoding process.
We are interested in investigating when NC should be activated in terms of (1) computational complexity/energy consumption and (2) a minimum requirement on the packet delivery rate ($\rho_{0}$). To do so, we define the following utility function: $$u^{act}(R,\delta,\rho_{0})=w_{NC}\cdot f^{NC}(R,\delta,\rho_{0})-w_{comp}\cdot f^{comp}(R)$$ where the first term accounts for the goodness of the encoding/decoding scheme in achieving the target performance $\rho_{0}$, and the second term accounts for the cost in computational complexity/energy consumption. Here $w=(w_{NC},w_{comp})$, with $w_{NC},w_{comp}\in(0,1)$ and $w_{NC}+w_{comp}=1$, are weight factors for the goodness and the cost, respectively.
We define the goodness and cost, respectively as follows
$$f^{NC}(R,\delta,\rho_{0})=\frac{P_{R}^{NC}(R,\delta)-\rho_{0}}{\rho_{0}}$$
$$f^{comp}(R)=\log(1+\bar{\beta}^{enc}(R))$$
where $\bar{\beta}^{enc}(R)=\frac{\beta^{enc}(R)}{\beta_{min}^{enc}}$ is the ratio between the encoding complexity and that of the encoding with minimum redundancy. We assume that the source has a limitation on computational complexity/energy, $\bar{\beta}^{enc}(R)\le\beta_{0}$ (e.g., $\beta_{0}=1.4$).
The source identifies the rate at which it maximizes its own utility, under the constraint on complexity/energy, by the following optimization strategy:
$$\underset{R}{\operatorname{arg\,max}}\; u^{act}(R,\delta,\rho_{0},w) \quad \text{subject to} \quad \bar{\beta}^{enc}(R)\le\beta_{0}.$$
This strategy determines the upper bound on the benefit for the source and, respectively, the highest performance $P_{R}^{NC}(R,\delta)$ that the source can provide to the destination for a given $(\delta,\rho_{0})$. The NC functionality is activated if $u^{act}(R,\delta,\rho_{0})\ge u_{0}$, where $u_{0}$ is a threshold. Note that, given the constraints, our virtual reliability functionality should self-adapt to the underlying (physical) channel and to hardware computational limitations.
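As a sketch of the optimization strategy above, the following Python routine searches for the coding rate $R=n/m$ that maximizes $u^{act}$ under the complexity constraint. The loss model is a simplifying assumption of ours: decoding succeeds, ideal-code style, when at least $n$ of the $m$ coded packets survive the per-hop erasures, and $\bar{\beta}^{enc}$ is taken as $m-n$, i.e., relative to the minimum-redundancy encoding $m=n+1$.

```python
import math

def p_r_nc(n, m, delta):
    # P_R^NC under an ideal-code assumption: decoding succeeds when at least
    # n of the m coded packets survive the per-hop erasures in delta.
    p = 1.0
    for d in delta:
        p *= 1.0 - d                         # per-packet end-to-end delivery prob.
    return sum(math.comb(m, k) * p**k * (1 - p)**(m - k) for k in range(n, m + 1))

def activate_nc(n, delta, rho0, w_nc, w_comp, beta0, u0=0.0, max_red=20):
    # Maximize u_act = w_NC*f_NC - w_comp*f_comp over R = n/m, subject to
    # beta_bar(R) <= beta0, with beta_bar = (m-n)*n*s / (1*n*s) = m - n.
    best = None
    for m in range(n + 1, n + max_red + 1):
        beta_bar = m - n
        if beta_bar > beta0:
            break                            # complexity/energy constraint
        f_nc = (p_r_nc(n, m, delta) - rho0) / rho0   # goodness term
        f_comp = math.log(1 + beta_bar)              # cost term
        u = w_nc * f_nc - w_comp * f_comp
        if best is None or u > best[0]:
            best = (u, n / m)
    if best is None or best[0] < u0:
        return None                          # keep the NC functionality deactivated
    return best                              # (maximum utility, optimal coding rate R)
```

For example, under these assumptions `activate_nc(10, (0.1,), rho0=0.8, w_nc=0.9, w_comp=0.1, beta0=8)` activates NC at a moderate rate, whereas a large `w_comp` drives the utility below threshold and returns `None`, mirroring the behavior discussed in the numerical results.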
### Numerical results
Our proposed optimization strategy, Eq. (4), identifies the optimal points that bring the highest benefit from the source’s viewpoint. In addition, it identifies the upper bound on the achievable performance for the configuration of the functionality given by $(\delta,\rho_{0},w)$.
Figure \[fig:Optimized-strategy\] shows the maximum utility and the respective $P_{R}^{NC}(R,\delta)$ for NC over 2-hop networks [@Saxena2.2015] for various values of $(\delta_{1},\rho_{0})$, with $\beta_{0}=1.4$, $\delta_{0}=0.2$, and $\delta_{1}\in[0, 0.4]$. We consider the optimization problem with small and large values of $w_{comp}$, respectively. In the case $w_{comp}=0.1$, when the channel condition is good (i.e., $\delta_{1}$ is small), the source should activate the NC functionality with a high coding rate in order to maximize its own utility, with the optimal $P_{R}^{NC}(R,\delta)$ around 0.8. As $\delta_{1}$ increases, the NC functionality should be activated with a lower coding rate, and $P_{R}^{NC}(R,\delta)$ then reaches 1. Under bad channel conditions (e.g., $\delta_{1}=0.4$), the maximum utility and the respective $P_{R}^{NC}(R,\delta)$ are very low due to the strong impact of the channel; in such cases, the source should deactivate the NC functionality. On the other hand, for $w_{comp}=0.8$ (i.e., a high cost in terms of complexity/energy), the results indicate that the maximum utility obtained is very small compared with the case of small $w_{comp}$. Moreover, the respective $P_{R}^{NC}(R,\delta)$ is small and decreases gradually as $\delta_{1}$ increases. Therefore, in these cases, the NC functionality should not be activated.
![Maximum utility and respective $P_{R}^{NC}(R,\delta)$ according to various values of $\rho_{0}$ and $\delta_{1},$ with $w_{comp}=0.1$ and $0.8$.\[fig:Optimized-strategy\]](PDR_MaxU.eps)
The upper bound on the achievable utility is determined by the complexity/energy constraint. It is then necessary to identify operative ranges of performance, so that the destination user is aware of, and admits, some variation in quality. The cognitive algorithm to identify the operative ranges is briefly described as follows: (1) the source identifies the maximum utility ($u_{max}^{act}(R,\delta,\rho_{0},w)$) and the respective $R$ and $P_{R}^{NC}(R,\delta)$ given $(\delta,\rho_{0},w)$; (2) the source then determines the rates $R$ that satisfy $u_{min}^{act}(.) \leq u^{act}(R,\delta,\rho_{0},w)\leq u_{max}^{act}(.)$, where $u_{min}^{act}(.)$ is a lower bound, e.g., $u_{min}^{act}(.)=0.8\,u_{max}^{act}(.)$; and (3) the source activates the NC functionality if the resulting range of performance is acceptable to users.
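Steps (1)-(2) of this algorithm reduce to a simple filter over the utility curve; a minimal Python sketch, with a dictionary interface of our own choosing:

```python
def operative_range(utilities, frac=0.8):
    # utilities: {R: u_act(R, ...)} computed over a grid of coding rates.
    # Returns the rates whose utility lies within `frac` of the maximum,
    # plus that maximum; step (3) then accepts or rejects the range.
    # Assumes the maximum utility is positive (NC worth activating at all).
    u_max = max(utilities.values())
    u_min = frac * u_max
    rates = sorted(R for R, u in utilities.items() if u_min <= u <= u_max)
    return rates, u_max
```

For instance, with the illustrative curve `{0.5: 0.05, 0.6: 0.09, 0.7: 0.08, 0.8: 0.02}` the operative range keeps rates 0.6 and 0.7, since only these fall within 80% of the maximum utility.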
![Operative ranges of performance with respect to (a) $\delta_{1}=0.03$ and (b) $\delta_{1}=0.2$, with $w_{comp}=0.1$.\[fig:BarPlot\]](BarPlot.eps)
Fig. \[fig:BarPlot\] shows the operative ranges of achievable performance for $m=50$, $L=100B$, and $\delta_{1}=0.03$ and $0.2$, respectively. We can observe that a larger value of $\delta_{1}$ requires smaller coding rates (i.e., a larger number of redundant packets) to obtain the highest benefit for the source. Furthermore, the results show that for large $\delta_{1}$, the range of $P_{R}^{NC}(R,\delta)$ for each value of $\rho_{0}$ is also smaller than for small $\delta_{1}$. As a final remark, we note that a larger $\delta_{1}$ requires lower coding rates, i.e., a higher cost in terms of computational complexity/energy consumption, to reach the optimal points.
In addition, we evaluate the computational complexity in terms of logic gates. The implementation of an addition and a multiplication over $GF(2^{q})$ corresponds to $q$ and $2q^{2}+2q$ gates, respectively [@Angelopoulos.2011]. Figure \[fig:Complexity\] shows the complexity in terms of logic gates for the operative ranges in Figure \[fig:BarPlot\]. A wider range of performance results in a larger difference between the complexity required for the minimum and maximum $P_{R}^{NC}(R,\delta)$. There is thus a tradeoff between network performance and computational complexity.
![Complexity in terms of number of logic gates according to $\delta_{1}=0.03$ and $\delta_{1}=0.2$, respectively.\[fig:Complexity\]](complexity.eps)
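Combining these gate counts with the operation counts $N^{mult}$ and $N^{add}$ from the analytical model gives a quick complexity estimate; a minimal Python sketch (the parameter values in the usage below are illustrative, not taken from the figures):

```python
def encoding_gates(n, m, L_bits, q=8):
    # Encoding (m-n) redundancy packets from n source packets of L_bits bits
    # over GF(2^q): one multiplication costs 2q^2 + 2q gates, one addition q gates.
    s = L_bits // q                      # packet length in GF(2^q) symbols
    n_mult = (m - n) * n * s             # N^mult = (m-n)*n*s
    n_add = (m - n) * (n - 1) * s        # N^add  = (m-n)*(n-1)*s
    return n_mult * (2 * q * q + 2 * q) + n_add * q
```

Lowering the coding rate at fixed $n$ increases $m-n$ and hence the gate count linearly, matching the performance/complexity tradeoff noted above.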
Gain in increase of connectivity\[sec:Gain\]
--------------------------------------------
The increase in connectivity is defined as the extension of communication range beyond cellular coverage given a requirement of reliability. We evaluate the potential of our solution for the increase of connectivity through simulation results. We assume that the packet erasure rate of every link is $0.03$, and we set $w_{comp}=0.1$ and $\rho_{0}=0.9$. The analysis of $P_{R}^{NC}(R,\delta)$ indicating the ability of the NC virtualized design to support geo-controlled network reliability is conducted in [@Tan.2015]. A great number of network devices is assumed to be uniformly distributed in a deployment area of $6$ $km$ $\times$ $6$ $km$. The ratio of the deployment area to the number of devices then defines the network density, i.e., the average area covered by a device. A base station (BS) is located at the center of the area with a maximum radio range of 1.5 km, and the maximum radio range of the WiFi signal on each device is assumed to be $50$ $m$. The number of devices is $102000$ and $150000$, respectively, i.e., 1 device per $350$ $m^{2}$ and 1 device per $250$ $m^{2}$. Owing to the limited radio range of the BS, the deployment area is divided into two regions: within cellular coverage and beyond cellular coverage.
Using statistics based on a uniform-symmetry assumption over the 2D coverage area, we identify the number of available neighbors for a given density. For a high, uniform device density in which all links have the same loss rate, we can compute the percentage of available relays around a device, as well as the number of hops available to each device; the same holds for every device in the deployment area, whereas for a low density this percentage decreases. This yields a law describing how $P_{R}^{NC}(R,\delta)$ varies with distance for different densities, each corresponding to a given loss rate and number of hops. The ratio of the short-range radio coverage to the device density determines the number of neighbors around a device, i.e. the relays that may help improve network reliability. Distance is normalized by the cellular coverage range in order to compare how far the coverage is extended beyond it.
Required $P_{R}^{NC}(R,\delta)$ 95 % 90 % 85 %
--------------------------------- ------- -------- --------
With Geo-NC 27 % 60 % 87.5 %
Without Geo-NC 1.5 % 12.5 % 20 %
Gain 18 4.8 4.3
: GAIN IN INCREASE OF CONNECTIVITY WITH/WITHOUT GEO-NC FOR A (HIGH) DENSITY OF 1 DEVICE PER 250 $m^{2}$.\[tab:Table3\]
Required $P_{R}^{NC}(R,\delta)$ 95 % 90 % 85 %
--------------------------------- ------ -------- --------
With Geo-NC 12 % 19.5 % 25.5 %
Without Geo-NC 1 % 9.5 % 16.5 %
Gain 12 2.1 1.6
: GAIN IN INCREASE OF CONNECTIVITY WITH/WITHOUT GEO-NC FOR A (LOW) DENSITY OF 1 DEVICE PER 350 $m^{2}.$\[tab:Table4\]
Focusing on the performance of geo-control, Tables \[tab:Table3\] and \[tab:Table4\] show the coverage extension achievable under given requirements on $P_{R}^{NC}(R,\delta)$ (i.e. reliability) for the two densities and the given loss rate. With NC functionality, the $90\%$ requirement can be met at a normalized distance of $1.6$, i.e. a $60\%$ extension beyond the cellular coverage in the case of 1 device per 250 $m^{2}$. The reason is that the spatial diversity created by up to 4 relays increases the opportunity for the network-coded packets to reach the destinations. For the lower device density of $1$ device per 350 $m^{2}$, communication services extend only $19.5\%$ beyond cellular coverage under the $90\%$ requirement, because only $3$ neighbors are available to help at each hop. Without geo-controlled NC functionality, on the other hand, the performance degrades dramatically with distance: the transmission range is extended only approximately $9.5\%$ beyond cellular coverage under the $90\%$ reliability constraint in the case of 1 device per 350 $m^{2}$. Overall, the simulation results show that our proposed solution provides up to $18$ times and $12$ times gain in connectivity compared to transmission without geo-NC at $95\%$ reliability, for the high and low device densities, respectively.
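The gains in Tables \[tab:Table3\] and \[tab:Table4\] are simply the ratios of the coverage extensions with and without Geo-NC. A minimal sketch reproducing them from the tabulated values (the tabulated gains appear rounded to one decimal):

```python
# Coverage extension beyond cellular range (in %), read from Tables 3 and 4,
# keyed by the required reliability level.
high_density = {  # 1 device per 250 m^2
    95: (27.0, 1.5),    # (with Geo-NC, without Geo-NC)
    90: (60.0, 12.5),
    85: (87.5, 20.0),
}
low_density = {   # 1 device per 350 m^2
    95: (12.0, 1.0),
    90: (19.5, 9.5),
    85: (25.5, 16.5),
}

def gains(table):
    """Connectivity gain = extension with Geo-NC / extension without Geo-NC."""
    return {req: with_nc / without_nc for req, (with_nc, without_nc) in table.items()}

print(gains(high_density))  # {95: 18.0, 90: 4.8, 85: 4.375}   (table: 18, 4.8, ~4.3)
print(gains(low_density))   # {95: 12.0, 90: 2.05..., 85: 1.54...} (table: 12, 2.1, ~1.6)
```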
Conclusions \[sec:CONC\]
========================
In this paper, by analyzing the NC virtualized design in supporting geo-controlled network reliability, we have shown the applicability of NCFV to improving network reliability beyond cellular coverage, with geographical information as the key enabler. This work is a first step towards the integration of NC virtualization into future network systems, including cooperative systems with heterogeneous network devices.
Acknowledgment {#acknowlegment .unnumbered}
=============
This research was financially supported by H2020 GEO-VISION - GNSS driven EO and Verifiable Image and Sensor Integration for mission-critical Operational Networks (project reference 641451).
---
abstract: 'Let $(X,{\mathcal A},\mu)$ be a probability space and let $S\colon X\to X$ be a measurable transformation. Motivated by the paper of K. Nikodem \[Czechoslovak Math. J. 41(116) (4) (1991) 565–569\], we concentrate on a functional equation generating measures that are absolutely continuous with respect to $\mu$ and $\varepsilon$-invariant under $S$. As a consequence of the investigation, we obtain a result on the existence and uniqueness of solutions $\varphi\in L^1([0,1])$ of the functional equation $$\varphi(x)=\sum_{n=1}^{N}|f_n'(x)|\varphi(f_n(x))+g(x),$$ where $g\in L^1([0,1])$ and $f_1,\ldots,f_N\colon[0,1]\to[0,1]$ are functions satisfying some extra conditions.'
address:
- |
Instytut Matematyki\
Uniwersytet Śląski\
Bankowa 14, PL-40-007 Katowice\
Poland
- |
Instytut Matematyki\
Uniwersytet Śląski\
Bankowa 14, PL-40-007 Katowice\
Poland
author:
- Janusz Morawiec
- Thomas Zürcher
bibliography:
- 'Invariant.bib'
title: 'An application of functional equations for generating $\varepsilon$-invariant measures'
---
Introduction
============
The aim of this paper is to study the problem of the existence of solutions $\varphi\in L^1(X)$ of the following equation $$\label{E}
\varphi=P\varphi+g,$$ where $g\in L^1(X)$ and $P\colon L^1(X)\to L^1(X)$ are given. In the rest of the introduction, we let $X=[0,1]$ be equipped with the Borel $\sigma$-algebra and the Lebesgue measure. The motivation for studying such a problem is twofold. An original impulse for our investigation came from the paper [@N1991], where integrable solutions of the equation $$\label{e0}
\varphi(x)=\frac{1}{2}\varphi\left(\frac{x}{2}\right)+\frac{1}{2}\varphi\left(\frac{x+1}{2}\right)+g(x)$$ were investigated in connection with $\varepsilon$-invariant measures under the $2$-adic transformation. The next section contains more details on $\varepsilon$-invariant measures and on functional equations associated with them. Let us note here only that equation $\eqref{e0}$ is a very particular case of an interesting functional equation of the form $$\label{e}
\varphi(x)=\sum_{n=1}^{N}|f_n'(x)|\varphi(f_n(x))+g(x).$$ We will always assume that $N\geq 2$.
The second inspiration to study integrable solutions of equation $\eqref{e}$, and hence also of $\eqref{e0}$, is strictly connected with a problem posed by Janusz Matkowski in [@M1985] and a question posed during the 47th International Symposium on Functional Equations by Jacek Wesołowski in connection with probability measures investigated in [@MW2012]. Namely, assume that $f_1,\ldots,f_N\colon[0,1]\to[0,1]$ are strictly increasing contractions satisfying $$\label{c2}
f_n((0,1))\cap f_m((0,1))=\emptyset\quad\hbox{for all }n\neq m$$ and consider the class $\mathcal C$ consisting of all increasing and continuous functions $\phi\colon [0,1]\to [0,1]$ such that $\phi(0)=0$, $\phi(1)=1$ and $$\phi(x)=\sum_{n=1}^{N}\phi(f_n(x))-\sum_{n=1}^{N}\phi(f_n(0)).$$ Wide classes of singular functions belonging to the class $\mathcal C$ were constructed in [@MZ2018; @MZ]. So far we have some idea of what singular functions from the class $\mathcal C$ look like, but we do not know much about absolutely continuous functions from $\mathcal C$. However, we know that under a weak assumption on the functions $f_n$, each function belonging to the class $\mathcal C$ can be expressed as a convex combination of absolutely continuous and singular functions from $\mathcal C$. Finally, observe that (again under weak assumptions on the functions $f_n$) absolutely continuous functions belonging to the class $\mathcal C$ are in one-to-one correspondence with densities satisfying $\eqref{e}$ with $g=0$.
Preliminaries
=============
In this section we explain more precisely our motivation for studying integrable solutions of equation $\eqref{e0}$, as well as of its generalization $\eqref{e}$. We begin by recalling some definitions and results used in the main part of this paper.
Throughout this paper we assume that $(X,{\mathcal A},\mu)$ is a probability space and $S\colon X\to X$ is a measurable transformation.
In the case where $X$ is a Borel subset of $\mathbb R$ we assume that $\mathcal{A}$ is the $\sigma$-algebra $\mathcal B(X)$ of all Borel subsets of $X$ and $\mu$ is the Lebesgue measure restricted to $\mathcal B(X)$.
We say that $S$ is *nonsingular* if $\mu(S^{-1}(A))=0$ for every $A\in\mathcal A$ such that $\mu(A)=0$. We say that $S$ is *measure preserving* if $\mu(S^{-1}(A))=\mu(A)$ for every $A\in\mathcal A$; we will alternately say that the measure $\mu$ is *invariant* under $S$ if $S$ is measure preserving. Observe that every measure preserving transformation is nonsingular.
Fix a real number $\varepsilon\geq 0$. A probability measure $\nu$ defined on ${\mathcal A}$ is said to be *$\varepsilon$-invariant under $S$* if $$|\nu(S^{-1}(A))-\nu(A)|\leq\varepsilon\mu(A)$$ for every $A\in{\mathcal A}$. It is clear that every measure that is $0$-invariant under $S$ is invariant under $S$, and so the concept of measures $\varepsilon$-invariant under $S$ generalizes the notion of measures invariant under $S$.
From now on, for a nonsingular $S$ we denote by $P_S$ the corresponding *Frobenius–Perron operator*, i.e. $P_S\colon L^1(X)\to L^1(X)$ is the operator uniquely defined by the equation $$\label{int}
\int_A P_Sf(x)d\mu(x)=\int_{S^{-1}(A)}f(x)d\mu(x)\quad\hbox{for every }A\in{\mathcal A}.$$ The operator $P_S$ is linear and continuous. If $S$ is nonsingular, then every $m$-th iterate $S^m$ of $S$ is also nonsingular and the Frobenius-Perron operator corresponding to $S^m$ is the $m$-th iterate $P_S^m$ of $P_S$. Here and throughout we adopt the convention that $P^0={\rm id}_{X}$ for every operator $P\colon L^1(X)\to L^1(X)$. In the case where $X=[0,1]$ and $\mu$ is the one-dimensional Lebesgue measure the Frobenius–Perron operator corresponding to $S$ can be written explicitly as follows $$\label{explicit}
P_Sf(x)=\frac{d}{dx}\int_{S^{-1}([0,x])}f(y)\,dy$$ (see [@LM1994 Formula 1.2.7]).
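As a numerical sanity check of formula $\eqref{explicit}$ (our own illustration, not from the text), consider the dyadic map $S(x)=2x \bmod 1$ on $[0,1]$ with Lebesgue measure. Here $S^{-1}([0,x])=[0,x/2]\cup[1/2,(1+x)/2]$, so $\eqref{explicit}$ gives $P_Sf(x)=\frac{1}{2}f(x/2)+\frac{1}{2}f\big(\frac{x+1}{2}\big)$. The sketch compares a finite-difference evaluation of the right-hand side of $\eqref{explicit}$ with this closed form, for the arbitrary test function $f(x)=x^2$:

```python
# P_S f via the explicit formula: differentiate x -> integral of f over
# S^{-1}([0,x]) for the dyadic map S(x) = 2x mod 1, and compare with the
# closed form (1/2) f(x/2) + (1/2) f((x+1)/2).

def f(x):
    return x * x

def F(x):
    # Antiderivative of f with F(0) = 0.
    return x ** 3 / 3.0

def integral_over_preimage(x):
    # S^{-1}([0, x]) = [0, x/2] U [1/2, (1+x)/2] for the dyadic map.
    return F(x / 2.0) + (F((1.0 + x) / 2.0) - F(0.5))

def P_f_numeric(x, h=1e-6):
    # Central finite difference of the preimage integral (formula (explicit)).
    return (integral_over_preimage(x + h) - integral_over_preimage(x - h)) / (2 * h)

def P_f_closed(x):
    return 0.5 * f(x / 2.0) + 0.5 * f((x + 1.0) / 2.0)

xs = [0.1, 0.25, 0.5, 0.75, 0.9]
err = max(abs(P_f_numeric(x) - P_f_closed(x)) for x in xs)
print(err < 1e-8)  # the two evaluations agree
```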
A useful tool for studying measures $\varepsilon$-invariant under nonsingular $S$ that are absolutely continuous with respect to $\mu$ reads as follows.
\[thm21\] A finite measure $\nu$ that is absolutely continuous with respect to $\mu$ is $\varepsilon$-invariant under a nonsingular $S$ if and only if the Radon–Nikodym derivative $f$ of $\nu$ with respect to $\mu$ satisfies $$|P_Sf(x)-f(x)|\leq\varepsilon\quad\hbox{for $\mu$-almost all }x\in X.$$
In the case where $\varepsilon=0$ in Theorem \[thm21\], we recognize the well-known fact that a measure $\nu$ absolutely continuous with respect to $\mu$ is invariant under a nonsingular $S$ if and only if the Radon–Nikodym derivative $f$ of $\nu$ with respect to $\mu$ is a fixed point of the Frobenius–Perron operator $P_S$ (see [@LM1994 Theorem 4.1.1]).
From Theorem \[thm21\] we see that to find measures $\varepsilon$-invariant under a nonsingular transformation $S$ it is enough to solve, in the space $L^1(X)$, equation $\eqref{E}$ with $P=P_S$ and $|g|\leq\varepsilon$. Therefore, we are interested in finding solutions $\varphi\in L^1(X)$ of the following special case of equation $\eqref{E}$: $$\label{PE}
\varphi=P_S\varphi+g.$$ We will do it for a wide class of important transformations. But before we introduce this class, observe that, according to $\eqref{int}$, a necessary condition for $g\in L^1(X)$ in order that equation $\eqref{PE}$ have a solution $\varphi\in L^1(X)$ is $$\label{g}
\int_Xg(x)d\mu(x)=0.$$
A measure preserving $S$ such that $S(A)\in\mathcal A$ for every $A\in\mathcal A$ is said to be *exact* if $\lim_{m\to\infty}\mu\big(S^m(A)\big)=1$ for every $A\in\mathcal A$ such that $\mu(A)>0$. The following characterization of exactness is well known.
\[thm22\] A measure preserving $S$ is exact if and only if for every $f\in L^1(X)$ the sequence $(P_S^mf)_{m\in\mathbb N}$ converges in $L^1(X)$ to $\int_Xf(x)d\mu(x)$.
A generalization of a result of Nikodem
====================================
We begin this section with a result on the existence of integrable solutions of equation $\eqref{PE}$; its proof closely follows that of Theorem 2 from [@N1991], but for the convenience of the reader we repeat it.
\[thm31\] Assume that $g\in L^1(X)$, $S$ is exact and $P_S1=1$. Then equation $\eqref{PE}$ has a solution in $L^1(X)$ if and only if the series $\sum_{m=0}^{\infty}P_S^m g$ converges in $L^1(X)$. Moreover, every solution $\varphi\in L^1(X)$ of equation $\eqref{PE}$ is of the form $$\varphi=\sum_{m=0}^{\infty}P_S^m g+c,$$ where $c$ is a real constant.
Assume first that the series $\sum_{m=0}^{\infty}P_S^m g$ converges in $L^1(X)$. Fix a real constant $c$ and set $\varphi=\sum_{m=0}^{\infty}P_S^m g+c$. The linearity and continuity of $P_S$ jointly with the equality $P_S1=1$ imply $$P_S\varphi+g=\sum_{m=0}^{\infty}P_S^{m+1}g+P_Sc+g=\sum_{m=0}^{\infty}P_S^m g+c=\varphi.$$
Assume now that $\varphi\in L^1(X)$ satisfies $\eqref{PE}$. Then, by the linearity of $P_S$, we have $$P_S^k\varphi=P_S^{k+1}\varphi+P_S^kg$$ for every $k\in\mathbb N$. Summing the above equation over $k=0,\ldots,m$ leads to $$\sum_{k=0}^m P_S^k g=\varphi-P_S^{m+1}\varphi$$ for every $m\in\mathbb N$. Finally, passing with $m$ to $\infty$ and making use of Theorem \[thm22\], we conclude that the series $\sum_{k=0}^\infty P_S^k g$ converges in $L^1(X)$ and that $$\sum_{k=0}^\infty P_S^k g=\varphi-\int_X\varphi(x)d\mu(x),$$ which completes the proof.$\square$
Now we define a transformation $S$ whose Frobenius–Perron operator appears in equation $\eqref{e}$. For this purpose put $X=[0,1]$ and fix strictly monotone functions $f_1,\ldots,f_N\colon[0,1]\to[0,1]$ satisfying $\eqref{c2}$. Note that the functions $f_n$ are differentiable almost everywhere on $[0,1]$ as they are monotone. Now define the announced transformation $S\colon[0,1]\to[0,1]$ by putting $$\label{S}
S(x)=\begin{cases}
f_n^{-1}(x)&\hbox{ for }x\in f_n((0,1))\hbox{ and }n\in\{1,\ldots,N\},\\
0&\hbox{ for }x\not\in\bigcup_{n=1}^{N}f_n((0,1)).
\end{cases}$$ Clearly, $S$ is well defined by $\eqref{S}$. Moreover, it is nonsingular provided that all the functions $f_1,\ldots,f_N$ satisfy Luzin’s condition N (or equivalently all the inverses $f_1^{-1},\ldots,f_N^{-1}$ are nonsingular) and $$\label{c1}
\bigcup_{n=1}^{N}f_n([0,1])=[0,1].$$ Applying formula $\eqref{explicit}$ we see that the Frobenius–Perron operator $P_S$ corresponding to a nonsingular $S$ defined by formula $\eqref{S}$ is of the form $$\label{PS}
P_S f=\sum_{n=1}^{N}|f_n'|\big(f\circ f_n\big).$$ Therefore, in the case where $S$ is defined by formula $\eqref{S}$, equation $\eqref{PE}$ reduces to equation $\eqref{e}$ and Theorem \[thm31\] implies the following result.
\[cor32\] Assume that $g\in L^1([0,1])$ and $f_1,\ldots,f_N\colon[0,1]\to[0,1]$ are strictly monotone nonsingular functions satisfying $\eqref{c2}$, $\eqref{c1}$ and $$\label{f'}
\sum_{n=1}^{N}|f_n'(x)|=1\quad\hbox{ for almost all }x\in[0,1].$$ If $S\colon[0,1]\to[0,1]$ defined by formula $\eqref{S}$ is exact, then equation $\eqref{e}$ has a solution in $L^1([0,1])$ if and only if the series $$\label{formula}
\sum_{m=1}^{\infty}\sum_{n_1,\ldots,n_m=1}^N\left(\prod_{k=1}^{m}|f_{n_k}'\circ f_{n_{k-1}}\circ\cdots\circ f_{n_1}|\right)g\circ f_{n_m}\circ f_{n_{m-1}}\circ\cdots\circ f_{n_1}$$ converges in $L^1([0,1])$. Moreover, every solution $\varphi\in L^1([0,1])$ of equation $\eqref{e}$ is of the form $$\varphi=g+\!\sum_{m=1}^{\infty}\sum_{n_1,\ldots,n_m=1}^N\!\!\left(\prod_{k=1}^{m}|f_{n_k}'\!\circ\! f_{n_{k-1}}\!\circ\cdots\circ\! f_{n_1}|\!\!\right)\!g\circ f_{n_m}\circ f_{n_{m-1}}\circ\cdots\circ f_{n_1}+c,$$ where $c$ is a real constant.
There is a known criterion (close to a characterization) for exactness of nonsingular transformations (see [@LM1994 Proposition 5.6.2 and Remark 5.6.1 on p. 111[^1]]). A wide class of interesting examples of exact transformations defined by formula $\eqref{S}$, including transformations studied by Rényi in [@R1957] and by Rohlin in [@Ro1961] (see also [@rohlin1964exact]), can be found in [@LM1994 Chapter 6.2]. Now we give one of the simplest possible realizations of the assumptions of Corollary \[cor32\]. For this purpose, fix non-zero real numbers $\alpha_1,\ldots,\alpha_N$ such that $$\label{1}
\sum_{n=1}^N|\alpha_n|=1$$ and put $$\beta_n=\sum_{k=1}^{n}|\alpha_k|-\frac{|\alpha_{n}|+\alpha_{n}}{2},$$ $$\label{affine}
f_n(x)=\alpha_n x+\beta_n$$ for all $x\in[0,1]$ and $n\in\{1,\ldots,N\}$. Clearly, $\eqref{1}$ implies $\eqref{f'}$. Moreover, it is well known that in the considered case the transformation defined by formula $\eqref{S}$ is exact (see [@LM1994 Theorem 6.2.1, Definition 5.6.2 and Proposition 5.6.2]). Thus by Corollary \[cor32\] we conclude that any solution $\varphi\in L^1([0,1])$ of the equation $$\label{e1}
\varphi(x)=\sum_{n=1}^{N}|\alpha_n|\varphi\left(\alpha_n x+\beta_n\right)+g(x)$$ is of the form $$\begin{aligned}
\label{sol}
\varphi(x)=&\sum_{m=1}^{\infty}\sum_{n_1,\ldots,n_m=1}^N\left(\prod_{k=1}^{m}|\alpha_{n_k}|\right)g\left(\prod_{i=1}^{m}\alpha_{n_i}x+\sum_{i=1}^{m}\beta_{n_i}\prod_{j=i+1}^{m}\alpha_{n_j}\right)\nonumber\\
&+g(x)+c,\end{aligned}$$ where $c\in\mathbb R$; here we adopt the convention that $\prod_{j=m+1}^{m}a_{j}=1$ for all $m\in\mathbb N$ and $a_j\in\mathbb R$.
Finally, note that Corollary \[cor32\] reduces to Theorem 2 from [@N1991] in the case where $N=2$ and $f_1,f_2$ are given by $\eqref{affine}$ with $\alpha_1=\alpha_2=\frac{1}{2}$.
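A concrete illustration of this Nikodem case (a sketch of our own, not from [@N1991]): for $N=2$ and $\alpha_1=\alpha_2=\frac{1}{2}$ the operator is $(Pf)(x)=\frac{1}{2}f(x/2)+\frac{1}{2}f\big(\frac{x+1}{2}\big)$, and for the illustrative choice $g(x)=x-\frac{1}{2}$ one finds $P^mg=2^{-m}g$, so the series from Theorem \[thm31\] sums to $2g$ and $\varphi(x)=2x-1$ solves $\eqref{e0}$ (up to an additive constant). The sketch verifies both claims numerically:

```python
# Nikodem case N = 2, alpha_1 = alpha_2 = 1/2:
# (P f)(x) = (1/2) f(x/2) + (1/2) f((x+1)/2), sample inhomogeneity g(x) = x - 1/2.

def P(f):
    return lambda x: 0.5 * f(x / 2.0) + 0.5 * f((x + 1.0) / 2.0)

def g(x):
    return x - 0.5

def phi(x):
    return 2.0 * x - 1.0   # candidate solution (additive constant c = 0)

xs = [0.0, 0.2, 0.5, 0.7, 1.0]
# One application halves g: (P g)(x) = g(x)/2, hence P^m g = 2^{-m} g
assert all(abs(P(g)(x) - 0.5 * g(x)) < 1e-12 for x in xs)
# so the series sums to 2g, and phi = P(phi) + g indeed holds:
assert all(abs(P(phi)(x) + g(x) - phi(x)) < 1e-12 for x in xs)
print("ok")
```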
Examples
========
In this section we give four examples of measures $\varepsilon$-invariant under a given nonsingular transformation. The first two examples will be constructed by hand, whereas in the next two we will apply Theorem \[thm31\] and Corollary \[cor32\].
We begin with the general observation that every measure invariant under a nonsingular transformation $S$ generates a large class of measures that are $\varepsilon$-invariant under $S$.
\[ex1\] Assume that $S$ is nonsingular and $\mu$ is invariant under $S$. Fix sets $A_1,\ldots,A_m\in\mathcal A$ and real numbers $\varepsilon, \varepsilon_1,\ldots,\varepsilon_m>0$ such that $\varepsilon=\sum_{i=1}^m\varepsilon_i$. Next define a finite measure $\nu$ on $\mathcal A$ by putting $$\nu(A)=\sum_{i=1}^m\varepsilon_i\mu(A\cap A_i).$$ Then observe that for every $A\in{\mathcal A}$ we have $$\begin{aligned}
\left|\nu\big(S^{-1}(A)\big)-\nu(A)\right|&\leq\sum_{i=1}^m\varepsilon_i\left|\mu\big(S^{-1}(A)\cap A_i\big)-\mu(A\cap A_i)\right|\\
&\leq\sum_{i=1}^m\varepsilon_i\max\left\{\mu\big(S^{-1}(A)\cap A_i\big),\mu(A\cap A_i)\right\}\\
&\leq\sum_{i=1}^m\varepsilon_i\max\left\{\mu\big(S^{-1}(A)\big),\mu(A)\right\}=\varepsilon\mu(A).\end{aligned}$$
Note that if $x_1,\ldots,x_m$ are fixed points of a nonsingular $S$, then every convex combination of the Dirac measures $\delta_{x_1},\ldots,\delta_{x_m}$ is a probability measure that is invariant under $S$. Thus Example \[ex1\] shows that any transformation defined by formula $\eqref{S}$ with strictly monotone functions $f_1,\ldots,f_N$ satisfying $\eqref{c2}$ and $\eqref{c1}$, and sending sets of measure zero to sets of measure zero, has $\varepsilon$-invariant measures.
The next example is of a similar type to the first one; the difference is that we do not assume the existence of a measure that is invariant under $S$. It concerns transformations defined by formula $\eqref{S}$ with $X=[0,1)$ and $f_n$ of the form $\eqref{affine}$.
\[ex2\] Fix positive real numbers $\alpha_1,\ldots,\alpha_N$ satisfying $\eqref{1}$, and for every $n\in\{1,\ldots,N\}$ let the function $f_n$ be given by $\eqref{affine}$. Then put $X=[0,1)$ and consider the transformation $S\colon [0,1)\to[0,1)$ defined by $\eqref{S}$. Obviously, $S$ is nonsingular.
For all $k\in\mathbb N$ and $n_1,\ldots,n_k\in\{1,\ldots,N\}$ we put $$I_{n_k,\ldots,n_1}=[f_{n_k}\circ\cdots\circ f_{n_1}(0),f_{n_k}\circ\cdots\circ f_{n_1}(1)).$$ A simple induction, using the strict increasingness of $f_1,\ldots,f_N$ together with $\eqref{c2}$ and $\eqref{1}$, shows that for every $k\in\mathbb N$ and $n_1,\ldots,n_k,m_1,\ldots,m_k\in\{1,\ldots,N\}$ we have $$\label{w3}
I_{n_k,\ldots,n_1}\cap I_{m_k,\ldots,m_1}=\emptyset\quad\hbox{ for }(n_k,\ldots,n_1)\neq (m_k,\ldots,m_1),$$ $$\label{w1}
I_{n_k,\ldots,n_1}=\bigcup_{n=1}^{N}I_{n_k,\ldots,n_1,n},$$ $$\label{w2}
\bigcup_{n_1,\ldots,n_k=1}^{N}I_{n_k,\ldots,n_1}=[0,1).$$
Next for every $k\in\mathbb N$ we put $$\mathcal S_k=\big\{I_{n_k,\ldots,n_1}:n_1,\ldots,n_k\in\{1,\ldots,N\}\big\}\quad\hbox{and}\quad \mathcal S_0=\{[0,1)\}.$$ It is easy to see that the family $\mathcal S=\bigcup_{k=0}^\infty\mathcal S_k\cup\{\emptyset\}$ is a semi-algebra of subsets of the interval $[0,1)$; i.e. $[0,1)\in\mathcal S$ and $I,J\in\mathcal S$ implies $I\cap J\in\mathcal S$ and $[0,1)\setminus I$ can be expressed as a finite disjoint union of sets in $\mathcal S$ (see [@P1977 Definition 1.4.1]). Note that the $\sigma$-algebra generated by the semi-algebra $\mathcal S$ coincides with the family ${\mathcal B}([0,1))$ of all Borel subsets of the interval $[0,1)$.
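The partition structure $\eqref{w3}$–$\eqref{w2}$ is easy to check numerically. A minimal sketch (our own, with the illustrative choice $N=2$, $\alpha_1=0.4$, $\alpha_2=0.6$) builds the intervals $I_{n_k,\ldots,n_1}$ for the affine maps $\eqref{affine}$ and verifies that, at each level $k$, they are pairwise disjoint and cover $[0,1)$:

```python
from itertools import product

alphas = [0.4, 0.6]   # illustrative positive weights summing to 1, cf. (1)
betas = [sum(alphas[:n]) for n in range(len(alphas))]  # beta_n = alpha_1 + ... + alpha_{n-1}

def f(n, x):
    # The affine map f_n of (affine), 0-indexed here.
    return alphas[n] * x + betas[n]

def interval(word):
    # I_{n_k,...,n_1}: image of [0,1) under the composition, f_{n_1} applied first.
    lo, hi = 0.0, 1.0
    for n in word:
        lo, hi = f(n, lo), f(n, hi)
    return (lo, hi)

def level(k):
    # All intervals of S_k, sorted by left endpoint.
    return sorted(interval(w) for w in product(range(len(alphas)), repeat=k))

for k in range(1, 6):
    ivs = level(k)
    assert abs(sum(hi - lo for lo, hi in ivs) - 1.0) < 1e-12   # total length 1, cf. (w2)
    assert all(ivs[i][1] <= ivs[i + 1][0] + 1e-12
               for i in range(len(ivs) - 1))                   # pairwise disjoint, cf. (w3)
print("ok")
```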
Fix $\varepsilon\in[0,1]$, two different numbers $p,q\in\{1,\ldots,N\}$ and define a function $\xi\colon\{1,\ldots,N\}\to\mathbb R$ by putting $$\xi(n)=\begin{cases}
\varepsilon\min\{\alpha_p,\alpha_q\}&\hbox{ for }n=p,\\
-\varepsilon\min\{\alpha_p,\alpha_q\}&\hbox{ for }n=q,\\
0&\hbox{ for }n\not\in\{p,q\}.
\end{cases}$$ Clearly, $$\label{xi}
\sum_{n=1}^{N}\xi(n)=0.$$ Now we want to define a probability measure $\nu_0$ on $\mathcal S$; i.e. a function $\nu_0\colon\mathcal S\to[0,1]$ such that $\nu_0(\emptyset)=0$ and $\nu_0(\bigcup_{n\in\mathbb N}J_n)=\sum_{n\in\mathbb N}\nu_0(J_n)$ for all pairwise disjoint elements $(J_n)_{n\in\mathbb N}$ of $\mathcal S$ with $\bigcup_{n\in\mathbb N}J_n\in\mathcal S$ (see [@P1977 Section 2.3]). We will do it inductively.
In the first step we define $\nu_0$ on $\mathcal S_0\cup\mathcal S_1$ by putting $$\nu_0([0,1))=1\quad \quad\hbox{ and }\quad\nu_0(I_n)=\alpha_n+\xi(n).$$ It is clear that $\nu_0(I_n)\in[0,1]$ for every $I_n\in\mathcal S_1$. Moreover, $\eqref{1}$ and $\eqref{xi}$ imply $$\label{step1}
\nu_0\left([0,1)\right)=1=\sum_{n=1}^{N}\alpha_n=\sum_{n=1}^{N}\nu_0\left(I_{n}\right),$$ and we see (according to $\eqref{w3}$ and $\eqref{w2}$ with $k=1$) that $\nu_0$ is well defined on $\mathcal S_0\cup\mathcal S_1$.
To do the inductive step we assume that for a fixed $k\in\mathbb N$ we have defined $\nu_0$ on $\bigcup_{n=0}^{k}\mathcal S_n$. Now we define $\nu_0$ on $\mathcal S_{k+1}$ by putting $$\nu_0(I_{n_{k+1},\ldots,n_1})=\big(\alpha_{n_{k+1}}+\xi(n_{k+1})\big)\prod_{i=1}^{k}\alpha_{n_i}.$$ (Note that adopting the convention that $\prod_{i=1}^{0}\alpha_{n_i}=1$ the above formula for $\nu_0$ coincides with that from the first step.) Applying $\eqref{1}$ we obtain $$\label{suma}
\sum_{n=1}^{N}\nu_0(I_{n_{k},\ldots,n_1,n})=\big(\alpha_{n_k}+\xi(n_k)\big)\prod_{i=1}^{k-1}\alpha_{n_i}\sum_{n=1}^{N}\alpha_n=\nu_0(I_{n_{k},\ldots,n_1})$$ for every $I_{n_{k},\ldots,n_1}\in\mathcal S_k$, which means (according to $\eqref{w3}$ and $\eqref{w1}$) that $\nu_0$ is well defined on $\bigcup_{n=0}^{k+1}\mathcal S_n$.
Finally, putting $\nu_0(\emptyset)=0$ we have defined $\nu_0$ on $\mathcal S$.
It remains to prove that $\nu_0$ is $\sigma$-additive. For this purpose fix a pairwise disjoint sequence $(J_m)_{m\in\mathbb N}$ of elements of the semi-algebra $\mathcal S$ such that $\bigcup_{m\in\mathbb N}J_m=J\in\mathcal S$. It simplifies the argument, and causes no loss of generality, to assume $J=[0,1)$. Thus we need to show that $\sum_{m\in\mathbb N}\nu_0(J_m)=1$.
Define a nondecreasing sequence $(k_m)_{m\in\mathbb N}$ of integers in such a way that $\{J_1,\ldots,J_m\}\subset\bigcup_{k=1}^{k_m}\mathcal S_k$ for every $m\in\mathbb N$. Note that for all $m\in\mathbb N$, $l\in\{1,\ldots,m\}$ and $I\in \mathcal S_{k_m}$, we have either $J_l\cap I=\emptyset$ or $J_l\cap I=I$. Next for every $m\in\mathbb N$ put $$\mathcal D_m=\left\{I\in \mathcal S_{k_m}: I\subset\bigcup_{l=1}^{m}J_l\right\}\quad\hbox{ and }\quad
d_m=\sum_{I\in\mathcal S_{k_m}\setminus\mathcal D_m}\nu_0(I).$$ Making use of $\eqref{w1}$, $\eqref{step1}$ and $\eqref{suma}$ we conclude that $$\sum_{l=1}^m\nu_0(J_l)=\sum_{I\in \mathcal D_m}\nu_0(I)=\sum_{I\in\mathcal S_{k_m}}\nu_0(I)-d_m=1-d_m.$$ Now it is enough to show that $$\label{dm}
\lim_{m\to\infty}d_m=0.$$
By the definition of $\nu_0$ it is easy to see that for every $I\in\mathcal S$ we have $$\nu_0(I)\leq 2l(I);$$ here and later on the symbol $l$ denotes the Lebesgue measure on the real line. This jointly with $\eqref{w3}$ yields $$d_m\leq 2\sum_{I\in\mathcal S_{k_m}\setminus\mathcal D_m}l(I)=2l\left(\bigcup_{I\in\mathcal S_{k_m}\setminus\mathcal D_m}I\right)=2l\left(\bigcup_{l=m+1}^{\infty} J_l\right).$$ Passing with $m$ to $\infty$ we get $\eqref{dm}$.
Thus we have proved that $\nu_0$ is a probability measure defined on the semi-algebra $\mathcal S$.
Extend $\nu_0$ to a probability measure $\nu\colon\mathcal B([0,1))\to[0,1]$; such an extension exists and it is unique (see [@P1977 Corollary 2.4.9 and Proposition 2.5.1]).
Now we will show that the measure $\nu$ is $\varepsilon$-invariant under $S$.
For this purpose fix $I_{n_k,\ldots,n_1}\in \mathcal{S}_k$. Then $$S^{-1}(I_{n_k,\ldots,n_1})=\bigcup_{n=1}^{N}I_{n,n_k,\ldots,n_1},$$ which jointly with $\eqref{1}$ and $\eqref{xi}$ implies $$\begin{aligned}
\nu\big(S^{-1}(I_{n_k,\ldots,n_1})\big)&=\sum_{n=1}^{N}\nu_0(I_{n,n_k,\ldots,n_1})=\prod_{i=1}^{k}\alpha_{n_i}\sum_{n=1}^{N}\big(\alpha_{n}+\xi(n)\big)\\
&=\prod_{i=1}^{k}\alpha_{n_i}=\nu(I_{n_k,\ldots,n_1})-\xi(n_k)\prod_{i=1}^{k-1}\alpha_{n_i}.\end{aligned}$$ In consequence, $$\label{nuS}
\left|\nu\big(S^{-1}(I_{n_k,\ldots,n_1})\big)-\nu(I_{n_k,\ldots,n_1})\right|=|\xi(n_k)|\prod_{i=1}^{k-1}\alpha_{n_i}\leq\varepsilon l(I_{n_{k},\ldots,n_1}).$$
Fix a set $B\in{\mathcal B}([0,1))$, a number $\delta>0$ and choose a countable family $\{F_j:j\in J\}$ of pairwise disjoint elements of the semi-algebra $\mathcal S$ such that $\bigcup_{j\in J}F_j\subset B$, $$\left|\nu(B)-\nu\left(\bigcup_{j\in J}F_j\right)\right|\leq\delta
\;\:\hbox{and}\;\:
\left|\nu(S^{-1}(B))-\nu\left(S^{-1}\left(\bigcup_{j\in J}F_j\right)\right)\right|\leq\delta;$$ such a family exists, because on any complete separable metric space any finite Borel measure is regular (see [@Dud2002 Theorem 7.1.4]). Then making use of $\eqref{nuS}$ we get $$\begin{aligned}
\left|\nu\big(S^{-1}(B)\big)-\nu(B)\right|&\leq
\sum_{j\in J}\left|\nu\left(S^{-1}(F_j)\right)-\nu(F_j)\right|+2\delta\\
&\leq\varepsilon\sum_{j\in J} l(F_j)+2\delta
\leq\varepsilon l(B)+2\delta.\end{aligned}$$ Finally, letting $\delta$ tend to $0$ we conclude that the measure $\nu$ is $\varepsilon$-invariant under $S$.
To give the next two examples we need the following observation.
\[rem43\]$ $
1. If $\varphi$ solves $\eqref{PE}$, then $\int_{X}g(x)d\mu(x)=0$.
2. If $P_Sg=0$, then $\int_{X}g(x)d\mu(x)=0$.
3. If $P_Sg=0$, then $P_S^mg=0$ for every $m\in\mathbb N$.
The first two statements are an immediate consequence of $\eqref{int}$, whereas the third one follows from the linearity of $P_S$.$\square$
Remark \[rem43\] helps us to apply Theorem \[thm31\]. Indeed, assertion (i) says that to find an integrable solution of $\eqref{PE}$ we must assume that the integral of $g$ over $X$ equals zero. In view of assertion (ii) this can be realized by choosing $g$ in such a way that $P_Sg=0$. Having chosen such a $g$, assertion (iii) implies the convergence of the series $\sum_{m=0}^{\infty}P_S^m g$ to $g$. Concluding, if we fix $\varepsilon\in[0,1]$ and $g\in L^1(X)$ such that $|g|\leq\varepsilon$ and $P_Sg=0$, then $g+1$ is the density of a probability measure that is $\varepsilon$-invariant under $S$, i.e. the formula $$\label{nu}
\nu(A)=\mu(A)+\int_{A}g(x)d\mu(x)\quad\hbox{for every }A\in\mathcal A$$ defines a probability measure that is $\varepsilon$-invariant under $S$.
To see that in many cases $g$ can be fixed in such a way that $|g|\leq\varepsilon$ and $P_Sg=0$, we consider in the next two examples the case where $X=[0,1]$ and $S$ is defined by formula $\eqref{S}$.
\[ex3\] Fix strictly monotone functions $f_1,\ldots,f_N\colon[0,1]\to[0,1]$ satisfying $\eqref{c2}$, $\eqref{c1}$ and $\eqref{f'}$. We assume that $f_N(0)\geq f_n(1)$ for $n\leq N-1$ and further that $|f_N'|\geq\frac{1}{2}$. Then choose an integrable function $g_0\colon [0,f_{N}(0)]\to[0,\varepsilon]$ and extend it to an integrable function $g\colon[0,1]\to[-\varepsilon,\varepsilon]$ by putting $g=-\frac{1}{|f_N'|}\sum_{n=1}^{N-1}|f_n'|\big(g_0\circ f_n\circ f_N^{-1}\big)$ on $(f_{N}(0),1]$. Since in the considered case $P_S$ is of the form $\eqref{PS}$, we have $P_Sg=\sum_{n=1}^{N}|f_n'|\big(g\circ f_n\big)=0$.
The last example shows not only how to choose a $g$ with $|g|\leq\varepsilon$ and $P_Sg=0$, but also how to choose such a $g$ so that the integral in $\eqref{nu}$ can be calculated explicitly.
\[ex4\] Put $X=[0,1]$ and let $S$ be defined by formula $\eqref{S}$ with the $f_n$ given by $\eqref{affine}$, where $\alpha_1,\ldots,\alpha_N$ are non-zero real numbers satisfying $\eqref{1}$. Fix $\varepsilon\in[0,1]$ and real numbers $\gamma_1,\ldots,\gamma_{N}\in[-\varepsilon,\varepsilon]$ such that $\sum_{n=1}^N |\alpha_n|\gamma_n=0$. Choose $g$ to be constant, equal to $\gamma_n$, on every interval $I_n=(\min\{f_n(0),f_n(1)\},\max\{f_n(0),f_n(1)\})$. Then $P_Sg=\sum_{n=1}^N|\alpha_n|\gamma_n=0$. Finally, according to $\eqref{nu}$ we conclude that the formula $$\nu(A)=\sum_{n=1}^N(1+\gamma_n)l(A\cap I_n)$$ defines a Borel probability measure that is $\varepsilon$-invariant under $S$.
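A further illustration of the recipe from Remark \[rem43\] (our own example, not from the text): for the transformation with $N=2$ and $\alpha_1=\alpha_2=\frac{1}{2}$, the function $g(x)=\varepsilon\cos(2\pi x)$ satisfies $|g|\leq\varepsilon$ and $P_Sg=0$, since $\cos(\pi x)+\cos(\pi(x+1))=0$; hence $1+g$ is the density of a measure $\varepsilon$-invariant under $S$. A numerical check:

```python
import math

eps = 0.25                                  # any eps in [0, 1]

def g(x):
    return eps * math.cos(2.0 * math.pi * x)

def P_S(f, x):
    # Frobenius-Perron operator (PS) for f_1(x) = x/2, f_2(x) = (x+1)/2.
    return 0.5 * f(x / 2.0) + 0.5 * f((x + 1.0) / 2.0)

xs = [i / 100.0 for i in range(101)]
assert max(abs(P_S(g, x)) for x in xs) < 1e-12     # P_S g = 0 on a grid
assert all(1.0 + g(x) >= 0.0 for x in xs)          # 1 + g is nonnegative
# midpoint-rule check that 1 + g integrates to 1 over [0, 1]:
approx = sum(1.0 + g((i + 0.5) / 1000.0) for i in range(1000)) / 1000.0
assert abs(approx - 1.0) < 1e-6
print("ok")
```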
Further results
===============
We begin this section with a generalization of Theorem \[thm31\]. To formulate the result, we recall some definitions.
A linear operator $P\colon L^1(X)\to L^1(X)$ is said to be a *Markov operator* if $Pf\geq 0$ and $\|Pf\|=\|f\|$ for every $f\in L^1(X)$ such that $f\geq 0$. It is easy to see that every Frobenius–Perron operator is a special type of Markov operator. We say that a sequence $(f_m)_{m\in\mathbb N}$ of functions from $L^1(X)$ is *weakly Cesàro convergent* to a function $f\in L^1(X)$ if $$\lim_{m\to\infty}\frac{1}{m}\sum_{k=1}^m\int_Xf_k(x)h(x)d\mu(x)=\int_Xf(x)h(x)d\mu(x)$$ for every $h\in L^{\infty}(X)$. A Markov operator $P\colon L^1(X)\to L^1(X)$ such that $P1=1$ is said to be *ergodic* if for every density $f\in L^1(X)$ the sequence $(P^mf)_{m\in\mathbb N}$ is weakly Cesàro convergent to $1$. Note that if $P\colon L^1(X)\to L^1(X)$ is a Markov operator and $\varphi\in L^1(X)$ is a solution of equation $\eqref{E}$, then $$\int_Xg(x)d\mu(x)=\int_X\varphi(x)d\mu(x)-\int_XP\varphi(x)d\mu(x)=0.$$ Thus condition $\eqref{g}$ is necessary for $g\in L^1(X)$ in order that equation $\eqref{E}$ have a solution in $L^1(X)$.
\[thm51\] Assume $\eqref{g}$ and let $P\colon L^1(X)\to L^1(X)$ be an ergodic Markov operator. Then equation $\eqref{E}$ has a solution in $L^1(X)$ if and only if the sequence $\big(\sum_{k=0}^{m-1}\frac{m-k}{m}P^{k}g\big)_{m\in\mathbb N}$ converges in $L^1(X)$. Moreover, every solution $\varphi\in L^1(X)$ of equation $\eqref{E}$ is of the form $$\varphi=\lim_{m\to\infty}\sum_{k=0}^{m-1}\frac{m-k}{m}P^{k}g+c,$$ where $c$ is a real constant.
Since $P$ is an ergodic Markov operator, it follows that $1$ is the unique density such that $P1=1$; indeed, assuming that there exists another density $f\in L^1(X)$ such that $Pf=f$, we would have $$\begin{aligned}
\int_Xf(x)h(x)d\mu(x)&=\lim_{m\to\infty}\frac{1}{m}\sum_{k=1}^m\int_XP^kf(x)h(x)d\mu(x)=\int_Xh(x)d\mu(x)\end{aligned}$$ for every $h\in L^\infty(X)$, which is impossible in the case where $f\neq 1$. Now from [@LM1994 Theorem 5.2.2] (see also Proposition 5.2.1 in the same source) we conclude that for every density $f\in L^1(X)$ the sequence $\big(\frac{1}{m}\sum_{k=1}^{m}P^kf\big)_{m\in\mathbb N}$ converges to $1$ in $L^1(X)$, and by the linearity of $P$ we deduce that for every $f\in L^1(X)$ the sequence $\big(\frac{1}{m}\sum_{k=1}^{m}P^kf\big)_{m\in\mathbb N}$ converges to $\int_Xf(x)d\mu(x)$ in $L^1(X)$. In particular, making use of $\eqref{g}$, that is, of the fact that the integral of $g$ over $X$ vanishes, we obtain $$\label{Pg}
\lim_{m\to\infty}\frac{1}{m}\sum_{k=1}^{m}P^kg=0.$$
Assume first that the sequence $\big(\sum_{k=0}^{m-1}\frac{m-k}{m}P^{k}g\big)_{m\in\mathbb N}$ converges in $L^1(X)$. Fix a real constant $c$ and set $\varphi=\lim_{m\to\infty}\sum_{k=0}^{m-1}\frac{m-k}{m}P^{k}g+c$. The linearity and continuity of $P$ jointly with the equality $P1=1$ and $\eqref{Pg}$ imply $$\begin{aligned}
P\varphi+g&=\lim_{m\to\infty}\sum_{k=0}^{m-1}\frac{m-k}{m}P^{k+1}g+Pc+g\\
&=\lim_{m\to\infty}\sum_{k=0}^{m-1}\frac{m-k}{m}P^{k}g+\lim_{m\to\infty}\frac{1}{m}\sum_{k=1}^{m}P^{k}g+c=\varphi.\end{aligned}$$
Assume now that $\varphi\in L^1(X)$ satisfies $\eqref{E}$. Then, by the linearity of $P$, we have $$\frac{m-k}{m}P^k\varphi=\frac{m-k}{m}P^{k+1}\varphi+\frac{m-k}{m}P^kg$$ for all $m\in\mathbb N$ and $k\in\{0,\ldots,m-1\}$. Summing the above equation over $k=0,\ldots,m-1$ for fixed $m$ leads to $$\begin{aligned}
\varphi-\frac{1}{m}\sum_{k=1}^{m}P^k\varphi=\sum_{k=0}^{m-1}\frac{m-k}{m}P^{k}g\end{aligned}$$ for every $m\in\mathbb N$. Finally, passing with $m$ to $\infty$ and making use of the fact that the sequence $\big(\frac{1}{m}\sum_{k=1}^{m}P^k\varphi\big)_{m\in\mathbb N}$ converges to $\int_X\varphi(x)d\mu(x)$ in $L^1(X)$, we conclude that the sequence $\big(\sum_{k=0}^{m-1}\frac{m-k}{m}P^{k}g\big)_{m\in\mathbb N}$ converges in $L^1(X)$ and that $$\begin{aligned}
\varphi-\int_X\varphi(x)d\mu(x)=\lim_{m\to\infty}\sum_{k=0}^{m-1}\frac{m-k}{m}P^{k}g,\end{aligned}$$ which completes the proof. $\square$
Theorem \[thm51\] generalizes Theorem \[thm31\] in two directions, because there are Markov operators that are not Frobenius–Perron operators and there are transformations $S$ that are not exact, but whose corresponding Frobenius–Perron operators are ergodic. For example, it is easy to see that the operator defined in Section 3 by formula can be ergodic, but not exact. Moreover, it fails to be a Frobenius–Perron operator in the case where at least one of the functions $f_n$ does not satisfy Luzin’s condition N, but it is still a Markov operator in such a case.
The second result of this section shows that equation can have exactly one integrable solution; note that in such a case we must have $P1\neq 1$.
\[thm52\] Assume that the operator $P\colon L^1(X)\to L^1(X)$ is linear and continuous, and that for every density $f\in L^1(X)$ the sequence $(P^mf)_{m\in\mathbb N}$ converges to the trivial function in $L^1(X)$. Then equation has a solution in the space $L^1(X)$ if and only if the series $\sum_{m=0}^{\infty}P^m g$ converges in $L^1(X)$. Moreover, every solution $\varphi\in L^1(X)$ of equation is of the form $$\label{varphi}
\varphi=\sum_{m=0}^{\infty}P^m g.$$
By the linearity of $P$ it is easy to see that the sequence $(P^mf)_{m\in\mathbb N}$ converges to the trivial function in $L^1(X)$ for every $f\in L^1(X)$.
Assume first that the series $\sum_{m=0}^{\infty}P^m g$ converges in $L^1(X)$. Setting $\varphi=\sum_{m=0}^{\infty}P^m g$ and applying the linearity and continuity of $P$ we obtain $$P\varphi+g=\sum_{m=0}^{\infty}P^{m+1}g+g=\sum_{m=0}^{\infty}P^m g=\varphi.$$
Assume now that $\varphi\in L^1(X)$ satisfies . By the linearity of $P$ we have $P^k\varphi=P^{k+1}\varphi+P^kg$ and hence $$\sum_{k=0}^m P^k g=\varphi-P^{m+1}\varphi$$ for every $m\in\mathbb N$. Passing with $m$ to $\infty$ we deduce that the series $\sum_{k=0}^\infty P^k g$ converges in $L^1(X)$ and that holds.$\square$
To give an example of a realization of the assumptions of Theorem \[thm52\] fix, to the end of this paper, strictly monotone functions $f_1,\ldots,f_N\colon[0,1]\to[0,1]$ satisfying condition and consider the operator $P_0\colon L^1([0,1])\to L^1([0,1])$ defined by $$P_0f=\sum_{n=1}^{N}|f_n'|\big(f\circ f_n\big).$$ Obviously, $P_0$ is linear. To see that $P_0$ is continuous note that yields $$\label{p}
\int_AP_0f(x)dx=\sum_{n=1}^{N}\int_{A}|f_n'(x)|f(f_n(x))dx=\int_{\bigcup_{n=1}^{N}f_n(A)}f(y)dy$$ for all nonnegative $f\in L^1([0,1])$ and Lebesgue measurable sets $A\subset[0,1]$.
Assume now that the family $\{f_1,\ldots,f_N\}$ forms an iterated function system and let $A_*$ be its *attractor*, i.e. $$A_*=\bigcap_{m\in\mathbb N}A_m,$$ where $A_0=[0,1]$ and $A_{m}=\bigcup_{n=1}^{N}f_n(A_{m-1})$ for every $m\in\mathbb N$. Fix a nonnegative $f\in L^1([0,1])$. According to we have $$\|P_0^mf\|=\int_{A_m}f(y)dy$$ for every $m\in\mathbb N$, and as the sequence $(A_m)_{m\in\mathbb N}$ is descending we get $$\lim_{m\to\infty}\|P_0^mf\|=\int_{A_*}f(y)dy.$$ In consequence, we have proved the following lemma.
\[lem53\] If the family $\{f_1,\ldots,f_N\}$ forms an iterated function system with the attractor of Lebesgue measure zero, then for every nonnegative $f\in L^1([0,1])$ the sequence $(P_0^mf)_{m\in\mathbb N}$ converges to the trivial function in $L^1([0,1])$.
As an immediate consequence of Theorem \[thm52\] and Lemma \[lem53\], we obtain the following result.
\[cor54\] Assume that the family $\{f_1,\ldots,f_N\}$ forms an iterated function system whose attractor has Lebesgue measure zero. Then equation has a solution in $L^1([0,1])$ if and only if the series $\sum_{m=0}^{\infty}P_0^m g$ converges in $L^1([0,1])$. Moreover, every solution $\varphi\in L^1([0,1])$ of equation is of the form $$\varphi=\sum_{m=0}^{\infty}P_0^m g.$$
Observe that the assumptions of Corollary \[cor54\] are satisfied if the $f_n$ are defined by with real numbers $\alpha_1,\ldots,\alpha_N$ and non-zero real numbers $\beta_1,\ldots,\beta_N$ such that $$\begin{aligned}
\label{1a}
0&\leq\min\{\beta_1,\alpha_1+\beta_1\}<\max\{\beta_1,\alpha_1+\beta_1\}\leq\min\{\beta_2,\alpha_2+\beta_2\}\\
&<\max\{\beta_2,\alpha_2+\beta_2\}\leq\cdots\leq\min\{\beta_N,\alpha_N+\beta_N\}\\
&<\max\{\beta_N,\alpha_N+\beta_N\}\leq 1\end{aligned}$$ and $$\label{1b}
\bigcup_{n=1}^{N}[\min\{\beta_n,\alpha_{n}+\beta_{n}\},\max\{\beta_n,\alpha_{n}+\beta_{n}\}]\neq[0,1].$$ It is easy to see that the family $\{f_1,\ldots,f_N\}$ forms an iterated function system and its attractor $A_*$ has Lebesgue measure zero. Thus by Corollary \[cor54\] we conclude (in contrast to the counterpart case from Section 3) that now equation has exactly one solution $\varphi\in L^1([0,1])$ and it is of the form with $c=0$.
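As a numerical illustration of Corollary \[cor54\] (a sketch under assumptions not made in the paper: the middle-thirds Cantor maps $f_1(x)=x/3$, $f_2(x)=x/3+2/3$, a uniform grid, and linear interpolation), one can check both that $\|P_0^m 1\|=(2/3)^m$, the Lebesgue measure of $A_m$, and that the partial sums of $\sum_{m}P_0^m g$ converge to a solution of $\varphi=P_0\varphi+g$:

```python
import numpy as np

# Middle-thirds Cantor system f1(x) = x/3, f2(x) = x/3 + 2/3 on a grid.
x = np.linspace(0.0, 1.0, 2001)

def P0(fvals):
    # P0 f = sum_n |f_n'| (f o f_n); here |f_n'| = 1/3 for both maps.
    return (np.interp(x / 3.0, x, fvals)
            + np.interp(x / 3.0 + 2.0 / 3.0, x, fvals)) / 3.0

# ||P0^m 1|| equals the Lebesgue measure (2/3)^m of A_m, so P0^m f -> 0.
f = np.ones_like(x)
for _ in range(5):
    f = P0(f)
print(f[0])  # stays constant, equal to (2/3)**5

# Partial sums of sum_m P0^m g approach the unique solution of phi = P0 phi + g.
g = x - 0.5
phi = np.zeros_like(x)
term = g.copy()
for _ in range(120):
    phi += term
    term = P0(term)
residual = np.max(np.abs(phi - (P0(phi) + g)))
print(residual)  # tiny: phi solves the equation up to the truncated tail
```

Since $P_0$ maps affine functions to affine functions, linear interpolation introduces essentially no discretization error in this particular sketch.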
Acknowledgements {#acknowledgements .unnumbered}
================
This research was supported by the University of Silesia Mathematics Department (Iterative Functional Equations and Real Analysis program).
[^1]: There are actually two remarks numbered 5.6.1 in the book.
|
---
abstract: 'The electronic properties of pentavalent-ion (Nb$^{5+}$, Ta$^{5+}$, and I$^{5+}$) doped anatase and rutile TiO$_2$ are studied using spin-polarized GGA+*U* calculations. Our calculated results indicate that these two phases of TiO$_2$ exhibit different conductive behavior upon doping. For doped anatase TiO$_2$, some up-spin-polarized Ti 3*d* states lie near the conduction band bottom and cross the Fermi level, showing an *n*-type half-metallic character. For doped rutile TiO$_2$, the Fermi level is pinned between two up-spin-polarized Ti 3*d* gap states, showing an insulating character. These results account well for the experimentally observed different electronic transport properties of Nb (Ta)-doped anatase and rutile TiO$_2$.'
author:
- Kesong Yang
- Ying Dai
- Baibiao Huang
- Yuan Ping Feng
title: 'Origin of the different conductive behavior in pentavalent-ion-doped anatase and rutile TiO$_2$'
---
Transparent conducting oxides (TCOs) have many applications in optoelectronic devices such as flat panel displays, organic light-emitting diodes and solar cells. As one of the TCOs, Sn-doped In$_2$O$_3$ is widely used because of its excellent optical transparency and electrical transport property.[@Edwards2004_DaltonTran] However, owing to the high cost of indium and the increasing demand for high-performance TCOs, many efforts have been made to develop new TCO materials.[@Tadatsugu2005SST; @Wang2010JAP] Recently, as one potential candidate of TCOs, Nb (Ta)-doped anatase TiO$_2$ has attracted considerable attention because of its high electrical conductivity and optical transparency.[@Furubayashi2005APL; @Hitosugi2005JJAP; @Furubayashi2005TSF; @Furubayashi2007JAP; @Gillispie2007JAP; @Gillispie2007JMR; @Hoang2008APE; @Archana2011APL] However, the origin of its high conductivity is still controversial. Wan *et al.*[@Wan2006APL] found that Nb-doped TiO$_2$ grown on a (0001) Al$_2$O$_3$ substrate shows a much larger resistivity than that grown on a (100) SrTiO$_3$ substrate, and hence the Nb diffusion into the SrTiO$_3$ substrate was thought to lead to the high conductivity of Nb-doped TiO$_2$. Meanwhile, Furubayashi *et al.*[@Furubayashi2006APL] confirmed that Nb-doped TiO$_2$ forms a rutile phase on the (0001) Al$_2$O$_3$ substrate but an anatase phase on the other substrates, and hence they suggested that the higher resistivity of Nb-doped TiO$_2$ grown on (0001) Al$_2$O$_3$ substrates is caused by the formation of the rutile phase. Interestingly, later experiments further verified that Nb (Ta)-doped anatase TiO$_2$ is metallic but Nb (Ta)-doped rutile TiO$_2$ is insulating.[@Zhang2007JAP; @Barman2011APL] Therefore, one may speculate that Nb (Ta) doping leads to different conductive properties in anatase and rutile TiO$_2$, i.e., conductive for the anatase phase but insulating for the rutile phase.[@Yang2009ICMAT]
In principle, a pentavalent dopant, such as Nb$^{5+}$, Ta$^{5+}$, and I$^{5+}$,[@Yang_2008_CM; @Yang2012REVIEW] releases one more electron into TiO$_2$ than a Ti$^{4+}$ ion does, and thus introduces donor levels.[@Finazzi_2008_JCP; @Hitosugi2008APE] Although Nb-doped anatase TiO$_2$ has been studied using standard density functional theory (DFT) calculations,[@Hitosugi2008APE; @Liu2008APL; @Kamisaka2009JCP] standard DFT calculations within either the local density approximation (LDA) or the generalized gradient approximation (GGA) cannot properly deal with the strongly correlated Ti 3*d* electrons.[@Finazzi_2008_JCP; @Yang_2010_PRB_TiO] In the present work, we studied the electronic properties of pentavalent-ion (Nb$^{5+}$, Ta$^{5+}$, and I$^{5+}$) doped anatase and rutile TiO$_2$, respectively, using spin-polarized GGA+*U* calculations, which give a proper description of the Ti 3*d* orbitals.[@Finazzi_2008_JCP; @Yang_2010_PRB_TiO; @Yang2009CPC] Our calculations indicate that Nb (Ta, I)-doped anatase TiO$_2$ shows an *n*-type half-metallic character, while Nb (Ta, I)-doped rutile TiO$_2$ shows an insulating character. These results give a good explanation for the experimentally observed different conductive behavior in Nb (Ta)-doped TiO$_2$.
The spin-polarized GGA+*U* electronic structure calculations were carried out using the Vienna *ab initio* simulation package (VASP).[@VASP_PRB; @VASP_CMS] A 108-atom $3\times 3\times 3$ supercell of the anatase phase and a 72-atom $2\times 2\times 3$ supercell of the rutile phase are used to model Nb (Ta, I)-doped TiO$_2$, in which one Ti atom is substituted by a Nb (Ta, I) atom. The projector augmented wave (PAW) potentials are used to treat electron-ion interactions, and the generalized gradient approximation parameterized by Perdew and Wang (PW91) is used for the exchange-correlation functional.[@PAW; @PW91] A cut-off energy of 500 eV and a $3\times 3\times 3$ *k*-point mesh centered at the $\Gamma$ point are used. The lattice parameters and all the atomic positions are fully relaxed until all components of the residual forces are smaller than 0.01 eV/Å. In our GGA+*U* calculations, the on-site effective *U* parameter (*U*$_{eff}$=*U*-*J*=5.8 eV) proposed by Dudarev *et al*. is adopted for the Ti 3*d* electrons.[@Dudarev1998PRB; @Anisimov1991PRB]
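For concreteness, the setup above roughly corresponds to an INCAR fragment of the following form. This is a hypothetical sketch rather than the authors' actual input; the tag values mirror the text, and the LDAUL/LDAUU/LDAUJ entries assume a POTCAR species order of Ti followed by O (a dopant species would need its own entries).

```
ISPIN    = 2         ! spin-polarized calculation
GGA      = 91        ! PW91 exchange-correlation functional
ENCUT    = 500       ! plane-wave cutoff (eV)
IBRION   = 2         ! ionic relaxation (conjugate gradient)
ISIF     = 3         ! relax ions, cell shape, and cell volume
EDIFFG   = -0.01     ! force convergence criterion (eV/Angstrom)
LDAU     = .TRUE.    ! switch on DFT+U
LDAUTYPE = 2         ! Dudarev scheme, U_eff = U - J
LDAUL    = 2 -1      ! apply U to d electrons of Ti; none for O
LDAUU    = 5.8 0.0   ! U_eff = 5.8 eV on Ti 3d
LDAUJ    = 0.0 0.0
```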
![(Color online) Calculated (a) TDOS and PDOS plots for (b) Ti 3*d* and (c) Nb 4*d* (Ta 5*d* ) states for Nb (Ta)-doped anatase TiO$_2$. The Fermi level is indicated by the vertical dotted line at 0 eV. []{data-label="f1"}](Fig1.eps)
![(Color online) Calculated (a) TDOS and PDOS plots for (b) Ti 3*d* and (c) Nb 4*d* (Ta 5*d* ) states for Nb (Ta)-doped rutile TiO$_2$. The Fermi level is indicated by the vertical dotted line at 0 eV. []{data-label="f2"}](Fig2.eps)
To examine the substitutional Nb (Ta, I) doping effects on the electronic properties of TiO$_2$, we calculated the total density of states (TDOS) and partial density of states (PDOS) for Nb (Ta)-doped anatase (see Fig. \[f1\]) and rutile (see Fig. \[f2\]), and the TDOS and PDOS for I-doped anatase (see Fig. \[f3\]) and rutile (see Fig. \[f4\]). For Nb (Ta)-doped anatase TiO$_2$, the calculated TDOS shows that it is spin-polarized, and some up-spin-polarized gap states extend from the conduction band (CB) into the band gap. These up-spin-polarized gap states are located just below the CB bottom, and cross the Fermi level, indicating an *n*-type half-metallic character. This is in good agreement with the experimentally observed excellent conductive property in Nb (Ta)-doped anatase TiO$_2$.[@Furubayashi2005APL; @Hitosugi2005JJAP; @Furubayashi2005TSF; @Furubayashi2007JAP; @Gillispie2007JAP; @Gillispie2007JMR; @Hoang2008APE; @Archana2011APL] The PDOS shows that the Nb 4*d* (Ta 5*d* ) states mix with the Ti 3*d* states in the whole CB, indicating that the Nb (Ta) dopant forms a strong Nb (Ta)-O bond. It is also noted that the up-spin Ti 3*d* orbitals strongly hybridize with the Nb 4*d* (Ta 5*d* ) orbitals, and as will be further discussed below, the Ti 3*d* orbitals mostly contribute to the up-spin gap states. For the Nb (Ta)-doped rutile phase (see Fig. \[f2\]), as in the case of doped anatase, Nb 4*d* (Ta 5*d* ) states spread over the whole CB, and two fully spin-polarized gap states separated by a gap of about 1.1 eV lie in the band gap. Its PDOS shows that these two gap states also derive mainly from the Ti 3*d* states; however, the Fermi level lies between the two gap states, showing an insulating character. These calculated results can account for the experimentally observed much higher resistivity in the Nb (Ta)-doped rutile phase than in the anatase phase.[@Wan2006APL; @Furubayashi2006APL; @Zhang2007JAP; @Barman2011APL]
![(Color online) Calculated (a) TDOS and PDOS plots for (b) Ti 3*d* and (c) I 5*s*/5*p* states for I-doped anatase TiO$_2$. The Fermi level is indicated by the vertical dotted line at 0 eV. []{data-label="f3"}](Fig3.eps)
![(Color online) Calculated (a) TDOS and PDOS plots for (b) Ti 3*d* and (c) I 5*s*/5*p* states for I-doped rutile TiO$_2$. The Fermi level is indicated by the vertical dotted line at 0 eV. []{data-label="f4"}](Fig4.eps)

In addition, it is worth mentioning that our calculated results are different from previous GGA+*U* calculations done by Morgan *et al.*,[@Morgan2009JMC] in which the Nb (Ta)-doped anatase and rutile TiO$_2$ both show insulating properties. This discrepancy may be attributed to the following two reasons: (1) Morgan *et al.* applied a *U* parameter of 4.2 eV for Ti 3*d* electrons, which is much smaller than the value of 5.8 eV used in this work. (2) In our GGA+*U* calculations, all the degrees of freedom for Nb (Ta)-doped TiO$_2$, including lattice parameters and all the atomic positions, are fully relaxed. In contrast, in Morgan *et al.*’s calculations, only the internal degrees of freedom are allowed to relax. Therefore, the difference in *U* values and structural optimization methods may be responsible for the different electronic properties of Nb (Ta)-doped TiO$_2$ in our calculations from those of Morgan *et al.* In fact, Orita also found a metallic character in Nb-doped anatase TiO$_2$ using GGA+*U* calculations,[@Orita2010JJAP] in which all the atomic coordinates and lattice constants are optimized. In particular, recent hybrid density functional calculations also show that Nb (Ta)-doped anatase TiO$_2$ is metallic, while Nb (Ta)-doped rutile TiO$_2$ is semiconducting.[@Yamamoto2012PRB] In summary, these related studies further confirm our GGA+*U* calculations. As a consequence, we can conclude that the conducting character of Nb (Ta)-doped anatase and the insulating character of Nb (Ta)-doped rutile are their intrinsic properties.
For I-doped anatase and rutile TiO$_2$, similar to the case of Nb (Ta) doping, an *n*-type half-metallic character occurs in I-doped anatase phase but an insulating character occurs in I-doped rutile phase. The calculated TDOS and PDOS are shown in Fig \[f3\] for anatase phase and Fig \[f4\] for rutile phase. However, different from that of Nb (Ta) doping, a double filled gap state mostly consisting of I 5*s* orbital appears in the band gap, which is located just above the valence band maximum. This indicates that I dopant exists as I$^{5+}$ (5*s*$^2$5*p*$^0$) in TiO$_2$, which is consistent with the standard GGA calculations.[@Yang_2008_CM]
To understand the origin of the different conductive behavior associated with the spin-polarized Ti 3*d* states in Nb (Ta, I)-doped anatase and rutile TiO$_2$, we take I-doped TiO$_2$ as an example to show its three-dimensional spin density distribution in Fig. \[f5\]. For I-doped anatase TiO$_2$, its spin density mostly comes from the four equivalent second-nearest Ti ions around the I dopant (see Fig. \[f5\](a)). These four equivalent Ti ions share the one electron donated by the I$^{5+}$ ion among their Ti 3*d* orbitals, and thus produce a total spin magnetic moment of 1.0 $\mu_B$. Therefore, to a first approximation, these four equivalent Ti ions should exist as Ti$^{+3.75}$ (*d*$^{1/4}$). Indeed, experimental core-level photoemission spectra measurements showed a minor peak at a binding energy below that of the Ti$^{4+}$ ion,[@Hitosugi2008APE] and this chemical shift corresponds to an increase of the valence electron density on the Ti 3*d* orbitals. For I-doped rutile TiO$_2$, in contrast, the spin density is mainly contributed by the two equivalent second-nearest Ti ions (see Fig. \[f5\](b)). These two Ti ions share one electron and produce a total spin magnetic moment of 1.0 $\mu_B$, and thus one can assume that the two equivalent Ti ions exist as Ti$^{+3.5}$ (*d*$^{1/2}$). As a result, it is expected that a lower binding energy of Ti$^{+3.5}$ ions than that of Ti$^{+3.75}$ ions can be observed in Nb (Ta, I)-doped rutile TiO$_2$ through experimental core-level photoemission spectra measurements. Furthermore, the increase of the electron density on the Ti 3*d* orbitals directly leads to the splitting between Ti 3*d* occupied states and unoccupied states, which is responsible for the insulating character of I-doped rutile TiO$_2$. Similar spin density distributions also occur in Nb (Ta)-doped anatase and rutile TiO$_2$.
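The formal-valence bookkeeping above is just a matter of dividing the single donated electron among the $n$ equivalent Ti ions; a trivial sketch (illustrative, not from the paper's calculations):

```python
# One donated electron shared equally by n equivalent Ti^{4+} ions lowers the
# formal valence to 4 - 1/n (and gives each ion a d-occupancy of 1/n).
def formal_ti_valence(n_shared):
    return 4 - 1.0 / n_shared

print(formal_ti_valence(4))  # anatase: four equivalent Ti -> 3.75
print(formal_ti_valence(2))  # rutile:  two equivalent Ti  -> 3.5
```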
In summary, we studied the electronic properties of Nb (Ta, I)-doped anatase and rutile TiO$_2$ by spin-polarized GGA+*U* calculations. In doped anatase TiO$_2$, the Fermi level is pinned in some up-spin-polarized gap states near the CBM, showing an *n*-type half-metallic conductive property. In doped rutile, in contrast, two localized states with a gap about 1.1 eV are introduced in the band gap, and the Fermi level lies between them, showing an insulating character. Therefore, to prepare the TiO$_2$-based TCOs through pentavalent-ion-doping, it is essential to avoid the phase transition from anatase to rutile. Our theoretical calculations may provide some useful guidance to develop TiO$_2$-based TCOs.
This work is supported by the National Basic Research Program of China (973 program, 2007CB613302), the National Science Foundation of China under Grants 11174180 and 20973102, and the Natural Science Foundation of Shandong Province under Grant No. ZR2011AM009. Y.P.F. is thankful for the support of the Singapore National Research Foundation Competitive Research Program (Grant No. NRF-G-CRP 2007-05).
|
---
abstract: 'Generative adversarial imitation learning (GAIL) is a popular inverse reinforcement learning approach for jointly optimizing policy and reward from expert trajectories. A primary question about GAIL is whether applying a certain policy gradient algorithm to GAIL attains a global minimizer (i.e., yields the expert policy), for which existing understanding is very limited. Such global convergence has been shown only for the linear (or linear-type) MDP and linear (or linearizable) reward. In this paper, we study GAIL under general MDP and for nonlinear reward function classes (as long as the objective function is strongly concave with respect to the reward parameter). We characterize the global convergence with a sublinear rate for a broad range of commonly used policy gradient algorithms, all of which are implemented in an alternating manner with stochastic gradient ascent for reward update, including projected policy gradient (PPG)-GAIL, Frank-Wolfe policy gradient (FWPG)-GAIL, trust region policy optimization (TRPO)-GAIL and natural policy gradient (NPG)-GAIL. This is the first systematic theoretical study of GAIL for global convergence.'
author:
- 'Ziwei Guan, Tengyu Xu, Yingbin Liang'
bibliography:
- 'references.bib'
title: When Will Generative Adversarial Imitation Learning Algorithms Attain Global Convergence
---
Conclusion
==========
In this paper, we study four GAIL algorithms, each of which is implemented in an alternating fashion between a popular policy gradient algorithm for the policy update and a gradient ascent step for the reward update. Our focus is on investigating whether incorporating these policy gradient algorithms into the GAIL framework still yields a global convergence guarantee. We show that all these GAIL algorithms converge globally as long as the objective function is properly regularized (to be strongly concave) with respect to the reward parameter. We also anticipate that the analysis tools we develop here will benefit future theoretical studies of similar problems, including GANs, min-max optimization, and bi-level optimization algorithms.
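To illustrate the alternating scheme (a minimal hypothetical sketch, not an experiment from the paper: a single-state MDP with a linear reward $r_w(a)=w_a$, an $\ell_2$ regularizer $\lambda\|w\|^2$ making the objective strongly concave in $w$, and arbitrarily chosen step sizes), a projected policy-gradient step in the spirit of PPG-GAIL alternates with gradient ascent on the reward:

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex {p >= 0, sum p = 1}.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

# Single-state "MDP": the policy is a distribution over 4 actions, the reward
# is linear, and the min-max objective is  <pi_expert - pi, w> - lam*||w||^2
# (policy minimizes, reward maximizes; strongly concave in w).
pi_expert = np.array([0.1, 0.2, 0.3, 0.4])
pi = np.full(4, 0.25)              # learner policy, initialized uniform
w = np.zeros(4)                    # reward parameters
lam, eta_pi, eta_w = 0.5, 0.05, 0.2

for _ in range(3000):
    w = w + eta_w * (pi_expert - pi - 2.0 * lam * w)   # reward gradient ascent
    pi = project_simplex(pi + eta_pi * w)              # projected policy step

print(np.round(pi, 4))  # approaches pi_expert while w shrinks toward zero
```

With the regularizer in place, the alternating iterates contract toward the expert policy, mirroring the global convergence statement above.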
Acknowledgments {#acknowledgments .unnumbered}
===============
The work was supported in part by the U.S. National Science Foundation under the grants CCF-1761506, CCF-1801846, and CCF-1909291.
[**Supplementary Materials**]{}
|
[**Damping signatures in future neutrino oscillation experiments** ]{}
Abstract
We discuss the phenomenology of damping signatures in the neutrino oscillation probabilities, where either the oscillating terms or the probabilities can be damped. This approach is a possibility for tests of non-oscillation effects in future neutrino oscillation experiments, where we mainly focus on reactor and long-baseline experiments. We extensively motivate different damping signatures due to small corrections by neutrino decoherence, neutrino decay, oscillations into sterile neutrinos, or other mechanisms, and classify these signatures according to their energy (spectral) dependencies. We demonstrate, using the example of short-baseline reactor experiments, that damping can severely alter the interpretation of results, [[*e.g.*]{}]{}, it could fake a value of ${\sin^2(2 \theta_{13})}$ smaller than the one provided by Nature. In addition, we demonstrate how a neutrino factory could constrain different damping models with emphasis on how these different models could be distinguished, [[*i.e.*]{}]{}, how easily the actual non-oscillation effects could be identified. We find that the damping models cluster in different categories, which can be much better distinguished from each other than models within the same cluster.
Introduction
============
Neutrino oscillations are by far the most plausible description of transitions among different neutrino flavor eigenstates [@Fukuda:1998mi; @Ahmad:2002jz; @Ahmed:2003kj; @Ahn:2002up; @Eguchi:2002dm; @Araki:2004mb; @Ashie:2004mr]. However, there have historically been other attempts in the literature to describe these transitions with other mechanisms as well as neutrino oscillations combined with such other mechanisms. These scenarios include neutrino wave packet decoherence [@Giunti:1998wq; @Giunti:2003ax; @Giunti:1992sx; @Grimus:1998uh; @Cardall:1999ze], neutrino decay [@Bahcall:1972my; @Barger:1982vd; @Valle:1983ua; @Barger:1998xk; @Pakvasa:1999ta; @Barger:1999bg; @Lindner:2001fx; @Lindner:2001th], oscillations into sterile neutrinos [@Strumia:2002fw; @Maltoni:2004ei], neutrino absorption (see, [[*e.g.*]{}]{}, [Ref.]{} [@DeRujula:1983ya]), and neutrino quantum decoherence [@Lisi:2000zt; @Benatti:2000ph; @Adler:2000vf; @Ohlsson:2000mj; @Benatti:2001fa; @Gago:2002na; @Barenboim:2004wu; @Barenboim:2004ev; @Morgan:2004vv]. A combined scenario is, for example, the combination of neutrino oscillations and neutrino decay (see, [[*e.g.*]{}]{}, [Refs.]{} [@Lindner:2001fx; @Lindner:2001th]). Although these other mechanisms, leading to “non-standard effects”, are not such successful descriptions for flavor transitions as neutrino oscillations are (in fact, they are strongly disfavored [@Ashie:2004mr; @Araki:2004mb]), they could still give rise to small corrections to the neutrino oscillations. These non-standard effects need to be described in a framework together with neutrino oscillations and can be constrained by current and future experiments (see, [[*e.g.*]{}]{}, [Ref.]{} [@Valle:2003uv] for a recent review). Thus, we will assume that the leading order effect in neutrino flavor transitions is due to neutrino oscillations, whereas the next-to-leading order effects are described by different “damping mechanisms” of the neutrino oscillations.
Since any non-standard effect may point towards new interesting physics beyond the standard model, the test of small corrections due to these effects should be one of the main objectives in future high-precision neutrino oscillation physics. If such effects are present, the assumption of standard three-flavor neutrino oscillations will inevitably lead to an erroneous derivation of the elements of the mixing matrix $U$ or the mass squared differences. We therefore define “non-oscillation effects” as any modification of the three-flavor neutrino oscillation probabilities in vacuum as well as in matter. For example, the LSND anomaly [@Aguilar:2001ty] could be an indication of non-oscillation effects according to this definition. Since future reactor and long-baseline neutrino oscillation experiments are expected to measure the subleading neutrino oscillation parameters ${\sin^2(2 \theta_{13})}$ and ${\delta_{\mathrm{CP}}}$ with high precision, we mainly discuss the impact of non-oscillation effects, and possible constraints on them, in the context of these experiments.
In principle, one could think of many different approaches to test non-oscillation effects with future long-baseline experiments:
Neutral-currents
: can be used to test the conservation of probability, [[*i.e.*]{}]{}, $P_{\alpha e} + P_{\alpha \mu} + P_{\alpha \tau} =1$ (see, [[*e.g.*]{}]{}, [Ref.]{} [@Barger:2004db]). However, at long-baseline experiments, uncertainties in the neutral-current cross-sections and the charged-current contamination lead to a precision of only about $10~\%-15~\%$ [@Barger:2004db]. In addition, even if some non-oscillation effects are found, there will be no information on the nature of the effects, whereas effects conserving the overall probability cannot be detected at all.
The detection of $\boldsymbol{\nu_\tau}$
: can complement the information on $P_{\alpha e}$ and $P_{\alpha \mu}$ to test the conservation of probability (see, [[*e.g.*]{}]{}, [Ref.]{} [@Donini:2002rm]). Since $\nu_\tau$ detection is much more sophisticated and less efficient than the detection of $\nu_e$ and $\nu_\mu$ due to the higher $\tau$ production threshold, this is also a non-trivial test. If there are non-oscillation effects, then the information will be better than in the preceding case, since one will know which neutrino oscillation probabilities are affected.
Unitarity triangles
: for the lepton sector can be constructed [@Farzan:2002ct; @Zhang:2004hf]. However, since there is no simple relationship between the quantities of the unitarity triangles and the neutrino oscillation observables, this approach may not be the most feasible for the lepton sector.
Tests of distinctive signatures,
: [[*i.e.*]{}]{}, spectral (energy) dependent effects, could directly identify certain classes of non-oscillation effects [@Valle:2003uv; @Huber:2001zw; @Huber:2001de; @Huber:2002bi]. The advantage of such tests is that the effect could be directly identified if it produces a unique signature in the energy spectrum. In addition, this test does not depend upon normalization errors of the event rates, which are likely to constrain the first two measurements. However, there might be strong correlations with the neutrino oscillation parameters.
In addition, in the future, it may be possible to resolve the line width and shape of the ${}^7$Be solar neutrino line [@Bahcall:1993ej; @Bahcall:1994cf] and extract the temperature distribution as well as the modulation of this line, which could be caused by next-to-leading order effects. Thus, measuring the ${}^7$Be line with very high energy resolution may be a way to determine these next-to-leading order effects. Such possible precision neutrino experiments include, for example, a bromine cryogenic thermal detector proposed in [Refs.]{} [@Fiorini:1991; @Alessandrello:1995ih].
In this study, we will focus on the tests of distinctive signatures in which we introduce “damping signatures” as an abstract concept for a class of possible effects entering at probability level.[^1] In general, small Hamiltonian effects, see, [[*e.g.*]{}]{}, [Ref.]{} [@Benatti:2001fa], may be as important as the kind of damping effects that we will describe in this study. Such Hamiltonian effects could lead to direct changes in the effective neutrino oscillation parameters. Nevertheless, those effects cannot be treated in the framework presented here. We will use the observation that mechanisms, such as decoherence or decay, lead to exponential damping in the neutrino oscillation probabilities. However, the effect might be stronger for low or high energies, [[*i.e.*]{}]{}, the spectral (energy) dependence of the damping might be different. A common feature for many of the discussed models is that they will lead to fewer neutrinos (of all active flavors) being detected than what is expected from the three-flavor neutrino oscillations. For all other models, only the oscillating terms of the neutrino oscillation probabilities will be damped, while the total number of active neutrinos remains constant. Note that the damping signature approach does not cover all possible models, but many models can, at least in the limit of small corrections, lead to some exponential damping effect.
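A rough numerical illustration of the kind of signature just described (illustrative parameter values, not results of this study): damping only the oscillating term of a two-flavor survival probability by a generic factor $\alpha=e^{-\lambda}$ shrinks the oscillation depth while leaving the average depletion, distorting the spectral shape from which ${\sin^2(2 \theta_{13})}$ would be extracted:

```python
import numpy as np

def p_ee(E_MeV, L_km, s2=0.1, dm2=2.5e-3, alpha=1.0):
    # Two-flavor survival probability with a damping factor alpha on the
    # oscillating term; alpha = 1 recovers P = 1 - s2 * sin^2(Delta).
    delta = 1.27 * dm2 * L_km / (E_MeV * 1e-3)   # oscillation phase (radians)
    return 1.0 - s2 * (1.0 - alpha * np.cos(2.0 * delta)) / 2.0

E = np.linspace(2.0, 8.0, 400)        # reactor-like antineutrino energies (MeV)
p_std = p_ee(E, 1.5)                  # undamped spectrum at L = 1.5 km
p_damped = p_ee(E, 1.5, alpha=0.5)    # oscillating term damped by one half

spread_std = p_std.max() - p_std.min()
spread_damped = p_damped.max() - p_damped.min()
print(spread_std, spread_damped)      # damping halves the oscillation depth
```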
Our study is organized as follows. In [Sec.]{} \[sec:phenomenology\], we will present and classify different forms of the damping signatures. For the reader who is not interested in different models for damping signatures, at least [Sec.]{} \[sec:gendescription\] and the examples in [Table]{} \[tab:models\] should be read to be able to follow the rest of the study. Next, in [Sec.]{} \[sec:dampedprob\], we will give and discuss the damped neutrino oscillation probabilities arising from the effects described by their signatures. For the reader who is most interested in possible experimental implications, [Sec.]{} \[sec:dampedtwoflavor\] should summarize the most relevant features, whereas the rest of this section deals with the more technical three-flavor cases. Then, in [Secs.]{} \[sec:appl1\] and \[sec:appl2\], we will discuss the physics of these damping signatures and give two different applications in the framework of a complete experiment simulation. In particular, in [Sec.]{} \[sec:appl1\], we demonstrate how such damping signatures can modify the interpretation of physical results for future reactor experiments, whereas in [Sec.]{} \[sec:appl2\], we discuss how a neutrino factory could constrain different damping signatures and how these different signatures could be distinguished. Finally, in [Sec.]{} \[sec:summary\], we will summarize our work and present our conclusions.
Phenomenology of damping signatures {#sec:phenomenology}
===================================
In this section, we motivate, in a phenomenological manner, the form of the damping signatures used for the rest of this study.
General description of damped neutrino oscillations in vacuum {#sec:gendescription}
-------------------------------------------------------------
We start with three-flavor neutrino oscillations in vacuum, which can be described by the (undamped) vacuum oscillation probabilities $$\begin{aligned}
P_{\alpha \beta} \equiv P(\nu_\alpha \rightarrow \nu_\beta) & = & \left| \langle \nu_\beta |
U \, \operatorname{diag}\left( 1 , \exp \left( -{\rm i}\frac{{\Delta m_{21}^2}L}{2 E} \right),
\exp \left( -{\rm i}\frac{{\Delta m_{31}^2}L}{2 E} \right)
\right) \, U^\dagger | \nu_\alpha \rangle \right|^2 \nonumber \\ & = &
\sum\limits_{i,j=1}^{3} U_{\alpha j} \, U_{\beta j}^* \, U_{\alpha
i}^* \, U_{\beta i} \, \exp(- {\rm i} \Phi_{ij} ).\end{aligned}$$ Here $U$ is the leptonic mixing matrix in vacuum, $\Delta m_{ij}^2
\equiv m_i^2 - m_j^2$ the mass squared difference, and $\Phi_{ij}
\equiv \Delta m_{ij}^2 L/(2 E)$ the oscillation phase. By defining $$J_{ij}^{\alpha\beta} \equiv U_{\alpha j}
U_{\beta j}^* U_{\alpha i}^* U_{\beta i}
\quad {\rm and} \quad \Delta_{ij} \equiv \frac{\Delta
m_{ij}^2L}{4E} = \frac{m_i^2-m_j^2}{4E}L = \frac{\Phi_{ij}}2,$$ the oscillation probabilities may be written as $$\begin{aligned}
P_{\alpha\beta} &=&
\sum_{i,j = 1}^3 \operatorname{Re}(J_{ij}^{\alpha\beta}) -
4 \sum_{1\leq i<j \leq 3} \operatorname{Re}(J_{ij}^{\alpha\beta})\sin^2 (\Delta_{ij}) -
2 \sum_{1\leq i<j \leq 3} \operatorname{Im}(J_{ij}^{\alpha\beta})\sin (2\Delta_{ij})
\nonumber \\
\label{equ:vacprob}
&=&\sum_{i=1}^3 J_{ii}^{\alpha\beta} +
2 \sum_{1\leq i < j \leq 3}
|J_{ij}^{\alpha\beta}| \cos(2\Delta_{ij}+\arg J_{ij}^{\alpha\beta}),\end{aligned}$$ where, in the first line of the equation, the first two terms are [*CP*]{}-conserving and the third term is the source of any [*CP*]{} violation; in the second line, this corresponds to $\arg J_{ij}^{\alpha\beta}$ being the source of any [*CP*]{} violation. As will be discussed, there may be reasons to assume that [[Eq.]{} (\[equ:vacprob\])]{} does not give the correct neutrino oscillation probabilities. Effects that might spoil this approach of calculating neutrino oscillation probabilities include loss of wave packet coherence and neutrino decay. The effective result of such processes is to introduce damping factors into the oscillating terms of the neutrino oscillation probabilities. We define a general damping effect to be an effect that alters the neutrino oscillation probabilities to the form $$\begin{aligned}
P_{\alpha\beta} &=& \sum\limits_{i,j=1}^{3}
U_{\alpha j} \, U_{\beta j}^* \, U_{\alpha
i}^* \, U_{\beta i} \, \exp(- {\rm i} \Phi_{ij} ) D_{ij} \nonumber \\
&=&
\sum_{i,j = 1}^3 \operatorname{Re}(J_{ij}^{\alpha\beta})D_{ij} -
4 \sum_{1\leq i<j \leq 3} \operatorname{Re}(J_{ij}^{\alpha\beta})D_{ij}\sin^2 (\Delta_{ij}) -
2 \sum_{1\leq i<j \leq 3} \operatorname{Im}(J_{ij}^{\alpha\beta})D_{ij}\sin (2\Delta_{ij})
\nonumber \\
&=&\sum_{i=1}^3 J_{ii}^{\alpha\beta} D_{ii} +
2 \sum_{1\leq i < j \leq 3}
|J_{ij}^{\alpha\beta}| D_{ij}\cos(2\Delta_{ij}+\arg J_{ij}^{\alpha\beta}),
\label{equ:damping}\end{aligned}$$ where the damping factors $$\label{equ:dfactor}
D_{ij} = \exp\left(-\alpha_{ij}\frac{|\Delta m_{ij}^2|^\xi L^\beta}
{E^\gamma}\right)$$ have been introduced and we have assumed that $D_{ij} =
D_{ji}$. Obviously, as $D_{ij} \rightarrow 1$, we regain the undamped oscillation probabilities given in [[Eq.]{} (\[equ:vacprob\])]{}. In [[Eq.]{} (\[equ:dfactor\])]{}, $\alpha_{ij} \ge 0$ is a non-negative damping coefficient matrix, and $\beta$, $\gamma$, and $\xi$ are numbers that describe the “signature”, [[*i.e.*]{}]{}, the $L$ ($\beta$) and $E$ ($\gamma$) dependencies as well as the dependence on the mass squared differences ($\xi$). In addition, the parameter $\xi$ distinguishes two interesting cases:
$\boldsymbol{\xi>0}$:
: In this case, only the oscillating terms will be damped, since $\Delta m_{ii}^2 = 0$ by definition.
$\boldsymbol{\xi=0}$:
: The whole oscillation probability can be damped (depending on $\alpha_{ij}$), since the terms that are independent of the oscillation phases are also affected.
Therefore, we expect two completely different results for these two cases. In general, [[Eq.]{} (\[equ:dfactor\])]{} introduces twelve new parameters, which can be used to model many non-standard contributions that enter at the level of the oscillation probabilities (not the Hamiltonian). We will give some examples of such contributions below. Although we expect these contributions to be small, it is rather impractical to deal with so many new parameters, which means that some simplifications need to be made. First, note that the parameter $\beta$ is not measurable if only one baseline is considered and can therefore be absorbed in $\alpha_{ij}$. For two baselines, it can, in principle, be resolved if all the other parameters are known. Second, for a specific model, there may be relations among the different $\alpha_{ij}$’s that actually imply much fewer independent parameters. For a very simple model, the number of parameters can even reduce to one. Since we are mainly interested in the spectral signatures, [[*i.e.*]{}]{}, in $\gamma$, we will often use $\alpha_{ij} \equiv \alpha$ to estimate the magnitude of different effects. Third, it will turn out that the parameter $\xi$ depends strongly on the model, since, as discussed above, it describes two completely different classes of models. Hence, we will ultimately be left with one free parameter $\alpha$ and several fixed, model-dependent parameters $\beta$, $\gamma$, and $\xi$.
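The two cases above can be checked numerically. The following is a minimal sketch (all function names, parameter values, and the unit convention — $\Delta m^2$ in $\mathrm{eV}^2$, $L$ in km, $E$ in GeV, with the usual conversion factor 1.267 — are our own illustration, not part of the formalism): it evaluates the damped probability of [[Eq.]{} (\[equ:damping\])]{} with the damping factor of [[Eq.]{} (\[equ:dfactor\])]{} and verifies that $\xi>0$ leaves the total active-flavor probability intact, while $\xi=0$ damps the overall normalization.

```python
import numpy as np

def pmns(th12, th13, th23, dcp=0.0):
    """Leptonic mixing matrix U in the standard parametrization."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    ed = np.exp(-1j * dcp)  # e^{-i delta_CP}
    return np.array([
        [c12 * c13,                         s12 * c13,                        s13 * ed],
        [-s12 * c23 - c12 * s23 * s13 / ed, c12 * c23 - s12 * s23 * s13 / ed, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 / ed, -c12 * s23 - s12 * c23 * s13 / ed, c23 * c13],
    ])

def damped_prob(al, be, U, dm2, L, E, a=0.0, b=1.0, g=1.0, xi=1.0):
    """P(nu_al -> nu_be) with D_ij = exp(-a |dm2_ij|^xi L^b / E^g).

    dm2[i][j] in eV^2, L in km, E in GeV; the phase Phi_ij equals
    2 * 1.267 * dm2_ij * L / E in these units. Units are folded into a.
    """
    P = 0.0
    for i in range(3):
        for j in range(3):
            J = U[al, j] * np.conj(U[be, j]) * np.conj(U[al, i]) * U[be, i]
            phase = np.exp(-2j * 1.267 * dm2[i][j] * L / E)
            D = np.exp(-a * abs(dm2[i][j]) ** xi * L ** b / E ** g)
            P += (J * phase * D).real
    return P
```

Note that for $\xi>0$ the diagonal factors satisfy $D_{ii}=1$ automatically (since $\Delta m_{ii}^2=0$), so summing over the final flavor still gives unity; for $\xi=0$ every term, including the phase-independent ones, is suppressed.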
A model for damped neutrino oscillations in matter
--------------------------------------------------
In some cases, we will use neutrino propagation in matter, since, for instance, neutrino factories operate at very long-baselines for which matter effects become important. We use an approach similar to [[Eq.]{} (\[equ:damping\])]{}, which should describe the damping signatures as minor perturbations to neutrino oscillations in (constant) matter as long as they are small enough: $$P_{\alpha\beta} = \sum_{i,j = 1}^3 \operatorname{Re}(\tilde J_{ij}^{\alpha\beta})\tilde D_{ij} -
4 \sum_{1\leq i<j \leq 3}
\operatorname{Re}(\tilde J_{ij}^{\alpha\beta})\tilde D_{ij}\sin^2 (\tilde \Delta_{ij}) -
2 \sum_{1\leq i<j \leq 3}
\operatorname{Im}(\tilde J_{ij}^{\alpha\beta})\tilde D_{ij}\sin (2\tilde\Delta_{ij}),
\label{equ:dampingmatter}$$ where the tildes denote the effective parameters for neutrinos propagating in matter (for instance, $\tilde J_{ij}^{\alpha\beta} =
\tilde{U}_{\alpha j} \, \tilde{U}_{\beta j}^* \, \tilde{U}_{\alpha
i}^* \, \tilde{U}_{\beta i}$, where $\tilde U$ is the effective leptonic mixing matrix in matter, [[*i.e.*]{}]{}, the matrix re-diagonalizing the Hamiltonian with the matter potential included). In general, the damping effects may not enter directly as multiplicative factors in the interference terms among different matter eigenstates.[^2] However, in this study, we assume small damping effects that should act as perturbations which, to leading order, give rise to neutrino oscillation probabilities in matter of the same form as the ones in vacuum.
Thus, we use the propagation in constant matter and apply the damping signatures to the mass eigenstates in matter. This means that we discuss signatures which depend on the mass eigenstates in matter. They may come from wave packet decoherence, neutrino decay, neutrino oscillations into sterile neutrinos, neutrino absorption, quantum decoherence, or other mechanisms. Strictly speaking, this model does not describe many of these mechanisms exactly, since a complete re-diagonalization of the Hamiltonian might be necessary (such as for Majoron decay in matter; see, [[*e.g.*]{}]{}, [Refs.]{} [@Berezhiani:1987gf; @Giunti:1992sy]). However, we treat only small effects in matter acting as a perturbation to the neutrino oscillation mechanism and do not consider transitions from active into active neutrinos, which would require a more complicated treatment (such as decay into other active neutrino states). Therefore, this model should be sufficient as a first approximation, since we will later on use either short baselines or mainly discuss effects in the $P_{\mu \mu}$ channel, which are not affected by matter effects to first order in the ratio of the mass squared differences $\Delta
m_{21}^2/\Delta m_{31}^2$ and the mixing parameter $s_{13} \equiv
\sin(\theta_{13})$ [@Akhmedov:2004ny].
Examples of different damping signatures {#sec:examples}
----------------------------------------
  Damping type              Signature $D_{ij}$                                                        Unit for $\alpha$                            $\beta$   $\gamma$   $\xi$
  ------------------------- ------------------------------------------------------------------------- -------------------------------------------- --------- ---------- -------
  Wave packet decoherence   $\exp \left( - \sigma_E^2 \frac{(\Delta m_{ij}^2)^2 L^2}{8E^4} \right)$    $\mathrm{MeV}^2$ or $\mathrm{GeV}^2$         2         4          2
  Decay                     $\exp \left( - \alpha \frac{L}{E} \right)$                                 $\mathrm{GeV \cdot km^{-1}}$                 1         1          0
  Oscillations to $\nu_s$   $\exp \left( - \epsilon \frac{L^2}{(2E)^2} \right)$                        $\mathrm{eV}^4$                              2         2          0
  Absorption                $\exp \left( - \alpha L E \right)$                                         $\mathrm{GeV}^{-1} \cdot \mathrm{km}^{-1}$   1         $-1$       0
  Quantum decoherence I     $\exp \left( - \alpha L E^2 \right)$                                       $\mathrm{GeV}^{-2} \cdot \mathrm{km}^{-1}$   1         $-2$       0
  Quantum decoherence II    $\exp \left( - \kappa \frac{(\Delta m_{ij}^2)^2}{E^2} \right)$             $\mathrm{eV}^{-2}$                           1 or 2    2          2

  : [\[tab:models\] Different examples for damping signatures considered in this study. The parameter $\gamma$ represents the spectral (energy) dependence of the signature. The parameter $\alpha$ has in some places been re-defined for convenience (see main text) unless it corresponds exactly to our definition of $\alpha$. The quantum decoherence models I and II are two examples of signatures motivated by quantum decoherence (see [Table]{} \[tab:qdecoherence\]). The quantum decoherence model II absorbs $\beta$ in the definition of $\kappa \equiv \alpha L^\beta$ in order to describe two of the models from [Table]{} \[tab:qdecoherence\]. Note that another commonly used quantum decoherence signature is the same as the decay signature.]{}
The general damping signature in [[Eq.]{} (\[equ:dfactor\])]{} may seem rather abstract. Therefore, let us now motivate such damping signatures by different mechanisms, which are summarized in [Table]{} \[tab:models\].
### Intrinsic wave packet decoherence {#intrinsic-wave-packet-decoherence .unnumbered}
Intrinsic wave packet decoherence is an effect that appears even in standard neutrino oscillation treatments [@Giunti:1998wq; @Giunti:2003ax; @Giunti:1992sx; @Grimus:1998uh; @Cardall:1999ze]. It naturally emerges from any quantum mechanical model that does not assume neutrino mass eigenstates propagating as plane waves, or from any quantum field theoretical treatment. In principle, intrinsic decoherence may not be distinguishable from macroscopic energy averaging (see, [[*e.g.*]{}]{}, discussions in [Refs.]{} [@Kiers:1996zj; @Giunti:2003mv; @Lipkin:2003st]). Therefore, it is natural to expect that tests of this signature will be limited by the knowledge of the energy resolution of the detector.
We adopt the treatment in [Ref.]{} [@Giunti:1998wq], which uses averaging over Gaussian wave packets. In this approach, the loss of coherence can only be described at probability level. It leads to factors $\exp \left[-(L/L_{ij}^{\mathrm{coh}})^2 \right]$ in [[Eq.]{} (\[equ:dfactor\])]{}, where $L_{ij}^{\mathrm{coh}} = 4 \sqrt{2} \sigma_x
E^2/|\Delta m_{ij}^2|$ and $\sigma_x$ is the spatial wave packet width. In this case, the damping descriptions in vacuum and matter using [Eqs.]{} (\[equ:damping\]), (\[equ:dfactor\]), and (\[equ:dampingmatter\]) are accurate. For the damping signature, we obtain $$D_{ij} = \exp \left[ - \left( \frac{L}{L_{ij}^{\mathrm{coh}}}
\right)^2 \right] =
\exp \left[ - \left( \frac{\sqrt{2}\sigma_E}{E} \frac{\Delta m_{ij}^2 L}{4 E} \right)^2 \right] =
\exp \left( - \sigma_E^2 \frac{(\Delta m_{ij}^2)^2 L^2}{8 E^4} \right)
\label{equ:coherence}$$ in vacuum and the analogous signature $\tilde{D}$ in matter. Here we have introduced a wave packet spread in energy $\sigma_E \equiv 1/(2
\sigma_x)$, since we will later derive an upper bound for this quantity and directly compare it to the energy resolution of a detector. The typical units of $\sigma_E$ are $\mathrm{MeV}$ or $\mathrm{GeV}$. By comparing [Eqs.]{} (\[equ:dfactor\]) and (\[equ:coherence\]), we can identify $\alpha_{ij} = \sigma_E^2/8$, $\beta=2$, $\gamma=4$, and $\xi=2$. Note that, in this case, the $\alpha_{ij}$’s do not depend on the indices $i$ and $j$.
In order to better understand [[Eq.]{} (\[equ:coherence\])]{}, we note that $\Delta m_{ij}^2 L/(4 E)$ is of order unity for the first oscillation maximum: $$D_{ij} = \exp \left[ - \left( \frac{\sqrt{2}\sigma_E}{E} \frac{\Delta m_{ij}^2 L}{4 E} \right)^2 \right] = \exp \left[ - \left( \frac{\sigma_E}{\sqrt{2} E} \Phi_{ij} \right)^2 \right] \simeq
\underbrace{
\exp \left[ - \left( \frac{1}{\sqrt{2}\sigma_xE} \mathcal{O}(1)
\right)^2 \right]}_{\mathrm{value \, at \, oscillation \, maximum}} \, .
\label{equ:coherence2}$$ From [[Eq.]{} (\[equ:coherence2\])]{}, we find three major implications: First, no effect will be observed if $\sigma_E \ll E$, because the oscillation phase is usually of order unity (or less). Second, since the decoherence damping factor always comes together with an oscillation phase factor with the same $\Delta m_{ij}^2$ \[[[*cf.*]{}]{}, [[Eq.]{} (\[equ:damping\])]{}\], it damps the solar and atmospheric oscillating terms in one probability formula equally. For atmospheric oscillation experiments, this means that if the solar contribution cannot be neglected, then neither can its damping factor. Third, one expects the largest suppression at low energies independent of the type of oscillation experiment (solar or atmospheric), since in either case the experiment will be operated close to the oscillation maximum. Finally, it is important to keep in mind that this decoherence signature is not an intrinsic property of the neutrinos, but an effect related to the production and detection processes. Therefore, the parameter $\sigma_E$ could be different for different classes of experiments.
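The first implication above is easy to make quantitative. The sketch below (our own illustration; the factor 1.267 converts $\Delta m^2$ in $\mathrm{eV}^2$, $L$ in km, and $E$ in GeV into the dimensionless phase $\Delta_{ij}$) evaluates the damping factor of [[Eq.]{} (\[equ:coherence\])]{} near the first atmospheric oscillation maximum, where $D = \exp[-2(\sigma_E/E)^2\Delta^2]$.

```python
import numpy as np

def wavepacket_damping(sigma_E, dm2, L, E):
    """Wave packet decoherence factor D = exp(-sigma_E^2 (dm2)^2 L^2 / (8 E^4)).

    With dm2 in eV^2, L in km, E and sigma_E in GeV, the oscillation phase is
    Delta = 1.267 * dm2 * L / E, and the exponent equals 2 (sigma_E/E)^2 Delta^2.
    """
    delta = 1.267 * dm2 * L / E
    return np.exp(-2.0 * (sigma_E / E) ** 2 * delta ** 2)
```

At the oscillation maximum ($\Delta \simeq \pi/2$) the factor is essentially unity for $\sigma_E \ll E$ and strongly suppressed for $\sigma_E \sim E$, as stated in the text.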
### Invisible neutrino decay {#invisible-neutrino-decay .unnumbered}
Another example of a damping signature is neutrino decay (see, [[*e.g.*]{}]{}, [Refs.]{} [@Bahcall:1972my; @Barger:1982vd; @Valle:1983ua; @Barger:1998xk; @Pakvasa:1999ta; @Barger:1999bg]). In particular, invisible decay, [[*i.e.*]{}]{}, decay into particles invisible for the detector, leads to a loss of three-flavor unitarity. In this case, the neutrino evolution is given by an effective Hamiltonian $$H_{\rm eff} = H - {\rm i}\Gamma,$$ where $\Gamma \equiv \operatorname{diag}(a_1,a_2,a_3)/2$ in the neutrino mass eigenstate basis, $a_i \equiv \Gamma_i/\gamma_i$, $\Gamma_i$ is the inverse life-time of a neutrino of mass eigenstate $i$ in its own rest frame, and $\gamma_i \equiv E/m_i$ is the time dilation factor. We note that $H$ and $\Gamma$ are both diagonal in the neutrino mass eigenstate basis. The neutrino oscillation probabilities may now be calculated as usual with the exception that, in addition to the phase factor $\exp[-{\rm i}m_i^2L/(2E)]$, a factor of $\exp[-\Gamma_i m_i L/(2 E)]$ is obtained when evolving the neutrino mass eigenstate $\nu_i$. The resulting neutrino oscillation probabilities are of the form of [[Eq.]{} (\[equ:damping\])]{} with $$\label{equ:decay}
D_{ij} = \exp\left(-\frac{\alpha_i + \alpha_j}{2E} L\right),$$ where $\alpha_i = \Gamma_i m_i$, in accordance with [Refs.]{} [@Lindner:2001fx; @Lindner:2001th]. Thus, for neutrino decay, the characteristic signature is $\alpha_{ij} = (\alpha_i +
\alpha_j)/2$, $\beta = \gamma = 1$, and $\xi = 0$.
An example of the above decay is Majoron decay into lighter sterile neutrinos. In this case, it is plausible to assume a quasi-degenerate neutrino mass scheme for the active neutrinos with approximately equal decay rates for all mass eigenstates, since the decay products all have to be considerably lighter than the active neutrinos to obtain fast decay rates due to phase space. The $\alpha_i$’s will then be approximately equal ($\alpha_i = \alpha$ for all $i$) and will typically be given in units of $\mathrm{GeV}/\mathrm{km}$. Note that the decay rate is an intrinsic neutrino property, not an experiment-dependent quantity such as the wave packet decoherence parameter. By comparing [[Eq.]{} (\[equ:decay\])]{} with [[Eq.]{} (\[equ:dfactor\])]{}, we identify that $\alpha$ is the same quantity[^3], $\beta=\gamma=1$, and $\xi=0$. In matter, we use the analogous signature, [[*i.e.*]{}]{}, we let the mass eigenstates in matter decay. In general, this is only a first approximation, since, for example for Majoron decay in matter, a re-diagonalization of the complete Hamiltonian may be necessary; see, [[*e.g.*]{}]{}, [Refs.]{} [@Berezhiani:1987gf; @Giunti:1992sy]. However, since we have assumed equal decay rates for all eigenstates, it should describe the problem exactly, because the mass eigenstates in matter will then also decay with equal rates. In other decay models, the $\alpha_{ij}$’s may no longer be identical. For example, for a hierarchical mass scheme with a normal hierarchy, the mass eigenstate $m_3$ decays much faster than the other two. In this case, the observed effects in atmospheric oscillations would qualitatively be similar, but about a factor of two smaller (since mainly $m_2$ and $m_3$ participate in the oscillation and only one of them decays). However, in matter such a model is much more difficult to treat, since it is not easy to identify the decaying mass eigenstate in matter after the diagonalization of the Hamiltonian.
This problem does not occur with equal decay rates.
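The decay factor of [[Eq.]{} (\[equ:decay\])]{} can be sketched as follows (our own illustration, with hypothetical parameter values; $\alpha_i$ in $\mathrm{GeV}/\mathrm{km}$, $L$ in km, $E$ in GeV). A useful consistency property is that $D_{ij}$ is the geometric mean of the diagonal factors $D_{ii}$ and $D_{jj}$, and that for equal rates the whole probability is rescaled by $\exp(-\alpha L/E)$.

```python
import numpy as np

def decay_damping(alpha_i, alpha_j, L, E):
    """Invisible-decay factor D_ij = exp(-(alpha_i + alpha_j) L / (2 E)),
    where alpha_i = Gamma_i * m_i (illustrative units: GeV/km, km, GeV)."""
    return np.exp(-(alpha_i + alpha_j) * L / (2.0 * E))
```

For a hierarchical scheme, one would simply pass unequal $\alpha_i$'s, reproducing the roughly factor-of-two smaller atmospheric effect discussed above.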
### Oscillations into sterile neutrinos {#sec:oscsteriles .unnumbered}
A natural description of the LSND result [@Aguilar:2001ty] is a light sterile neutrino ([[*i.e.*]{}]{}, a neutrino without weak interactions) that mixes with the active neutrinos. This description is now disfavored for the LSND experiment [@Strumia:2002fw; @Maltoni:2004ei], but small admixtures of light sterile neutrinos cannot be entirely excluded. In particular, for slow enough oscillations into sterile neutrinos, the oscillation signature $\sin^2 \Delta_{4i}$ with $\Delta_{ij} \equiv \Delta
m_{ij}^2 L/(4E)$ translates into damping signatures: $$1 - \epsilon \, \sin^2 \left(\frac{\Delta m_{4i}^2 L}{4E} \right)
\simeq 1 - \epsilon \left( \frac{\Delta m_{4i}^2 L}{4 E} \right)^2
\simeq \exp \left[
- \epsilon \left( \frac{\Delta m_{4i}^2 L}{4 E} \right)^2 \right] \, ,$$ where $\epsilon$ represents the magnitude of the mixing. Thus, the damping coefficient $\alpha$ will (in this case) be determined by the sizes of the mixing and the mass squared differences $\Delta
m_{4i}^2$. We use as a model in vacuum (and the same form in matter) $$D_{ij} = \exp \left( - \alpha_{ij} \frac{L^2}{(2E)^2} \right) = \exp \left( - \epsilon \frac{L^2}{(2E)^2} \right) \, ,
\label{equ:oscillations}$$ where $\epsilon$ contains the information on mixing and $\Delta m^2$ and will be given in units of $\mathrm{eV}^4$ (the mixing factor is dimensionless). Thus, we identify by comparison of [[Eq.]{} (\[equ:oscillations\])]{} with [[Eq.]{} (\[equ:dfactor\])]{} that $\alpha_{ij} = \epsilon/4$, $\beta=\gamma=2$, and $\xi=0$. Note that we only discuss effects independent of $i$ and $j$, which simplifies the problem, but severely restricts the number of applications. In addition, although the coefficient $\epsilon$ is not experiment dependent (since it is an intrinsic neutrino property here), it may (partly because of the independence of $i$ and $j$) depend on the oscillation channel and mass scheme. As an example, let us consider $P_{\mu\mu}$ and a mass scheme with $\Delta
m_{21}^2 \ll \Delta m_{43}^2 < \Delta m_{31}^2$, [[*i.e.*]{}]{}, ${\Delta m_{31}^2}$ is the largest mass squared difference. In this case, one can show that to first approximation $\epsilon \simeq U_{\mu 4}^2 \, U_{\mu 3}^2 (
\Delta m_{43}^2)^2$ (for [*CP*]{} conservation). Thus, $\epsilon$ is suppressed by the flavor content of $\nu_4$ in $\nu_\mu$ and the extra mass squared difference, since all the other mass squared differences with the sterile state are absorbed into the atmospheric oscillation terms. In general, it should be noted that sterile neutrinos are not affected in the same way as active neutrinos when propagating through matter ([[*i.e.*]{}]{}, there is a phase difference due to the neutral-current interactions between matter and the active neutrino flavors). However, the exponential damping signature for oscillations into sterile neutrinos presented here is only valid for short baselines, where matter effects have not yet developed.
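The quality of the small-phase expansion leading to [[Eq.]{} (\[equ:oscillations\])]{} is easy to verify numerically. The sketch below (our own illustration, with an arbitrary small mixing $\epsilon$) compares the exact depletion term $1 - \epsilon\sin^2\Delta_{4i}$ with its exponential damping form.

```python
import numpy as np

def exact_sterile_term(eps, delta41):
    """Exact active-flavor depletion 1 - eps * sin^2(Delta_41)."""
    return 1.0 - eps * np.sin(delta41) ** 2

def damping_form(eps, delta41):
    """Exponential approximation exp(-eps * Delta_41^2), valid for small Delta_41."""
    return np.exp(-eps * delta41 ** 2)
```

For $\Delta_{41} \lesssim 0.1$ the two expressions agree to better than $10^{-5}$, confirming that the damping form is a short-baseline (small-phase) approximation only.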
### Neutrino absorption {#neutrino-absorption .unnumbered}
When neutrinos propagate through matter, there is a small chance of absorption. Neutrino absorption can be described in a fashion similar to neutrino decay. In this case, we assume that an effective Hamiltonian is given by $$H_{\rm eff} = H - {\rm i}\Gamma,$$ where $H$ is the usual neutrino Hamiltonian in matter, $\Gamma$ is given by $$\Gamma = \rho \operatorname{diag}(\sigma_e, \sigma_\mu, \sigma_\tau)/2$$ in the flavor eigenstate basis, $\rho$ is the matter density, and $\sigma_\alpha$ is the absorption cross-section for a neutrino of flavor $\alpha$. If we assume the cross-sections to be relatively small, then the eigenstates of $H_{\rm eff}$ will not differ significantly from the orthogonal eigenstates of $H$. Thus, the first order corrections to the eigenvalues of the effective Hamiltonian will be $$\delta E_i^{(1)} = -{\rm i} \Gamma_{ii} =
-{\rm i} \frac \rho 2 \sum_\alpha |U_{\alpha i}|^2 \sigma_\alpha
\equiv
-{\rm i}\frac \rho 2 \sigma_i,$$ where $\sigma_i$ is an effective cross-section for a neutrino of mass eigenstate $i$. The neutrino oscillation probability is now given by an expression of the form of [[Eq.]{} (\[equ:damping\])]{} with $$D_{ij} = \exp\left(
-\frac{\sigma_i + \sigma_j}{2} \rho L
\right)= \exp\left(
-\frac{\sigma_i(E) + \sigma_j(E)}{2} \rho L
\right) \, ,$$ where we have assumed a constant matter density $\rho$. The signature of this scenario is given by $\beta = 1$, while $\gamma$ equals minus the power of the energy dependence of the cross-sections. Note that, since the cross-sections increase with energy, $\gamma$ will be negative.
If all neutrino flavor cross-sections were equal (or approximately equal), then the effective matter eigenstate cross-sections would also be equal.[^4] For the neutrino energies relevant to a neutrino factory, the neutrino-nucleon cross-sections are approximately linear in energy [@Gandhi:1998ri]. Thus, in this energy range, the damping signature is given by $\alpha = \rho \sigma(E_0)/E_0$, $\beta = 1$, $\gamma = -1$, and $\xi = 0$, where $\sigma(E_0)$ is the cross-section at energy $E_0$. At higher energies, the cross-sections increase at a slower rate and if damping effects are studied at these energies, then the effective damping parameter $\gamma$ lies in the interval $-1 <
\gamma < 0$.
It should be noted that the standard neutrino absorption effects (by weak interactions) are very small for energies typical for neutrino oscillation experiments. However, there could be non-standard absorption effects and the cross-sections of these effects should behave in a manner similar to the standard absorption.
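The first-order construction of the effective mass-eigenstate cross-sections can be sketched as follows (our own illustration; `rot` builds a sample real mixing matrix, and all numerical values are hypothetical). It verifies that equal flavor cross-sections give equal $\sigma_i$'s, as assumed in the text, and that unitarity preserves the sum of the cross-sections.

```python
import numpy as np

def rot(i, j, th):
    """Real rotation in the (i, j) plane, used to build a sample mixing matrix."""
    R = np.eye(3)
    c, s = np.cos(th), np.sin(th)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = s, -s
    return R

def effective_sigmas(U, sigma_flavor):
    """First-order effective cross-sections: sigma_i = sum_alpha |U_{alpha i}|^2 sigma_alpha."""
    return (np.abs(U) ** 2).T @ np.asarray(sigma_flavor)

def absorption_damping(sig_i, sig_j, rho, L):
    """D_ij = exp(-(sigma_i + sigma_j) rho L / 2) for constant matter density rho."""
    return np.exp(-0.5 * (sig_i + sig_j) * rho * L)
```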
### Quantum decoherence {#quantum-decoherence .unnumbered}
It has been argued that quantum decoherence could be an alternative description of neutrino flavor transitions. Fits to data by different collaborations ([[*e.g.*]{}]{}, Super-Kamiokande [@Ashie:2004mr] and KamLAND [@Araki:2004mb]) have been performed and these clearly disfavor a decoherence explanation for neutrino flavor transitions. However, quantum decoherence may still be a marginal effect in addition to neutrino oscillations and could give rise to damping factors of the type given in [[Eq.]{} (\[equ:dfactor\])]{}.
Quantum decoherence arises when a neutrino system is coupled to an environment (or a reservoir or a bath), which could consist of, for example, a space-time “foam” [@Lisi:2000zt] leading to new physics beyond the standard model. Thus, quantum decoherence may be a feature of quantum gravity. In order to find the formulas describing quantum decoherence, it is necessary to use the Liouville equation with decoherence effects of the Lindblad form [@Lindblad:1975ef].
Throughout the literature [@Lisi:2000zt; @Benatti:2000ph; @Adler:2000vf; @Gago:2000qc; @Gago:2000nv; @Ohlsson:2000mj; @Benatti:2001fa; @Gago:2002na; @Barenboim:2004wu; @Barenboim:2004ev; @Morgan:2004vv], the effects of loss of quantum coherence in neutrino oscillations have been studied. Although the signatures derived by different authors seem to vary, the decoherence effects are of the same form as [[Eq.]{} (\[equ:dfactor\])]{}. However, there might be additional effects on the oscillation phases. In [Table]{} \[tab:qdecoherence\], we give a brief summary of some of the signatures that are present in the literature; these examples could be used to motivate the numerical testing of such signatures.
Reference Signature $D_{ij}$ Unit for $\alpha$ $\beta$ $\gamma$ $\xi$
--------------------------------------------------------------------------- -------------------------------------------------------------------- -------------------------------------------- --------- ---------- -------
Lisi [*et al.*]{} [@Lisi:2000zt] and Morgan [*et al.*]{} [@Morgan:2004vv] $\exp \left( - \alpha L \right)$ $\mathrm{km}^{-1}$ 1 0 0
Lisi [*et al.*]{} [@Lisi:2000zt] and Morgan [*et al.*]{} [@Morgan:2004vv] $\exp \left( - \alpha \frac{L}{E} \right)$ $\mathrm{GeV} \cdot \mathrm{km}^{-1}$ 1 1 0
Lisi [*et al.*]{} [@Lisi:2000zt] and Morgan [*et al.*]{} [@Morgan:2004vv] $\exp \left( - \alpha L E^2 \right)$ $\mathrm{GeV}^{-2} \cdot \mathrm{km}^{-1}$ 1 $-2$ 0
Adler [@Adler:2000vf] $\exp \left( - \alpha \frac{(\Delta m_{ij}^2)^2 L}{E^2} \right)$ $\mathrm{GeV}^{-1} $ 1 2 2
Ohlsson [@Ohlsson:2000mj] $\exp \left( - \alpha \frac{(\Delta m_{ij}^2)^2 L^2}{E^2} \right)$ dimensionless 2 2 2
: [\[tab:qdecoherence\] Different signatures that might arise from quantum decoherence and the references in which they are motivated.]{}
### Other signatures {#other-signatures .unnumbered}
In principle, what we have presented above is just a collection of interesting signatures that could be responsible for damping of neutrino oscillations. However, there are also other possibilities, which we have decided not to investigate further in this study. These signatures include, for example, heavy isosinglet neutrinos [@Schechter:1980gr; @Schechter:1980gk] and neutrino oscillations in different extra dimension scenarios [@Dvali:1999cn; @Mohapatra:1999zd; @Barbieri:2000mg; @Mohapatra:2000wn; @Morgan:2004vv; @Hallgren:2004mw].
### Combined signatures {#combined-signatures .unnumbered}
In most cases, if there is a damping effect, then it would be natural (and easy) to assume that one type of effect gives a clearly dominating contribution. However, if an experiment is carried out with some specific setup, then contributions from different scenarios might be of the same order. In such a case, the form of [[Eq.]{} (\[equ:dfactor\])]{} is spoiled. For example, in the case of neutrino decay combined with neutrino absorption, the matrices $\Gamma$ are simply added, which results in the damping signatures $$D_{ij} = \exp\left[
-\left(\frac{\alpha_{ij}^{\rm decay}}{E} +
\alpha_{ij}^{\rm abs}E\right)L
\right].$$ In general, simply multiplying the damping factors (which is what the above treatment yields) might not give the correct combined damping, and different combined cases might behave in other ways. However, since the different damping signatures have different energy dependencies, there will only be a limited energy range where a combined treatment is necessary. In this study, we do not consider combined signatures.
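In the decay-plus-absorption example above, the two terms in the exponent scale oppositely with energy, so each dominates on one side of the crossover energy $E_* = \sqrt{\alpha^{\rm decay}/\alpha^{\rm abs}}$, which delimits the region where a combined treatment matters. A small sketch (all parameter values are illustrative):

```python
import math

def combined_damping(a_decay, a_abs, L, E):
    """D = exp[-(a_decay/E + a_abs*E) * L], cf. the combined signature above."""
    return math.exp(-(a_decay / E + a_abs * E) * L)

a_decay, a_abs, L = 1.0, 0.01, 1.0   # illustrative units
E_star = math.sqrt(a_decay / a_abs)  # both exponent terms are equal here

# The decay term dominates well below E_star, absorption well above it:
assert a_decay / 1.0 > a_abs * 1.0        # E = 1 << E_star = 10
assert a_decay / 100.0 < a_abs * 100.0    # E = 100 >> E_star
```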
Damped neutrino oscillation probabilities {#sec:dampedprob}
=========================================
In this section, we investigate the effects of damping on specific neutrino oscillation probabilities interesting for future reactor and long-baseline experiments, where we restrict the analytical discussion to the vacuum case.
The damped two-flavor neutrino scenario {#sec:dampedtwoflavor}
---------------------------------------
In a simple two-flavor scenario, the damped neutrino oscillation probabilities take particularly simple forms (just as in the non-damped case). From the two-flavor equivalent of [[Eq.]{} (\[equ:damping\])]{}, we obtain $$\begin{aligned}
\label{equ:2flavs}
P_{\alpha\alpha} &=&
D_{11} c^4 + D_{22} s^4 + \frac{1}{2} D_{21} \sin^2(2\theta) \cos(2\Delta), \\
P_{\beta\beta} &=&
D_{11} s^4 + D_{22} c^4 + \frac{1}{2} D_{21} \sin^2(2\theta) \cos(2\Delta)\end{aligned}$$ for the neutrino survival probabilities and $$\label{equ:2flavo}
P_{\alpha\beta} = P_{\beta\alpha} =
\frac{1}{4} \sin^2(2\theta)[D_{11}+D_{22} - 2D_{21}\cos(2\Delta)]$$ for the neutrino transition probability, where $\nu_\alpha$ is the linear combination $\nu_\alpha = c \nu_1 + s \nu_2$, $\nu_\beta$ is the linear combination that is orthogonal to $\nu_\alpha$, $\Delta
\equiv \Delta_{21}$, $s \equiv \sin(\theta)$, $c \equiv \cos(\theta)$, and $\theta$ is the mixing angle between the two neutrino flavors.
Let us first discuss the case $\xi > 0$ or all $\alpha_{ii} = 0$, which means that all $D_{ii}$ are equal to unity. We refer to this case as “decoherence-like” (probability conserving) damping. The two-flavor formulas then become $$\label{equ:2flavd}
P_{\alpha\beta} =
\delta_{\alpha\beta} + \frac{1}{2}(1-2\delta_{\alpha\beta})\sin^2(2\theta)
[1-D\cos(2\Delta)],$$ where $D \equiv D_{21}$. Below, we will show that expressions reminiscent of these two-flavor formulas will be quite common in the three-flavor counterparts. In the limit $D \rightarrow 0$ (maximal damping), the oscillations are averaged out, [[*i.e.*]{}]{}, $$P_{\alpha\beta} \rightarrow
\delta_{\alpha\beta} [1 - \sin^2 (2 \theta)] + \frac{1}{2} \sin^2 (2 \theta),$$ where the factor $1/2$ is typical for an averaged $\sin^2 (x)$ term. It is also of interest to note, from the form of [[Eq.]{} (\[equ:2flavd\])]{}, that the neutrino transition probabilities can either be smaller or larger than the undamped probabilities depending on the sign of $\cos(2\Delta)$. For instance, the neutrino survival probability $$P_{\alpha\alpha} =
1 - \frac{1}{2}\sin^2 (2 \theta) [1 - D \cos(2\Delta)]
\label{equ:faketheta13}$$ is smaller than the corresponding undamped probability if $\cos(2\Delta)$ is positive and vice versa. Close to the oscillation maximum $\Delta \sim \pi/2$, the factor $\cos(2\Delta)$ will be negative, [[*i.e.*]{}]{}, the damped neutrino survival probability will be larger than the undamped probability, since the oscillations will be partially averaged out. This behavior changes as a function of the neutrino energy at points where $\cos(2\Delta)$ changes sign, [[*i.e.*]{}]{}, at $2\Delta = (2n+1) \pi/2$, $n = 0,1,\hdots$. As a rule of thumb, the damping will lead to larger probabilities close to the oscillation maximum $E_{\mathrm{max}} = \Delta m^2 L/(2 \pi)$ and to smaller probabilities for $E<2 E_{\mathrm{max}}/3$ and $E>2 E_{\mathrm{max}}$. This result will be valid for any survival probability discussed in this study.
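The two-flavor decoherence-like formulas, together with the sign rule just described, can be checked numerically. A minimal sketch (the angle and damping values are illustrative):

```python
import math

def p_surv(theta, Delta, D=1.0):
    """Damped two-flavor survival, P = 1 - 0.5 sin^2(2 theta) [1 - D cos(2 Delta)]."""
    return 1.0 - 0.5 * math.sin(2 * theta) ** 2 * (1.0 - D * math.cos(2 * Delta))

def p_trans(theta, Delta, D=1.0):
    """Damped two-flavor transition, P = 0.5 sin^2(2 theta) [1 - D cos(2 Delta)]."""
    return 0.5 * math.sin(2 * theta) ** 2 * (1.0 - D * math.cos(2 * Delta))

theta, D = 0.6, 0.7  # illustrative values

# Decoherence-like damping (all D_ii = 1) conserves probability:
for Delta in (0.3, math.pi / 2, 2.5):
    assert abs(p_surv(theta, Delta, D) + p_trans(theta, Delta, D) - 1.0) < 1e-12

# Sign rule: damping *raises* the survival probability where cos(2 Delta) < 0
# (near the oscillation maximum) and lowers it where cos(2 Delta) > 0:
assert p_surv(theta, math.pi / 2, D) > p_surv(theta, math.pi / 2)
assert p_surv(theta, 0.3, D) < p_surv(theta, 0.3)
```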
From the form of [[Eq.]{} (\[equ:2flavd\])]{}, it is apparent that if only a small range of $\Delta$’s is studied, then a damping factor may mimic an oscillation signal. The worst such case would be if the damping signature had $\gamma = 2$. This would mean that if one makes a series expansion of $\cos(2\Delta)$ and the exponential of the damping factor, then the energy dependence will be the same to lowest order in the expansion parameters, [[*i.e.*]{}]{}, we will have $$D\cos(2\Delta) = \left[1-\alpha|\Delta m^2|^\xi\frac{L^\beta}{E^2} + \ldots \right]
\left[1-\left(\frac{\Delta m^2L}{4E}\right)^2 + \ldots \right].$$ This effect is also present in a general case with any number of neutrino flavors.
Another interesting case is when $\alpha_{ij} = \alpha_i + \alpha_j$ and $\xi = 0$, which is expected for the neutrino decay and neutrino absorption scenarios. This assumption results in the fact that the damping factor $D_{ij}$ can be written as a product $$\label{equ:pbviol}
D_{ij} = A_i A_j,$$ where $A_i \equiv \exp(-\alpha_i L^\beta/E^\gamma)$ is only dependent on the $i$th mass eigenstate. Then, the neutrino oscillation probabilities are given by $$\begin{aligned}
P_{\alpha\alpha}
&=&
A^2\left[(c^2+\kappa s^2)^2 - \kappa \sin^2(2\theta)
\sin^2(\Delta)\right], \\
P_{\beta\beta}
&=&
A^2\left[(\kappa c^2+s^2)^2 - \kappa \sin^2(2\theta)
\sin^2(\Delta)\right], \\
P_{\alpha\beta}
&=&
\frac{1}{4} A^2 \sin^2(2\theta)[1+\kappa^2 - 2\kappa \cos(2\Delta)],\end{aligned}$$ where $A \equiv A_1$ and $\kappa \equiv A_2/A_1$. It is important to note that, for example, the total probability $P_{\alpha\alpha} + P_{\alpha\beta}$ is not conserved in this case, in fact, we obtain $$\label{equ:dprobtot}
P_{\alpha\alpha} + P_{\alpha\beta} =
A^2\left[c^4 + \kappa^2 s^4 +
\frac{1}{4}\sin^2(2\theta)(1+\kappa^2)\right] = A_1^2 c^2 + A_2^2 s^2 \leq 1,$$ where the equality holds if and only if $A = \kappa = 1$ (because of the form of the $A_i$'s, we have $A_1 = A \leq 1$ and $A_2 = \kappa A \leq 1$, so that $A_1^2 c^2 + A_2^2 s^2 \leq c^2 + s^2 = 1$, with equality exactly when $A = \kappa A = 1$). Thus, we will introduce the term “decay-like” for effects giving rise to damping terms of the form given in [[Eq.]{} (\[equ:pbviol\])]{}.
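The non-conservation of probability is easy to verify numerically: the sum $P_{\alpha\alpha}+P_{\alpha\beta}$ collapses to $A_1^2 c^2 + A_2^2 s^2$, the damped survival fractions of the two mass eigenstates weighted by their content in $\nu_\alpha$, independently of $\Delta$. A minimal sketch (parameter values illustrative):

```python
import math

def decay_probs(theta, Delta, A1, A2):
    """Two-flavor probabilities for a decay-like signature D_ij = A_i A_j."""
    s2, c2 = math.sin(theta) ** 2, math.cos(theta) ** 2
    s2t2 = math.sin(2 * theta) ** 2
    A, kappa = A1, A2 / A1
    Paa = A**2 * ((c2 + kappa * s2) ** 2 - kappa * s2t2 * math.sin(Delta) ** 2)
    Pab = 0.25 * A**2 * s2t2 * (1 + kappa**2 - 2 * kappa * math.cos(2 * Delta))
    return Paa, Pab

theta, Delta, A1, A2 = 0.7, 1.1, 0.9, 0.6  # illustrative values
Paa, Pab = decay_probs(theta, Delta, A1, A2)

# The total probability is not conserved: the oscillatory terms cancel in the
# sum, which equals A1^2 c^2 + A2^2 s^2 <= 1 for any Delta.
s2, c2 = math.sin(theta) ** 2, math.cos(theta) ** 2
assert abs(Paa + Pab - (A1**2 * c2 + A2**2 * s2)) < 1e-12
assert Paa + Pab <= 1.0
```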
In the case of a decay-like signature, there are two special cases which are of particular interest. First, if both mass eigenstates are affected in the same way, [[*i.e.*]{}]{}, $\kappa = 1$, then the resulting neutrino transition probabilities will reduce to the undamped standard neutrino oscillation probabilities suppressed by a factor of $A^2$. This means that all damped probabilities will be smaller than their undamped counterparts. Second, if only one of the mass eigenstates is affected, [[*i.e.*]{}]{}, $A = 1$, then the difference in the $\nu_\alpha$ survival probability compared to the undamped case will be given by $$\Delta P_{\alpha\alpha} \equiv P_{\alpha\alpha}^{\rm damped} -
P_{\alpha\alpha}^{\rm undamped} =
(\kappa-1) s^2 [(1+\kappa)s^2 + 2 c^2 \cos(2\Delta)].$$ Thus, this survival probability will actually increase if $$\label{equ:decincr}
- 2 \cos(2\Delta) > (1+\kappa)\tan^2(\theta).$$ Note that for the first part of the neutrino propagation (for $L < \pi
E/\Delta m^2$), the term $\cos(2\Delta)$ is positive, and thus, the inequality of [[Eq.]{} (\[equ:decincr\])]{} cannot be satisfied in this region, since the right-hand side is always positive. From the comparison with the discussion after [[Eq.]{} (\[equ:faketheta13\])]{}, this condition is equivalent to $E>2 E_{\mathrm{max}}$. For example, for a neutrino factory, which can be operated far away from the oscillation maximum, this implies that the relevant part of the spectrum will be suppressed by this form of damping. For the neutrino oscillation probability difference $\Delta
P_{\alpha\beta}$, we obtain $$\Delta P_{\alpha\beta} = \frac 14 \sin^2(2\theta)(\kappa-1)
[1+\kappa - 2\cos(2\Delta)],$$ that is, the damped $P_{\alpha\beta}$ is larger than the undamped $P_{\alpha\beta}$ if $$\label{equ:decincr2}
2\cos(2\Delta) > 1 + \kappa.$$ Note that if $\tan(\theta) = 1$, then [Eqs.]{} (\[equ:decincr\]) and (\[equ:decincr2\]) will have the same form except for the sign of the left-hand side.
In [Fig.]{} \[fig:illustration\], the qualitative effects of neutrino wave packet decoherence and neutrino decay on the neutrino survival probability are shown.
![[\[fig:illustration\] The qualitative effect of different damping signatures on the two-flavor neutrino survival probability as a function of the oscillation phase $\Delta$. The mixing used in this plot is maximal ($\theta = \pi/4$) and the damping parameters have been highly exaggerated. The scenario “Oscillation + decay I” corresponds to decay of both mass eigenstates with equal rates, whereas “Oscillation + decay II” corresponds to the second mass eigenstate decaying while the first mass eigenstate is stable.]{}](illustration.eps){width="10cm"}
From this figure, we clearly see how the wave packet decoherence simply corresponds to a damping of the oscillating term and the decay of all mass eigenstates corresponds to an overall damping of the undamped neutrino survival probability. For the case of only one decaying mass eigenstate, the probability converges towards the square of the content of the stable mass eigenstate in the initial neutrino flavor eigenstate.
Three-flavor electron-muon neutrino transitions
-----------------------------------------------
For a fixed neutrino oscillation channel, the damped neutrino oscillation probability [[Eq.]{} (\[equ:damping\])]{} can be written more explicitly in terms of the mixing parameters and the mass squared differences. Below, we will use the standard notation for the leptonic mixing angles, [[*i.e.*]{}]{}, $s_{ij} = \sin(\theta_{ij})$ and $c_{ij} =
\cos(\theta_{ij})$. Then, for example, the $\nu_e$ survival probability $P_{ee}$ is given by $$\begin{aligned}
P_{ee} &=&
c_{13}^4\left[
D_{11}c_{12}^4 + D_{22} s_{12}^4 +
\frac{1}{2} D_{21}\sin^2(2\theta_{12}) \cos(2\Delta_{21})\right] \nonumber \\
&&+ \frac{1}{2} {\sin^2(2 \theta_{13})}[D_{31}c_{12}^2 \cos(2\Delta_{31}) + D_{32} s_{12}^2 \cos(2\Delta_{32})]
+ D_{33} s_{13}^4,
\label{equ:Pee}\end{aligned}$$ which is dependent on all neutrino oscillation parameters except for $\theta_{23}$ and $\delta_{CP}$, while the probability $P_{e\mu}$ of oscillations into $\nu_\mu$ is given by $$\begin{aligned}
P_{e\mu} &=& \frac{1}{4} \sin^2(2\theta_{12})
c_{23}^2[(D_{11}+D_{22})-2D_{21}\cos(2\Delta_{21})] \nonumber \\ && +
\frac{1}{2}\sin(2\theta_{12})\sin(2\theta_{23})
\{c_\delta[D_{11}c_{12}^2-D_{22}s_{12}^2 - D_{21}
\cos(2\theta_{12})\cos(2\Delta_{21})] \nonumber \\ && -D_{21}s_\delta
\sin(2\Delta_{21}) +D_{32}\cos(2\Delta_{32} - \delta_{CP}) - D_{31}
\cos(2\Delta_{31} - \delta_{CP})\} \, s_{13} \nonumber \\
&&
+s_{23}^2[D_{11} c_{12}^4 + D_{22} s_{12}^4
+ D_{33} - 2D_{31} s_{12}^2 \cos(2\Delta_{31}) -
2 D_{32} c_{12}^2 \cos(2\Delta_{32})] \, s_{13}^2 \nonumber \\
&& + \frac{1}{4} \sin^2(2\theta_{12}) [2 D_{21} \cos(2\Delta_{21})-
c_{23}^2(D_{11}+D_{22})] \, s_{13}^2 + \mathcal O(s_{13}^3),
\label{equ:Pemu}\end{aligned}$$ where $s_\delta \equiv \sin(\delta_{CP})$ and $c_\delta \equiv
\cos(\delta_{CP})$. Furthermore, the $\nu_\mu$ survival probability can be computed to be of the form $$\begin{aligned}
P_{\mu\mu} &=&
\frac{1}{2} \sin^2(2\theta_{23})[D_{32}c_{12}^2 \cos(2\Delta_{32})
+D_{31}s_{12}^2 \cos(2\Delta_{31})] \nonumber \\
&&+c_{23}^4 \left[D_{11}s_{12}^4 + D_{22}c_{12}^4 + \frac{1}{2} D_{21}
\sin^2(2\theta_{12}) \cos(2\Delta_{21})\right] + D_{33}s_{23}^4
\nonumber \\
&&+ c_\delta \sin(2\theta_{12})\sin(2\theta_{23})\left\{
c_{23}^2\left[
D_{11}s_{12}^2-D_{22}c_{12}^2+
{D_{21}}\cos(2\theta_{12})\cos(2\Delta_{21})
\right]\right. \nonumber \\
&&
\left.
+ s_{23}^2 [D_{31}\cos(2\Delta_{31}) - D_{32}\cos(2\Delta_{32})]
\right\} \, s_{13}
+\mathcal O(s_{13}^2).
\label{equ:Pmumu}\end{aligned}$$ Note that the probabilities $P_{e\mu}$ and $P_{\mu\mu}$ are series expansions in $s_{13}$, whereas the probability $P_{ee}$ is valid to all orders in $s_{13}$. The reason to use these expressions rather than the exact expressions is that, unless some further assumptions are made, the formulas for $P_{e\mu}$ and $P_{\mu\mu}$ are quite cumbersome.
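Since [[Eq.]{} (\[equ:Pee\])]{} is exact in $s_{13}$, it can be checked directly against the standard unitary expression when all damping factors are set to one. A minimal sketch (parameter values illustrative):

```python
import math

def pee_damped(th12, th13, D, d21, d31):
    """Nu_e survival probability of Eq. (Pee); D is a symmetric 3x3 matrix
    of damping factors D[i][j], d21/d31 the oscillation phases Delta_ij
    (with d32 = d31 - d21)."""
    s12, c12 = math.sin(th12), math.cos(th12)
    s13, c13 = math.sin(th13), math.cos(th13)
    d32 = d31 - d21
    return (c13**4 * (D[0][0] * c12**4 + D[1][1] * s12**4
                      + 0.5 * D[1][0] * math.sin(2 * th12) ** 2 * math.cos(2 * d21))
            + 0.5 * math.sin(2 * th13) ** 2 * (D[2][0] * c12**2 * math.cos(2 * d31)
                                               + D[2][1] * s12**2 * math.cos(2 * d32))
            + D[2][2] * s13**4)

# Cross-check: with all D_ij = 1 the standard unitary result is recovered,
# P_ee = 1 - 4 sum_{i>j} |U_ei|^2 |U_ej|^2 sin^2(d_ij).
th12, th13, d21, d31 = 0.58, 0.15, 0.4, 1.3  # illustrative values
ones = [[1.0] * 3 for _ in range(3)]
Ue2 = [math.cos(th13) ** 2 * math.cos(th12) ** 2,
       math.cos(th13) ** 2 * math.sin(th12) ** 2,
       math.sin(th13) ** 2]
std = 1.0 - 4.0 * (Ue2[1] * Ue2[0] * math.sin(d21) ** 2
                   + Ue2[2] * Ue2[0] * math.sin(d31) ** 2
                   + Ue2[2] * Ue2[1] * math.sin(d31 - d21) ** 2)
assert abs(pee_damped(th12, th13, ones, d21, d31) - std) < 1e-12
```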
The probability $P_{\mu e}$ can be obtained by making the transformation $\delta_{CP} \rightarrow -\delta_{CP}$ in the probability $P_{e\mu}$, [[*i.e.*]{}]{}, $P_{\mu e} = P_{e\mu}(\delta_{CP}
\rightarrow -\delta_{CP})$. Furthermore, in vacuum, the anti-neutrino oscillation probabilities can be obtained from the neutrino oscillation probabilities through the same transformation as above. Note that this is not true for neutrinos propagating in matter.
Probabilities for decoherence-like effects in experiments
---------------------------------------------------------
For a decoherence-like damping effect, $D_{ii} = 1$ for all $i$ and the relations $$\label{equ:probconservation}
\sum_{\alpha = e,\mu,\tau} P_{\alpha\beta} = 1 \qquad {\rm and}
\qquad
\sum_{\beta = e,\mu,\tau} P_{\alpha\beta} = 1$$ are still valid despite the presence of damping factors ([[*i.e.*]{}]{}, no neutrinos are lost due to effects such as invisible decay, absorption, [[*etc.*]{}]{}). Note that, in the case of a decoherence-like damping effect, all neutrino oscillation probabilities can be constructed from $P_{ee}$, $P_{e\mu}$, and $P_{\mu\mu}$ due to the conservation of total probability given in [[Eq.]{} (\[equ:probconservation\])]{}.
It is interesting to observe what effect a decoherence-like damping could have on the neutrino oscillation probabilities for different experiments. Therefore, we will now study different kinds of neutrino oscillation experiments and make different approximations depending on the type of experiment to investigate what the main damping effects are.
### Short-baseline reactor experiments {#sec:sblreactor .unnumbered}
Short-baseline experiments, such as CHOOZ [@Apollonio:1999ae; @Apollonio:2002gd] and Double-CHOOZ [@Ardellier:2004ui], are operated at the atmospheric oscillation maximum $\Delta_{31} \simeq \Delta_{32} = \mathcal{O}(1)$ in order to be sensitive to ${\sin^2(2 \theta_{13})}$. The most interesting quantity is the $\bar{\nu}_e$ survival probability $P_{\bar{e}\bar{e}}$. For these experiments, it turns out (see [Sec.]{} \[sec:appl1\]) that it is important to keep all damping factors. As a result, the $\bar \nu_e$ survival probability is given by $$\begin{aligned}
P_{\bar{e}\bar{e}} &=&
c_{13}^4
\left\{1-\frac{1}{2}\sin^2(2\theta_{12})[1-D_{21}\cos(2\Delta_{21})]\right\}
\nonumber \\
&&
\label{equ:sblPee}
+ \frac{1}{2}\sin^2(2\theta_{13})
[D_{31}c_{12}^2 \cos(2\Delta_{31}) + D_{32} s_{12}^2 \cos(2\Delta_{32})]
+s_{13}^4.\end{aligned}$$ The most apparent feature of this equation is the term within the curly brackets, which has the form of the survival probability for a two-flavor neutrino damping scenario with $\theta = \theta_{12}$ and $\Delta = \Delta_{21}$. Therefore, even in the limit $\theta_{13}
\rightarrow 0$ \[close to the ${\sin^2(2 \theta_{13})}$ sensitivity limit\], the damping factor $D_{21}$ might be constrained by the contribution of the solar oscillation at low energies. If, furthermore, $\Delta_{21} \rightarrow 0$ (or $\theta_{13}$ is large), $D_{21}$ is close to unity \[[[*cf.*]{}]{}, [[Eq.]{} (\[equ:coherence2\])]{}\] and $D_{31} \simeq D_{32}$ (as could be expected for $\Delta_{21}/\Delta_{31} \rightarrow 0$), then this expression exactly mimics the two-flavor neutrino damping scenario with $\theta = \theta_{13}$ and $\Delta = \Delta_{31} = \Delta_{32}$. Thus, depending on which small number (the ratio of the mass squared differences or $s_{13}$) is the largest, two different two-flavor neutrino scenarios are obtained, as expected from the non-damped case. If $\theta_{13}$ is relatively large (compared to the ratio of the mass squared differences), then the latter two-flavor case applies. It is then interesting to note that the damping factor $D_{31}$, the neutrino source energy spectrum, and the cross sections all have some energy dependence, which means that they can “emphasize” certain regions of the energy spectrum which are most sensitive to damping effects. If we assume that the total impact is strongest close to the oscillation maximum, then the damping effect will be misinterpreted as a smaller value of ${\sin^2(2 \theta_{13})}$ \[[[*cf.*]{}]{}, [[Eq.]{} (\[equ:faketheta13\])]{}; in both cases the survival probability moves closer to unity\]. Therefore, as we will demonstrate, any such damping can fake a value of ${\sin^2(2 \theta_{13})}$ that is smaller than the one provided by Nature.
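[[Eq.]{} (\[equ:sblPee\])]{} can be put into code directly; in the limit $\Delta_{21}\to 0$ (with $D_{21}=1$) and $D_{31}=D_{32}=D$, it reduces exactly to the two-flavor damped form with $\theta=\theta_{13}$, as described above. A minimal sketch (parameter values illustrative):

```python
import math

def pee_sbl(th12, th13, d21, d31, d32, D21=1.0, D31=1.0, D32=1.0):
    """Reactor anti-nu_e survival probability of Eq. (sblPee),
    decoherence-like case (all D_ii = 1)."""
    s12, c12 = math.sin(th12), math.cos(th12)
    s13, c13 = math.sin(th13), math.cos(th13)
    solar = 1.0 - 0.5 * math.sin(2 * th12) ** 2 * (1.0 - D21 * math.cos(2 * d21))
    atmos = 0.5 * math.sin(2 * th13) ** 2 * (D31 * c12**2 * math.cos(2 * d31)
                                             + D32 * s12**2 * math.cos(2 * d32))
    return c13**4 * solar + atmos + s13**4

# In the limit d21 -> 0 and D31 = D32 = D, the two-flavor damped form
# with theta = theta_13 is recovered exactly:
th12, th13, d, D = 0.58, 0.15, 1.2, 0.8  # illustrative values
P = pee_sbl(th12, th13, 0.0, d, d, D31=D, D32=D)
P2f = 1.0 - 0.5 * math.sin(2 * th13) ** 2 * (1.0 - D * math.cos(2 * d))
assert abs(P - P2f) < 1e-12
```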
Note that, for the case of wave packet decoherence, $D_{21}$, $D_{32}$, and $D_{31}$ are not independent \[[[*cf.*]{}]{}, [[Eq.]{} (\[equ:coherence2\])]{}\], which means that any of the terms in [[Eq.]{} (\[equ:sblPee\])]{} could lead to information on the parameter $\sigma_E$.
### Long-baseline reactor experiments {#long-baseline-reactor-experiments .unnumbered}
For long-baseline reactor experiments operated at the solar oscillation maximum $\Delta_{21} = \mathcal{O}(1)$, such as the KamLAND experiment [@Eguchi:2002dm; @Araki:2004mb], the damping factors $D_{31}$ and $D_{32}$ of a decoherence-like scenario with $\xi
> 0$ are small, since the large mass squared difference makes the argument of the exponential functions in [[Eq.]{} (\[equ:dfactor\])]{} large and negative. In addition, these two damping factors are attached to neutrino oscillations associated with the large phases $\Delta_{31}$ and $\Delta_{32}$ \[see [Eqs.]{} (\[equ:Pee\])-(\[equ:Pmumu\])\], which effectively average out. As a result of these two effects, the oscillating terms involving the third mass eigenstate can be safely set to zero. After some simplifications, the $\bar{\nu}_e$ survival probability $P_{\bar{e}\bar{e}}$ is found to be $$P_{\bar{e}\bar{e}}
=
c_{13}^4
\left\{1-\frac{1}{2}\sin^2(2\theta_{12})[1-D_{21}\cos(2\Delta_{21})]\right\}
+s_{13}^4. \label{equ:lblPee}$$ This expression is clearly of the familiar form $P_{\bar{e}\bar{e}} =
c_{13}^4 P_{\bar{e}\bar{e}}^{\rm 2f} + s_{13}^4$, where $P_{\bar{e}\bar{e}}^{\rm 2f}$ is the damped two-flavor $\bar\nu_e$ survival probability with $\theta = \theta_{12}$ and $\Delta =
\Delta_{21}$, which is also obtained in the non-damped case when averaging over the fast oscillations \[[[*cf.*]{}]{} [[Eq.]{} (\[equ:sblPee\])]{}\]. For the case of wave packet decoherence, we know from [[Eq.]{} (\[equ:coherence2\])]{} that the parameter $\sigma_E$ could be constrained by either of these two equations. Since this parameter is experiment dependent, one could argue that one should obtain some limits from the KamLAND experiment, because the reactor experiments are very similar in source and detector (see, [[*e.g.*]{}]{}, [Ref.]{} [@Schwetz:2003se]). However, it should be noted that KamLAND has a rather weak precision on the corresponding $\theta_{12}$ measurement because of normalization uncertainties. Since a decoherence contribution would appear at low energies, the data set in [Ref.]{} [@Araki:2004mb] does not seem to be very restrictive for the parameter $\sigma_E$.
### Beam experiments {#beam-experiments .unnumbered}
For beam experiments, such as superbeams, beta-beams or neutrino factories, one may assume $\Delta_{21} \simeq 0$ as a first approximation if one wants to be sensitive to ${\sin^2(2 \theta_{13})}$, since, at the energies and baseline lengths involved, the low-frequency neutrino oscillations do not have enough time to evolve. In the case of $\xi
> 0$, this also implies that $D_{21} = 1$ and $D \equiv D_{32} =
D_{31}$ to a good approximation. From these assumptions, it follows that $$\begin{aligned}
P_{e\mu} &=& 2 s_{23}^2 [1 - D \cos(2\Delta)] \, s_{13}^2 + \mathcal O(s_{13}^3), \\
P_{\mu\mu} &=& 1 - \frac{1}{2}\sin^2(2\theta_{23}) [1 - D \cos(2\Delta)] +
\mathcal O(s_{13}^2),
\label{equ:pmumudamped}\end{aligned}$$ where $\Delta \equiv \Delta_{32} = \Delta_{31}$. Note that the probability $P_{e\mu}$ is correct up to $\mathcal O(s_{13}^3)$ \[as compared with [[Eq.]{} (\[equ:Pemu\])]{}, which is only valid up to $\mathcal
O(s_{13}^2)$\]; this is one of the cases where the assumptions made simplify the $s_{13}^2$ term in this probability. Both of the above equations show obvious similarities with the cases of damped two-flavor neutrino oscillations. For $P_{e\mu}$ we have an approximate two-flavor neutrino scenario with $s^2c^2 = s_{23}^2
s_{13}^2$ and $P_{\mu\mu}$ is a pure two-flavor neutrino formula with $\theta = \theta_{23}$ up to the corrections of order $s_{13}^2$. Since the disappearance channel $P_{\mu\mu}$ at a beam experiment is supposed to have extremely good statistics, $D$ will be strongly constrained by this channel. Note that the damping in $P_{\mu\mu}$ qualitatively behaves as the one in [[Eq.]{} (\[equ:faketheta13\])]{}, [[*i.e.*]{}]{}, the damped probability might be larger or smaller than the undamped probability depending on the position relative to the oscillation maximum $E_{\mathrm{max}}$.
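The statement that the disappearance channel constrains $D$ can be made quantitative from the two leading-order expressions above: the damping-induced shift is $\mathcal O(1)$ in $P_{\mu\mu}$ but suppressed by $s_{13}^2$ in $P_{e\mu}$. A minimal sketch (parameter values illustrative):

```python
import math

def pemu(th23, th13, Delta, D=1.0):
    """Leading-order appearance probability 2 s23^2 s13^2 [1 - D cos(2 Delta)]."""
    return 2.0 * math.sin(th23) ** 2 * math.sin(th13) ** 2 * (1.0 - D * math.cos(2 * Delta))

def pmumu(th23, Delta, D=1.0):
    """Leading-order disappearance probability 1 - 0.5 sin^2(2 th23) [1 - D cos(2 Delta)]."""
    return 1.0 - 0.5 * math.sin(2 * th23) ** 2 * (1.0 - D * math.cos(2 * Delta))

th23, th13, Delta, D = math.pi / 4, 0.1, 1.5, 0.8  # illustrative values

# The damping-induced shift is O(1) in disappearance but suppressed by
# s13^2 in appearance, so the disappearance channel constrains D:
shift_dis = abs(pmumu(th23, Delta, D) - pmumu(th23, Delta))
shift_app = abs(pemu(th23, th13, Delta, D) - pemu(th23, th13, Delta))
assert shift_dis > 10.0 * shift_app
```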
Probabilities for decay-like effects in experiments
---------------------------------------------------
If $\xi = 0$ and $\alpha_{ii} \neq 0$, then $D_{ii} \neq 1$ and [[Eq.]{} (\[equ:probconservation\])]{} will not hold. We define any effect of this kind to be “probability violating”. As mentioned in the two-flavor neutrino discussion, a very interesting special case of the probability violating effects is the case of a decay-like effect. The neutrino oscillation probabilities for decay-like effects corresponding to the ones given for decoherence-like effects are listed below.
### Short-baseline reactor experiments {#short-baseline-reactor-experiments .unnumbered}
For the short-baseline reactor experiments, we obtain the $\bar\nu_e$ survival probability as $$\begin{aligned}
P_{\bar e\bar e} &=&
c_{13}^4 \left\{
(A_1 c_{12}^2 + A_2 s_{12}^2)^2 -
A_1 A_2 \sin^2(2\theta_{12})\sin^2(\Delta_{21})
\right\} \nonumber \\
&&
+A_3 s_{13}^2\{A_3 s_{13}^2 +
2 c_{13}^2[A_1 c_{12}^2 \cos(2\Delta_{31})+A_2s_{12}^2\cos(2\Delta_{32})]\}.\end{aligned}$$ Again, as in the case of decoherence-like damping, the expression within the curly brackets is of a two-flavor form with $\theta = \theta_{12}$ and $\Delta = \Delta_{12}$. In the limit when ${\sin^2(2 \theta_{13})}$ is large and we ignore the solar oscillations, we obtain the two-flavor neutrino scenario $$P_{\bar e\bar e} = A^2\left\{
(c_{13}^2+\kappa s_{13}^2)^2 - \kappa {\sin^2(2 \theta_{13})}\sin^2(\Delta)
\right\}$$ only if we assume that $A_1 = A_2 = A$, where $\Delta = \Delta_{31} =
\Delta_{32}$ and $\kappa = A_3/A$.
### Long-baseline reactor experiments {#long-baseline-reactor-experiments-1 .unnumbered}
Assuming that the fast neutrino oscillations average out, the $\bar\nu_e$ survival probability is given by $$P_{\bar e\bar e} = c_{13}^4 P_{\bar e\bar e}^{\rm 2f} + A_3^2 s_{13}^4,$$ where $P_{\bar e\bar e}^{\rm 2f}$ is the two-flavor decay-like $\bar\nu_e$ survival probability with $\theta = \theta_{12}$ and $\Delta = \Delta_{21}$ \[[[*cf.*]{}]{}, [Eq.]{} (\[equ:lblPee\])\]. In this expression, the $s_{13}^4$ term is also damped, in contrast to the decoherence-like scenario, where it is not.
### Beam experiments {#beam-experiments-1 .unnumbered}
When the assumptions $\Delta_{21} \simeq 0$ and $A = A_1 = A_2$ (which could be expected in a decay scenario where $m_1 = m_2$) are made, the neutrino oscillation probabilities that are relevant for beam experiments become $$\begin{aligned}
P_{e\mu}
&=&
A^2 s_{23}^2 [1 + \kappa^2 - 2\kappa \cos(2\Delta)] \, s_{13}^2
+ \mathcal O(s_{13}^3), \\
P_{\mu\mu}
&=&
A^2\left[(c_{23}^2 + \kappa s_{23}^2)^2 -
\kappa\sin^2(2\theta_{23}) \sin^2(\Delta)\right] + \mathcal O(s_{13}^2),\end{aligned}$$ where $\kappa \equiv A_3/A$ and $\Delta \equiv \Delta_{32} =
\Delta_{31}$. These probabilities mimic decay-like two-flavor probabilities just as the corresponding decoherence-like effects mimic decoherence-like two-flavor probabilities to leading order in $s_{13}$.
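The two damping classes act on the disappearance probability in qualitatively different ways, which is what makes them distinguishable in principle: decoherence-like damping suppresses only the oscillatory term, whereas equal-rate decay rescales the whole probability by $A^2$. A minimal numerical sketch (parameter values illustrative):

```python
import math

def pmumu_decoh(th23, Delta, D):
    """Decoherence-like: only the oscillatory term is damped."""
    return 1.0 - 0.5 * math.sin(2 * th23) ** 2 * (1.0 - D * math.cos(2 * Delta))

def pmumu_decay(th23, Delta, A, kappa):
    """Decay-like (Delta_21 ~ 0, A1 = A2 = A): cf. the expression above."""
    s2, c2 = math.sin(th23) ** 2, math.cos(th23) ** 2
    return A**2 * ((c2 + kappa * s2) ** 2
                   - kappa * math.sin(2 * th23) ** 2 * math.sin(Delta) ** 2)

th23, Delta = math.pi / 4, 1.0  # illustrative values

# Maximal decoherence (D -> 0) merely averages out the oscillation ...
assert abs(pmumu_decoh(th23, Delta, 0.0)
           - (1.0 - 0.5 * math.sin(2 * th23) ** 2)) < 1e-12
# ... while equal-rate decay (kappa = 1) rescales the whole undamped
# probability by A^2, so the two signatures differ in principle:
A = 0.9
assert abs(pmumu_decay(th23, Delta, A, 1.0)
           - A**2 * pmumu_decoh(th23, Delta, 1.0)) < 1e-12
```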
Application I: Faking a small $\boldsymbol{{\sin^2(2 \theta_{13})}}$ at reactor experiments by decoherence-like effects {#sec:appl1}
=======================================================================================================================
In this section, we demonstrate the possible effects of damping with a simple example, using a full numerical simulation. Let us only consider the case of intrinsic wave packet decoherence, which is particularly interesting because it is a “standard” effect in any realistic neutrino oscillation treatment. However, similar effects could arise from related signatures, such as quantum decoherence. In principle, one could consider all classes of experiments in order to investigate decoherence signals. New reactor experiments with near and far detectors [@Minakata:2002jv; @Huber:2003pm] are candidates for “clean” measurements of ${\sin^2(2 \theta_{13})}$, [[*i.e.*]{}]{}, they are specifically designed to search for a ${\sin^2(2 \theta_{13})}$ signal. As we have discussed in [Sec.]{} \[sec:sblreactor\], an interesting decoherence-like effect at such an experiment would be a derived value of ${\sin^2(2 \theta_{13})}$ which is smaller than the value provided by Nature. In this case, the CHOOZ bound might actually be too strong and the interpretation of new reactor experiments might be wrong.
If we assume that there is an intrinsic loss of coherence, then the reactor $\bar \nu_e$ survival probability $P_{\bar{e} \bar{e}}$ will be given by [[Eq.]{} (\[equ:sblPee\])]{}. In order to illustrate the decoherence effect, we show in [[Fig.]{} \[fig:reactorprobs\]]{} $P_{\bar e \bar e}$ and the corresponding event rates for the experiment [[Reactor-I]{}]{} from [Ref.]{} [@Huber:2003pm] (full analysis range shown).
![[\[fig:reactorprobs\] The neutrino oscillation probability $P_{\bar{e} \bar{e}}$ (left) and event rates (right) for the experiment [[Reactor-I]{}]{} from [Ref.]{} [@Huber:2003pm] in the analysis range. For the simulated parameter values, we use ${\Delta m_{31}^2}= 2.5
\cdot 10^{-3} \, \mathrm{eV}^2$, ${\Delta m_{21}^2}= 8.2 \cdot 10^{-5} \,
\mathrm{eV}^2$, $\sin^2 2 \theta_{12}=0.83$, $\sin^2 2 \theta_{23}=1$, $\delta_{CP}=0$ [@Fogli:2003th; @Bahcall:2004ut; @Bandyopadhyay:2004da; @Maltoni:2004ei] and the values for ${\sin^2(2 \theta_{13})}$ and $\sigma_E$ as given in the left plot.]{}](probsrates){width="\textwidth"}
The different curves correspond to the non-oscillatory case as well as different combinations of ${\sin^2(2 \theta_{13})}$ and $\sigma_E$. As one can observe, the two cases ${\sin^2(2 \theta_{13})}$ large and decoherence \[${\sin^2(2 \theta_{13})}=0.05$ and $\sigma_E=2 \, \mathrm{MeV}$\] and ${\sin^2(2 \theta_{13})}$ small and no decoherence \[${\sin^2(2 \theta_{13})}=0.03$ and $\sigma_E=0$\] agree very well with each other, especially in the event rate plot \[as compared to the other two cases of no oscillations and large ${\sin^2(2 \theta_{13})}$ only\]. This means that the decoherence effect can mimic a smaller value of ${\sin^2(2 \theta_{13})}$ than what is provided by Nature. Note that in the probability plot, there is a significant contribution from loss of coherence in the solar terms for low energies. As we will see later, this contribution can limit the decoherence effects even for no ${\sin^2(2 \theta_{13})}$ signal. In addition, the damped neutrino oscillation probability is larger than the undamped one in the range discussed after [[Eq.]{} (\[equ:faketheta13\])]{}, where the oscillation maximum is here at about $E_{\mathrm{max}} \simeq 3.4 \, \mathrm{MeV}$.
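As a toy numerical illustration of this mimicking (a grid-scan least-squares fit, not the full GLoBES analysis used for the figures), one can fit an undamped two-flavor model to damped rates. The Gaussian damping factor $D = \exp\{-[\Delta\,\sigma_E/(\sqrt{2}\,E)]^2\}$ and the $1.7 \, \mathrm{km}$ baseline are illustrative assumptions, chosen only so that the oscillation maximum falls near the quoted $E_{\mathrm{max}} \simeq 3.4 \, \mathrm{MeV}$:

```python
import math

DM31, L = 2.5e-3, 1.7   # eV^2 and km (baseline chosen so E_max ~ 3.4 MeV)

def phase(E_MeV):
    """Oscillation phase Delta = 1.267 dm2[eV^2] L[km] / E[GeV], E in MeV."""
    return 1.267 * DM31 * L / (E_MeV * 1e-3)

def p_reactor(E, s13sq, sigmaE=0.0):
    """Simplified two-flavor anti-nu_e survival with a toy damping factor."""
    d = phase(E)
    D = math.exp(-((d * sigmaE) / (math.sqrt(2.0) * E)) ** 2)
    return 1.0 - 0.5 * s13sq * (1.0 - D * math.cos(2.0 * d))

energies = [2.0 + 0.25 * i for i in range(25)]             # 2-8 MeV
data = [p_reactor(E, 0.05, sigmaE=2.0) for E in energies]  # "true": damped

# Fit an *undamped* model to the damped rates by a simple grid scan:
best = min((sum((p_reactor(E, s) - dat) ** 2
                for E, dat in zip(energies, data)), s)
           for s in [0.0005 * k for k in range(201)])[1]

# The fit prefers a smaller sin^2(2 theta_13) than the true value 0.05:
assert 0.0 < best < 0.05
```

The fitted amplitude comes out below the true value because the damping mainly fills in the oscillation dip near $E_{\mathrm{max}}$, exactly the behavior discussed after [[Eq.]{} (\[equ:faketheta13\])]{}.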
![[\[fig:reactorcorr\] Simultaneous sensitivity to ${\sin^2(2 \theta_{13})}$ and $\sigma_E$ for the experiments [[Reactor-I]{}]{} (left) and [[Reactor-II]{}]{} (right) from [Ref.]{} [@Huber:2003pm] (curves shown for 1 d.o.f.). For the simulated parameter values, we use ${\sin^2(2 \theta_{13})}=0$, $\sigma_E=0$, and the other values as in [[Fig.]{} \[fig:reactorprobs\]]{}. For the thick solid curves, the unshown fit parameter values are marginalized over, where post-KamLAND external precisions of 10% for $\theta_{12}$ [@Gonzalez-Garcia:2001zy; @Barger:2000hy] are imposed, along with an external error of 10%, as is also assumed for superbeams. For the thin dashed curves, the unshown fit parameter values are fixed (no correlations). For the numerical analysis, an extended version of the GLoBES software [@Huber:2004ka] is used. The arrows indicate the shift of the ${\sin^2(2 \theta_{13})}$ sensitivity limit if one assumes $\sigma_E$ as a free parameter.]{}](reactorcorr){width="\textwidth"}
In order to illustrate the effect for a complete analysis, we show in [[Fig.]{} \[fig:reactorcorr\]]{} the simultaneous sensitivity to ${\sin^2(2 \theta_{13})}$ and $\sigma_E$ for [[Reactor-I]{}]{} ($\mathcal{L} = 400 \, \mathrm{t \, GW \,
yr}$) and [[Reactor-II]{}]{} ($\mathcal{L} = 8 \, 000 \, \mathrm{t \, GW \,
yr}$) from [Ref.]{} [@Huber:2003pm] (1 d.o.f.) using an extended version of the GLoBES software [@Huber:2004ka]. In this figure, $\sigma_E$ is assumed to be a free (fit) parameter that has to be measured by the experiment. Therefore, without additional knowledge, the ${\sin^2(2 \theta_{13})}$ sensitivity limit is obtained as a projection of the curves onto the ${\sin^2(2 \theta_{13})}$-axis. Since the ${\sin^2(2 \theta_{13})}$ sensitivity limit for no decoherence effects is the one for $\sigma_E=0$, the arrows indicate the shift of this limit by the unknown $\sigma_E$. This means, for example, that the sensitivity limit becomes about 50 % to 100 % worse than that for the actual $\sigma_E \equiv 0$, since the decoherence mimics a smaller value of ${\sin^2(2 \theta_{13})}$ than what is provided by Nature. Similar results to the left plot are obtained for the proposed Double-CHOOZ experiment [@Ardellier:2004ui]. Note that the correlation between ${\sin^2(2 \theta_{13})}$ and $\sigma_E$ affects the ${\sin^2(2 \theta_{13})}$ sensitivity (projection onto the horizontal axis), but not the $\sigma_E$ sensitivity (projection onto the vertical axis). The latter is correlated with the other neutrino oscillation parameters (especially the solar parameters), as one can read off from the difference between the solid and dashed curves. For the $\sigma_E$ sensitivity, one obtains $\sigma_E \lesssim 10 \, \mathrm{MeV}$ ([[Reactor-I]{}]{}) and $\sigma_E \lesssim 5 \, \mathrm{MeV}$ ([[Reactor-II]{}]{}) at the $3 \sigma$ confidence level. As one can observe from the left plot of [[Fig.]{} \[fig:reactorprobs\]]{}, there is some contribution of the solar oscillation averaging to the decoherence effect at low energies. In fact, this is the reason why one can constrain $\sigma_E$ even for ${\sin^2(2 \theta_{13})}\equiv 0$, since in the decoherence effect, the atmospheric oscillations are suppressed by the oscillation amplitude ${\sin^2(2 \theta_{13})}$. 
Obviously, this solar decoherence effect determines the upper bound for $\sigma_E$, which means that the $\sigma_E$ sensitivity is limited by the knowledge on the solar oscillation parameters \[[[*cf.*]{}]{}, [[Eq.]{} (\[equ:sblPee\])]{}\].
As we have discussed in [Sec.]{} \[sec:phenomenology\], $\sigma_E$ might be an experiment dependent parameter related to the production and detection processes. Instead of deriving bounds for this parameter from reactor experiments, one can estimate from [[Fig.]{} \[fig:reactorcorr\]]{} that one has to constrain $\sigma_E$ better than to about $\sigma_E
\lesssim 0.5 \, \mathrm{MeV}$ in order to avoid a significant deterioration of the ${\sin^2(2 \theta_{13})}$ sensitivity limit. In addition, in order to exclude an experiment-dependent effect, it is highly advisable to measure the same quantity with different techniques, such as ${\sin^2(2 \theta_{13})}$ with both reactor experiments and superbeams.
Application II: Testing and disentangling damping signatures at neutrino factories {#sec:appl2}
==================================================================================
If we want to constrain the model parameters in [Table]{} \[tab:models\] and to test the different models against each other, then we will need a high-precision instrument that is sensitive to these tiny effects. Therefore, we investigate the potential of a neutrino factory. In particular, the muon neutrino disappearance channel $\nu_\mu
\rightarrow \nu_\mu$ at a neutrino factory has very good statistics and the impact of neutrino oscillation parameter correlations other than with ${\Delta m_{31}^2}$ and $\theta_{23}$ is very small. Thus, we will mainly focus on this disappearance channel, but include the appearance information in the full analysis and demonstrate how the value of ${\sin^2(2 \theta_{13})}$ would influence the effects. Since our exponential damping model is not directly comparable to other approaches in the literature, we put a major emphasis on the identification problem of a non-standard contribution: If we actually observe something unexpected, how well can we determine what sort of effect this actually is? In the simplest case, this means that we test a signature against the standard (no damping) scenario giving us limits for the parameters. Since it is almost impossible to include the correlations among all parameters, we choose to use $\alpha_{ij}=\alpha$ independent of $i$ and $j$ in this section in order to drastically reduce the number of parameters. This means that we now have to deal with eight correlated parameters (six neutrino oscillation parameters, the matter density, and the parameter $\alpha$). We have motivated this choice at the end of [Sec.]{} \[sec:gendescription\] and, for individual cases, in [Sec.]{} \[sec:examples\].
![[\[fig:allprobs\] Contributions of the first three different damping signatures from [Table]{} \[tab:models\] to the disappearance probability $P_{\mu \mu}$ as function of the neutrino energy. Here $L=3 \, 000 \, \mathrm{km}$ and the neutrino oscillation parameters as in [[Fig.]{} \[fig:reactorprobs\]]{} with ${\sin^2(2 \theta_{13})}=0$ are used. The parameters for the non-standard effects are given in the plots, where zero corresponds to the thick curves (oscillations only) and larger values correspond to curves further off the zero curve. The energy range corresponds to the analysis range of the $50 \, \mathrm{GeV}$ neutrino factory [[NuFact-II]{}]{} from [Ref.]{} [@Huber:2002mx].]{}](allprobs){width="\textwidth"}
Before we come to the results of a complete simulation, let us illustrate the spectral behavior (energy dependence) of the neutrino oscillation probability $P_{\mu \mu}$ in [[Fig.]{} \[fig:allprobs\]]{} for some characteristic examples. In [Sec.]{} \[sec:dampedprob\], we have already discussed that there are two generally interesting cases: Either only the oscillatory terms are damped or all terms are damped. In [[Fig.]{} \[fig:allprobs\]]{}, we can clearly identify this difference between the decoherence-like and the other two damping models (decay and oscillations). In all the shown cases (for which $\gamma>0$), the relative importance of the damping increases as the energy decreases. However, since the oscillation probability itself also drops towards lower energies, the absolute size of the effect is determined by the ratio of the damping signature to the probability at low energies. In addition, cross section and flux disfavor low energies, which means that the low-energy effects become even harder to identify. This makes the wave packet decoherence scenario most difficult to test, since the $E^{-4}$ dependence in the exponent strongly favors low energies. However, it might be most easily distinguished from the decay and oscillation damping scenarios because of its unique signature. As we have discussed after [[Eq.]{} (\[equ:faketheta13\])]{} \[which also holds for the similar [[Eq.]{} (\[equ:pmumudamped\])]{}\], it is a characteristic feature of decoherence-like signatures that they cross the undamped curve at $2E_{\mathrm{max}}/3$ and $2E_{\mathrm{max}}$, which here evaluate to $4 \, \mathrm{GeV}$ (outside of the analysis range) and $12 \, \mathrm{GeV}$. In [[Fig.]{} \[fig:allprobs\]]{} (left panel), this effect is hardly observable because of the $E^{-4}$ energy dependence, but the quantum decoherence motivated case “Quantum decoherence II” from [Table]{} \[tab:models\] clearly shows this behavior because of an $E^{-2}$ energy dependence.
As far as the other two signatures are concerned, the decay damping has a linear energy dependence in the exponent as opposed to the quadratic one for the oscillation damping scenario. Therefore, one has the strongest high-energy effect for the decay damping scenario.
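The quoted crossing points follow directly from the position of the first oscillation maximum, $\Delta m_{31}^2 L/(4 E_{\mathrm{max}}) = \pi/2$. A quick numerical cross-check (assuming the typical value $\Delta m_{31}^2 = 2.5 \cdot 10^{-3} \, \mathrm{eV}^2$; the exact inputs of the figure may differ slightly):

```python
import math

DM31 = 2.5e-3   # Delta m_31^2 in eV^2 (assumed typical value)
L = 3000.0      # baseline in km

# First oscillation maximum: 1.267 * dm^2[eV^2] * L[km] / E[GeV] = pi / 2
E_max = 1.267 * DM31 * L * 2.0 / math.pi
print(round(E_max, 1))            # 6.0 GeV
print(round(2.0 * E_max / 3, 1))  # 4.0 GeV -- below the analysis range
print(round(2.0 * E_max, 1))      # 12.1 GeV
```

These reproduce the $4 \, \mathrm{GeV}$ and $12 \, \mathrm{GeV}$ crossings quoted above.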
-------------------- ------------------------------------------- ------------------------------------------------------------ ----------------------------------------------------- ---------------------------------------------------------------- ---------------------------------------------------------------- ---------------------------------------------------------
Decoherence Decay Oscillations Absorption Q. decoh. I Q. decoh. II
Fit signature $\frac{\sigma_E}{\mathrm{GeV}}$ $\gtrsim$ $\frac{\alpha}{10^{-5} \, \mathrm{\frac{GeV}{km}}}\gtrsim$ $\frac{\epsilon}{10^{-7} \, \mathrm{eV}^4} \gtrsim$ $\frac{\alpha}{\frac{10^{-8}}{\mathrm{GeV \, km}}} \, \gtrsim$ $\frac{\alpha}{\frac{10^{-10}}{\mathrm{GeV^2 \, km}}} \gtrsim$ $\frac{\kappa}{\frac{10^{24}}{ \mathrm{eV}^2}} \gtrsim$
No damping   1.7 (2.8)   4.3 (7.2)   5.1 (8.3)   1.9 (3.1)   4.1 (6.8)   2.0 (3.6)
Decoherence - 4.3 (7.2) 5.1 (8.3) 1.9 (3.1) 4.1 (6.8) 2.0 (3.6)
Decay 1.7 (2.8) - 6.3 (10) 3.4 (5.7) 6.0 (10) 2.6 (5.1)
Oscillations 1.7 (2.8) 5.8 (9.8) - 1.9 (3.2) 4.1 (6.9) 13 (17)
Absorption 1.7 (2.8) 7.8 (13) 5.2 (8.5) - 24 (40) 2.1 (3.8)
Q. decoh. I 1.7 (2.8) 6.3 (11) 5.1 (8.3) 11 (19) - 2.1 (3.7)
Q. decoh. II 1.7 (2.8) 4.3 (7.2) 5.1 (8.3) 1.9 (3.1) 4.1 (6.8) -
All models 1.7 (2.8) 7.8 (13) 6.3 (10) 11 (19) 24 (40) 13 (17)
-------------------- ------------------------------------------- ------------------------------------------------------------ ----------------------------------------------------- ---------------------------------------------------------------- ---------------------------------------------------------------- ---------------------------------------------------------
: [\[tab:results\] Parameter sensitivity limits for which the simulated models (in columns) from [Table]{} \[tab:models\] could be distinguished from the fit models (in rows) at the $3 \sigma$ ($5
\sigma$) confidence level (for the experiment simulation [[NuFact-II]{}]{} from [Ref.]{} [@Huber:2002mx]). For example, decoherence could be established against all models (including standard oscillations) for the simulated $\sigma_E \gtrsim 1.7 \, \mathrm{GeV}$. For the simulated neutrino oscillation parameter values, we use the same values as in [[Fig.]{} \[fig:reactorprobs\]]{} and ${\sin^2(2 \theta_{13})}=0$, as indicated in the column captions. The fit parameter values (including the model parameter $\alpha$) are marginalized over. The row “No damping” corresponds to the standard neutrino oscillation scenario, [[*i.e.*]{}]{}, it gives the upper bounds on the parameters assuming that there is only one non-standard effect. The row “All models” corresponds to the most conservative case, [[*i.e.*]{}]{}, it is an estimate for how well one can establish the model against all of the other shown models.]{}
In order to test the different models against each other, we use a modified version of the GLoBES software [@Huber:2004ka] and the neutrino factory setup [[NuFact-II]{}]{} from [Ref.]{} [@Huber:2002mx]. This neutrino factory uses a $50 \, \mathrm{kt}$ magnetized iron detector, $4 \, \mathrm{years}$ of running time in each polarity, and $4 \,
\mathrm{MW}$ target power (corresponding to $5.3 \cdot 10^{20}$ useful muon decays per year). For a fixed set of simulated parameter values, including the simulated damping parameter $\alpha$, we marginalize over the fit neutrino oscillation parameters, including the fit damping parameter. Because of the complexity of the parameter space, we assume that the $\mathrm{sgn}({\Delta m_{31}^2})$-degeneracy has been resolved by this time. We define the sensitivity limit to $\alpha$ as the threshold above which the simulated damping model could be distinguished from the fit damping model. Thus, if the damping mechanism is really there, then the damping parameter $\alpha$ has to be above this threshold in order to establish the model against the fit model with the given experiment. In particular, we include the fit damping model “no damping”, which corresponds to the standard neutrino oscillation case. For the simulation, we impose external precisions of 10 % each on $\theta_{12}$ and ${\Delta m_{21}^2}$ [@Gonzalez-Garcia:2001zy; @Barger:2000hy]. In addition, we assume a constant matter density profile with a 5 % uncertainty, which takes into account matter density uncertainties as well as matter density profile effects [@Geller:2001ix; @Ohlsson:2003ip; @Pana]. However, we assume that the neutrino factory itself measures ${\Delta m_{31}^2}$ and $\theta_{23}$ with its disappearance channel, [[*i.e.*]{}]{}, we do not impose external precisions on these parameters.
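The logic of this sensitivity-limit definition can be sketched in a few lines of Python. This is a deliberately crude stand-in for the GLoBES analysis, with everything hypothetical: a single decay-like damping $\exp(-\alpha L/E)$ of all terms, an invented binning with a flat absolute error of $0.01$ per bin on the survival probability, and marginalization over only one amplitude parameter (a stand-in for the $\theta_{23}$ correlation) instead of the full eight-parameter space. The scan returns the smallest simulated $\alpha$ for which the best no-damping fit yields $\Delta\chi^2 \geq 9$, [[*i.e.*]{}]{}, $3 \sigma$ for 1 d.o.f.

```python
import math

DM31, L = 2.5e-3, 3000.0                       # assumed dm^2 [eV^2] and baseline [km]
ENERGIES = [4.0 + 2.0 * i for i in range(24)]  # hypothetical 4-50 GeV bin centers
SIGMA = 0.01                                   # assumed absolute error per bin

def p_mumu(E, amp, alpha=0.0):
    """nu_mu survival with a decay-like damping exp(-alpha*L/E) of all terms;
    amp plays the role of the oscillation amplitude (sin^2 2theta_23-like)."""
    d = 1.267 * DM31 * L / E
    return math.exp(-alpha * L / E) * (1.0 - amp * math.sin(d) ** 2)

def delta_chi2(alpha_sim):
    """Best no-damping fit to damped pseudo-data, marginalized over amp."""
    data = [p_mumu(E, 1.0, alpha_sim) for E in ENERGIES]
    return min(
        sum(((p_mumu(E, a) - y) / SIGMA) ** 2 for E, y in zip(ENERGIES, data))
        for a in [0.8 + 0.002 * k for k in range(201)]
    )

# Scan alpha [GeV/km] upward until the no-damping fit is excluded at 3 sigma:
alpha = next(a * 1e-6 for a in range(1, 200) if delta_chi2(a * 1e-6) >= 9.0)
print(alpha)  # the schematic 3 sigma sensitivity limit
```

Despite its crudeness, such a toy scan lands at $\alpha$ of order $10^{-5}$–$10^{-4} \, \mathrm{GeV/km}$, the same ballpark as the tabulated decay limits below; the full analysis differs through real event statistics, the appearance channels, and the complete parameter marginalization.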
The resulting sensitivity limits of this analysis are shown in [Table]{} \[tab:results\], where the columns correspond to the simulated models and the rows correspond to the fit models. These results are computed for ${\sin^2(2 \theta_{13})}=0$. It turns out that for a simulated value of ${\sin^2(2 \theta_{13})}$ close to the CHOOZ bound ${\sin^2(2 \theta_{13})}\simeq 0.1$, the limits on $\alpha$ would improve by up to about 30 % \[depending on model and value of ${\sin^2(2 \theta_{13})}$\] because of the additional contribution from the appearance signal.[^5] Let us first discuss the resulting sensitivities against the standard neutrino oscillation scenario for some simple cases. For decoherence, the obtained numbers indeed correspond very well to the energy resolution of the detector, which is about 15 % of the neutrino energy, [[*i.e.*]{}]{}, $1.5 \, \mathrm{GeV}$ for a neutrino energy of $E=10 \,
\mathrm{GeV}$, where the major effect takes place ([[*cf.*]{}]{}, [[Fig.]{} \[fig:allprobs\]]{}, left plot). Since the neutrino oscillation probability changes sufficiently fast in this region, the measurement is limited by the energy resolution of the detector. In the wave packet approach, the bound against the “no damping” model $\sigma_E \lesssim 1.67 \, \mathrm{GeV}$ translates into $\sigma_x
\gtrsim 6 \cdot 10^{-17} \, \mathrm{m}$. This rather small number (sub-nucleon size) means that the bound is not very useful for wave packet decoherence, since it is virtually impossible to create such sharply peaked wave packets. However, there might be other energy averaging effects that can be constrained. For decay, we obtain a limit against the standard oscillation scenario that is comparable to the current neutrino lifetime limit for $m_3$. Note that we have included all correlations with the neutrino oscillation parameters in this limit. However, the limit would be a factor of two weaker if we considered only the decay of $m_3$ instead of all mass eigenstates. Since there are quite strong bounds on the $m_1$ and $m_2$ lifetimes from supernova and solar neutrino observations, this factor-of-two estimate should be a very good approximation to the actual limit. For the oscillation signature, the obtained limits are of the order of magnitude $5 \cdot 10^{-7} \, \mathrm{eV}^4$, which corresponds to $(\Delta m_{43}^2)^2$ times the active-sterile mixing in our estimate for a possible mass scheme ([[*cf.*]{}]{}, [Sec.]{} \[sec:oscsteriles\]). Considering the $\Delta m_{43}^2$ dependence, this is in fact not a very strong bound. However, note that we have taken into account the full parameter correlation, [[*i.e.*]{}]{}, this effect could not come from ${\sin^2(2 \theta_{13})}$ or any other standard parameter.
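The quoted conversion is simply the Heisenberg relation for a minimal-uncertainty wave packet, $\sigma_x \approx \hbar c / (2 \sigma_E)$ (the factor of 2 is convention dependent):

```python
# hbar * c = 197.327 MeV fm = 197.327e-18 GeV m
HBAR_C = 197.327e-18   # GeV * m

sigma_E = 1.67                      # GeV, the bound quoted in the text
sigma_x = HBAR_C / (2.0 * sigma_E)  # minimal-uncertainty wave packet size
print(sigma_x)   # ~5.9e-17 m, i.e. the quoted sigma_x >~ 6e-17 m (sub-nucleon)
```

For comparison, a nucleon radius is about $0.9 \cdot 10^{-15} \, \mathrm{m}$, which is why the text calls this bound sub-nucleon in size.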
![\[fig:corrplot\] The impact of different correlations on the statistics (and systematics) sensitivity limit of the model dependent parameter $\alpha$ ($3 \sigma$), where the horizontal axis represents multiples of the statistics (and systematics) sensitivity limit. The group captions refer to the simulated models and the bar labels to the fit models, where only the fit models are shown which affect the sensitivity limit more than by 5 %. The dark bars represent the correlations with the neutrino oscillation parameters (fit parameter $\alpha=0$ fixed) and the light bars indicate the additional change if the model specific fit parameter $\alpha$ is marginalized over. The lowest light bar extends to $37$.](corrplot){width="10cm"}
In order to discuss the general identification problem among different damping signatures, some information can be obtained from [Table]{} \[tab:results\]. In addition, in [[Fig.]{} \[fig:corrplot\]]{} we show the impact of the correlations with the standard neutrino oscillation parameters (dark bars) as well as the additional correlation with the fit model parameter $\alpha$ (light bars) on the $\alpha$ sensitivity limit for the simulated models from [Table]{} \[tab:models\]. The horizontal axis shows the ratio of the $\alpha$ sensitivity limit including correlations to the one from statistics and systematics only (which corresponds to $1$), where we only include fit models with relevant model parameter contributions. Two models are highly correlated if a possible signature in one model can be compensated by a change of parameter(s) in the other. Since we include the standard neutrino oscillation scenario in all models, a small change in the fit neutrino oscillation parameters might also compensate a damping signature within the measurement precision of the experiment. Therefore, for all signatures, we include the standard neutrino oscillation parameter correlation as dark bars, [[*i.e.*]{}]{}, the dark bars represent the fit against the standard neutrino oscillation scenario (for fixed fit parameter $\alpha$), and the light bars are a measure of the additional difficulty of distinguishing a non-standard signature from those of other possible non-standard models. The interpretation of these bars is as follows: The right edges of the dark bars mark the limit for $\alpha$ (as a multiple of the statistics limit) beyond which the non-standard signature could be distinguished from the standard neutrino oscillation case at the $3 \sigma$ confidence level. However, if $\alpha$ lies within one of the light bar ranges, then it could not be uniquely identified, since it could equally well be the non-standard signature corresponding to that bar.
From [[Fig.]{} \[fig:corrplot\]]{}, we make a number of interesting observations:
- Signatures which have negative $\gamma$’s (Absorption and Quantum decoherence I) are almost unaffected by correlations with the neutrino oscillation parameters, [[*i.e.*]{}]{}, they cannot be explained by different neutrino oscillation parameter values. In these cases, the spectrum is more suppressed at large values of $E$ than at small values, which means that the signature behaves very differently from an oscillation signature corresponding to $\gamma = 2$. However, it is difficult to identify which of these models is realized.
- Signatures with $\gamma=2$ (Oscillations into $\nu_s$ and Quantum decoherence II) are highly affected by correlations with the standard neutrino oscillation parameters, since the signatures have an energy dependence similar to the oscillation signature. Similar signatures, such as decay, can enhance this correlation.
- Unique signatures (Wave packet decoherence and Neutrino decay) can easily be distinguished from all the other models. Although there could be some correlations with similar signatures for neutrino decay, the absolute impact on the $\alpha$ sensitivity limit is comparatively small (up to a factor of three).
Summary and conclusions {#sec:summary}
=======================
We have introduced exponential damping factors in the neutrino oscillation probabilities, which lead to distinctive signatures, [[*i.e.*]{}]{}, energy dependent damping effects in the energy spectrum. These damping factors are one approach to test non-oscillation effects on the level of the neutrino oscillation probabilities. They can be motivated by many different models, such as intrinsic wave packet decoherence, neutrino decay, oscillations into sterile neutrinos, neutrino absorption, quantum decoherence, [[*etc.*]{}]{}. They describe, on a rather abstract level, the second order contributions of possible small “non-standard” corrections to the three-flavor neutrino oscillation framework (in vacuum as well as in matter). As opposed to tests of probability conservation, the damping factors can also describe a damping of the oscillating terms (which preserves the total probability), and their energy dependence carries information on the type of effect. We have demonstrated how damping factors can modify the neutrino oscillation probabilities relevant for future high-precision short- and long-baseline experiments, since these experiments might be most sensitive to very small spectral effects.
As one application, we have shown that decoherence-like damping signatures can severely modify the interpretation of experiments, where we have chosen wave packet decoherence damping at new short-baseline reactor experiments as an example. In this case, two competing small effects, namely the effect of a non-zero value of ${\sin^2(2 \theta_{13})}$ and a damping contribution, might be mixed up. In particular, the damping could fake a value of ${\sin^2(2 \theta_{13})}$ which is much smaller than the value provided by Nature. Such a ${\sin^2(2 \theta_{13})}$ suppression effect can either be intrinsic (such as quantum decoherence), experiment dependent (such as some averaging effect not taken into account), or both (such as wave packet decoherence related to the production and detection processes). Intrinsic effects will be observable by all types of experiments, which means that very stringent limits are available from existing data and that future experiments will test the consistency of the picture. On the other hand, experiment dependent effects can only be checked by complementary techniques measuring the same quantity. One such complementary pair has, in the past, been the solar and long-baseline reactor experiments. In the future, it will therefore be very important to measure ${\sin^2(2 \theta_{13})}$ by reactor experiments and superbeams as complementary techniques, since one of them alone could be misled by such experiment dependent effects. Ultimately, the LSND result could turn out to be a strong hint for such an experiment dependent effect if it is rejected by the MiniBooNE experiment.
One of the most interesting features of damping signatures is their characteristic spectral (energy) dependence, which can act as a “fingerprint” for many sources of non-oscillation effects. For example, specific signatures could point to interesting new physics beyond the standard model. We have therefore discussed, for the example of neutrino factories, how large the effects from different damping signatures have to be in order to be identified and how well these damping signatures could be distinguished. In some cases, such damping signatures can be compensated by a shift of the neutrino oscillation parameters, which means that, given such a damping effect, an erroneous determination of these parameters is quite likely. However, if the damping effects are strong enough, then it will be possible to establish non-oscillation effects. Once such a damping effect is established, it will be very interesting to know from which non-standard mechanism it actually arises. Given this identification problem, we have found that signatures with a damping similar to $\exp ( - \alpha L^\beta/ E^\gamma)$, $\gamma = 1,2,\hdots$, are strongly correlated (peaking at $\gamma=2$) with the standard neutrino oscillation parameters, [[*i.e.*]{}]{}, it is difficult to distinguish them from small adjustments in the neutrino oscillation parameters. However, damping signatures similar to $\exp (
- \alpha L^\beta E^2 )$ can be very easily disentangled from the neutrino oscillation parameters, but it is difficult to distinguish them from each other. It is also extremely difficult to establish a damping of the oscillations against a damping of the probabilities with the same spectral index $\gamma$ because of the correlations with the neutrino oscillation parameters.
Finally, we conclude that spectral tests of damping signatures in neutrino oscillation probabilities are an important test of the consistency of the three-flavor neutrino oscillation picture. If any deviation from this picture is found, then the most important question will be what sort of effect we are dealing with. Exactly this information could be provided by the spectral dependence of the damping signature, which means that this approach could be an important test of physics beyond the standard model.
Acknowledgments {#acknowledgments .unnumbered}
---------------
We would like to thank John Bahcall, Manfred Lindner, and Thomas Schwetz for useful discussions.
T.O. and W.W. would like to thank the IAS and the KTH respectively for the warm hospitality and the financial support during their respective research visits.
This work was supported by the Royal Swedish Academy of Sciences (KVA), the Swedish Research Council (Vetenskapsr[å]{}det), Contract Nos. 621-2001-1611, 621-2002-3577, the G[ö]{}ran Gustafsson Foundation, the Magnus Bergvall Foundation, the W. M. Keck Foundation, and NSF grant PHY-0070928.
[10]{}
Super-Kamiokande, Y. Fukuda et al., *Evidence for oscillation of atmospheric neutrinos*, Phys. Rev. Lett. **81** (1998), 1562, `hep-ex/9807003`.
SNO, Q. R. Ahmad et al., *Direct evidence for neutrino flavor transformation from neutral-current interactions in the [Sudbury Neutrino Observatory]{}*, Phys. Rev. Lett. **89** (2002), 011301, `nucl-ex/0204008`.
SNO, S. N. Ahmed et al., *Measurement of the total active $^8$[B]{} solar neutrino flux at the [Sudbury Neutrino Observatory]{} with enhanced neutral current sensitivity*, Phys. Rev. Lett. **92** (2004), 181301, `nucl-ex/0309004`.
K2K, M. H. Ahn et al., *Indications of neutrino oscillation in a 250-km long-baseline experiment*, Phys. Rev. Lett. **90** (2003), 041801, `hep-ex/0212007`.
KamLAND, K. Eguchi et al., *First results from [KamLAND]{}: Evidence for reactor anti- neutrino disappearance*, Phys. Rev. Lett. **90** (2003), 021802, `hep-ex/0212021`.
KamLAND, T. Araki et al., *Measurement of neutrino oscillation with [KamLAND]{}: Evidence of spectral distortion*, `hep-ex/0406035`.
Super-Kamiokande, Y. Ashie et al., *Evidence for an oscillatory signature in atmospheric neutrino oscillation*, Phys. Rev. Lett. **93** (2004), 101801, `hep-ex/0404034`.
C. Giunti and C. W. Kim, *Coherence of neutrino oscillations in the wave packet approach*, Phys. Rev. **D58** (1998), 017301, `hep-ph/9711363`.
C. Giunti, *Coherence and wave packets in neutrino oscillations*, Found. Phys. Lett. **17** (2004), 103, `hep-ph/0302026`.
C. Giunti, C. W. Kim, and U. W. Lee, *Coherence of neutrino oscillations in vacuum and matter in the wave packet treatment*, Phys. Lett. **B274** (1992), 87.
W. Grimus, P. Stockinger, and S. Mohanty, *The field-theoretical approach to coherence in neutrino oscillations*, Phys. Rev. **D59** (1999), 013011, `hep-ph/9807442`.
C. Y. Cardall, *Coherence of neutrino flavor mixing in quantum field theory*, Phys. Rev. **D61** (2000), 073006, `hep-ph/9909332`.
J. N. Bahcall, N. Cabibbo, and A. Yahil, *Are neutrinos stable particles?*, Phys. Rev. Lett. **28** (1972), 316.
V. Barger, W. Y. Keung, and S. Pakvasa, *Majoron emission by neutrinos*, Phys. Rev. **D25** (1982), 907.
J. W. F. Valle, *Fast neutrino decay in horizontal [Majoron]{} models*, Phys. Lett. **B131** (1983), 87.
V. Barger, J. G. Learned, S. Pakvasa, and T. J. Weiler, *Neutrino decay as an explanation of atmospheric neutrino observations*, Phys. Rev. Lett. **82** (1999), 2640, `astro-ph/9810121`.
S. Pakvasa, *Do neutrinos decay?*, AIP Conf. Proc. **542** (2000), 99, `hep-ph/0004077`.
V. Barger et al., *Neutrino decay and atmospheric neutrinos*, Phys. Lett. **B462** (1999), 109, `hep-ph/9907421`.
M. Lindner, T. Ohlsson, and W. Winter, *A combined treatment of neutrino decay and neutrino oscillations*, Nucl. Phys. **B607** (2001), 326, `hep-ph/0103170`.
M. Lindner, T. Ohlsson, and W. Winter, *Decays of supernova neutrinos*, Nucl. Phys. **B622** (2002), 429, `astro-ph/0105309`.
A. Strumia, *Interpreting the [LSND]{} anomaly: Sterile neutrinos or [CPT]{}-violation or …?*, Phys. Lett. **B539** (2002), 91, `hep-ph/0201134`.
M. Maltoni, T. Schwetz, M. A. Tortola, and J. W. F. Valle, *Status of global fits to neutrino oscillations*, New J. Phys. **6** (2004), 122, `hep-ph/0405172`.
A. De Rújula, S. L. Glashow, R. R. Wilson, and G. Charpak, *Neutrino exploration of the [Earth]{}*, Phys. Rept. **99** (1983), 341.
E. Lisi, A. Marrone, and D. Montanino, *Probing possible decoherence effects in atmospheric neutrino oscillations*, Phys. Rev. Lett. **85** (2000), 1166, `hep-ph/0002053`, and references therein.
F. Benatti and R. Floreanini, *Open system approach to neutrino oscillations*, JHEP **02** (2000), 032, `hep-ph/0002221`.
S. L. Adler, *Comment on a proposed [Super-Kamiokande]{} test for quantum gravity induced decoherence effects*, Phys. Rev. **D62** (2000), 117901, `hep-ph/0005220`.
T. Ohlsson, *Equivalence between neutrino oscillations and neutrino decoherence*, Phys. Lett. **B502** (2001), 159, `hep-ph/0012272`.
F. Benatti and R. Floreanini, *Massless neutrino oscillations*, Phys. Rev. **D64** (2001), 085015, `hep-ph/0105303`.
A. M. Gago, E. M. Santos, W. J. C. Teves, and R. Zukanovich Funchal, *A study on quantum decoherence phenomena with three generations of neutrinos*, `hep-ph/0208166`.
G. Barenboim and N. E. Mavromatos, *[CPT]{} violating decoherence and [LSND]{}: A possible window to [Planck]{} scale physics*, JHEP **01** (2005), 034, `hep-ph/0404014`.
G. Barenboim and N. E. Mavromatos, *Decoherent neutrino mixing, dark energy and matter-antimatter asymmetry*, Phys. Rev. **D70** (2004), 093015, `hep-ph/0406035`.
D. Morgan, E. Winstanley, J. Brunner, and L. F. Thompson, *Probing quantum decoherence in atmospheric neutrino oscillations with a neutrino telescope*, `astro-ph/0412618`.
J. W. F. Valle, *Standard and non-standard neutrino oscillations*, J. Phys. **G29** (2003), 1819, and references therein.
LSND, A. Aguilar et al., *Evidence for neutrino oscillations from the observation of $\bar\nu_e$ appearance in a $\bar\nu_\mu$ beam*, Phys. Rev. **D64** (2001), 112007, `hep-ex/0104049`.
V. Barger, S. Geer, and K. Whisnant, *Neutral currents and tests of three-neutrino unitarity in long-baseline experiments*, New J. Phys. **6** (2004), 135, `hep-ph/0407140`.
A. Donini, D. Meloni, and P. Migliozzi, *The silver channel at the neutrino factory*, Nucl. Phys. **B646** (2002), 321, `hep-ph/0206034`.
Y. Farzan and A. Y. Smirnov, *Leptonic unitarity triangle and [CP]{}-violation*, Phys. Rev. **D65** (2002), 113001, `hep-ph/0201105`.
H. Zhang and Z.-z. Xing, *Leptonic unitarity triangles in matter*, `hep-ph/0411183`.
P. Huber and J. W. F. Valle, *Non-standard interactions: Atmospheric versus neutrino factory experiments*, Phys. Lett. **B523** (2001), 151, `hep-ph/0108193`.
P. Huber, T. Schwetz, and J. W. F. Valle, *How sensitive is a neutrino factory to the angle $\theta_{13}$?*, Phys. Rev. Lett. **88** (2002), 101804, `hep-ph/0111224`.
P. Huber, T. Schwetz, and J. W. F. Valle, *Confusing non-standard neutrino interactions with oscillations at a neutrino factory*, Phys. Rev. **D66** (2002), 013006, `hep-ph/0202048`.
J. N. Bahcall, *The central temperature of the [Sun]{} can be measured via the ${}^7$[Be]{} solar neutrino line*, Phys. Rev. Lett. **71** (1993), 2369, `hep-ph/9309292`.
J. N. Bahcall, *The ${}^7$[Be]{} solar neutrino line: A reflection of the central temperature distribution of the [Sun]{}*, Phys. Rev. **D49** (1994), 3923, `astro-ph/9401024`.
E. Fiorini, *Cryogenic thermal detectors in subnuclear physics and astrophysics*, Physica B: Condensed Matter **169** (1991), 388.
A. Alessandrello et al., *A bromine cryogenic detector for solar and non solar neutrino spectroscopy*, Astropart. Phys. **3** (1995), 239.
Z. G. Berezhiani and M. I. Vysotsky, *Neutrino decay in matter*, Phys. Lett. **B199** (1987), 281.
C. Giunti, C. W. Kim, U. W. Lee, and W. P. Lam, *Majoron decay of neutrinos in matter*, Phys. Rev. **D45** (1992), 1557.
E. K. Akhmedov, R. Johansson, M. Lindner, T. Ohlsson, and T. Schwetz, *Series expansions for three-flavor neutrino oscillation probabilities in matter*, JHEP **04** (2004), 078, `hep-ph/0402175`.
K. Kiers, S. Nussinov, and N. Weiss, *Coherence effects in neutrino oscillations*, Phys. Rev. **D53** (1996), 537, `hep-ph/9506271`.
C. Giunti, *Coherence in neutrino interactions*, `hep-ph/0302045`.
H. J. Lipkin, *Quantum mechanics of neutrino detectors determine coherence and phases in oscillation experiments*, `hep-ph/0312292`.
S. Dutta, R. Gandhi, and B. Mukhopadhyaya, *$\nu_\tau$ appearance searches using neutrino beams from muon storage rings*, Eur. Phys. J. **C18** (2000), 405, `hep-ph/9905475`.
E. A. Paschos and J. Y. Yu, *Neutrino interactions in oscillation experiments*, Phys. Rev. **D65** (2002), 033002, `hep-ph/0107261`.
S. Kretzer and M. H. Reno, *Tau neutrino deep inelastic charged current interactions*, Phys. Rev. **D66** (2002), 113007, `hep-ph/0208187`.
R. Gandhi, C. Quigg, M. H. Reno, and I. Sarcevic, *Neutrino interactions at ultrahigh energies*, Phys. Rev. **D58** (1998), 093009, `hep-ph/9807264`.
G. Lindblad, *On the generators of quantum dynamical semigroups*, Commun. Math. Phys. **48** (1976), 119.
A. M. Gago, E. M. Santos, W. J. C. Teves, and R. Zukanovich Funchal, *Quantum dissipative effects and neutrinos: Current constraints and future perspectives*, Phys. Rev. **D63** (2001), 073001, `hep-ph/0009222`.
A. M. Gago, E. M. Santos, W. J. C. Teves, and R. Zukanovich Funchal, *On the quest for the dynamics of $\nu_\mu \rightarrow \nu_\tau$ conversion*, Phys. Rev. **D63** (2001), 113013, `hep-ph/0010092`.
J. Schechter and J. W. F. Valle, *Neutrino masses in [$SU(2) \times U(1)$]{} theories*, Phys. Rev. **D22** (1980), 2227.
J. Schechter and J. W. F. Valle, *Neutrino-oscillation thought experiment*, Phys. Rev. **D23** (1981), 1666.
G. R. Dvali and A. Y. Smirnov, *Probing large extra dimensions with neutrinos*, Nucl. Phys. **B563** (1999), 63, `hep-ph/9904211`.
R. N. Mohapatra, S. Nandi, and A. P[é]{}rez-Lorenzana, *Neutrino masses and oscillations in models with large extra dimensions*, Phys. Lett. **B466** (1999), 115, `hep-ph/9907520`.
R. Barbieri, P. Creminelli, and A. Strumia, *Neutrino oscillations and large extra dimensions*, Nucl. Phys. **B585** (2000), 28, `hep-ph/0002199`.
R. N. Mohapatra and A. P[é]{}rez-Lorenzana, *Three flavour neutrino oscillations in models with large extra dimensions*, Nucl. Phys. **B593** (2001), 451, `hep-ph/0006278`.
T. H[ä]{}llgren, T. Ohlsson, and G. Seidl, *Neutrino oscillations in deconstructed dimensions*, JHEP (to be published), `hep-ph/0411312`.
CHOOZ, M. Apollonio et al., *Limits on neutrino oscillations from the [CHOOZ]{} experiment*, Phys. Lett. **B466** (1999), 415, `hep-ex/9907037`.
CHOOZ, M. Apollonio et al., *Search for neutrino oscillations on a long base-line at the [CHOOZ]{} nuclear power station*, Eur. Phys. J. **C27** (2003), 331, `hep-ex/0301017`.
F. Ardellier et al., *Letter of intent for [Double Chooz]{}: A search for the mixing angle $\theta_{13}$*, `hep-ex/0405032`.
T. Schwetz, *Variations on [KamLAND]{}: Likelihood analysis and frequentist confidence regions*, Phys. Lett. **B577** (2003), 120, `hep-ph/0308003`.
H. Minakata, H. Sugiyama, O. Yasuda, K. Inoue, and F. Suekane, *Reactor measurement of $\theta_{13}$ and its complementarity to long-baseline experiments*, Phys. Rev. **D68** (2003), 033017, `hep-ph/0211111`.
P. Huber, M. Lindner, T. Schwetz, and W. Winter, *Reactor neutrino experiments compared to superbeams*, Nucl. Phys. **B665** (2003), 487, `hep-ph/0303232`.
G. L. Fogli, E. Lisi, A. Marrone, and D. Montanino, *Status of atmospheric $\nu_\mu \rightarrow \nu_\tau$ oscillations and decoherence after the first [K2K]{} spectral data*, Phys. Rev. **D67** (2003), 093006, `hep-ph/0303064`.
J. N. Bahcall, M. C. Gonzalez-Garcia, and C. Pe$\tilde{\mathrm{n}}$a-Garay, *Solar neutrinos before and after [Neutrino 2004]{}*, JHEP **08** (2004), 016, `hep-ph/0406294`.
A. Bandyopadhyay, S. Choubey, S. Goswami, S. T. Petcov, and D. P. Roy, *Update of the solar neutrino oscillation analysis with the 766-[Ty]{} [KamLAND]{} spectrum*, (2004), `hep-ph/0406328`.
M. C. Gonzalez-Garcia and C. Pe$\tilde{\mathrm{n}}$a-Garay, *On the effect of $\theta_{13}$ on the determination of solar oscillation parameters at [KamLAND]{}*, Phys. Lett. **B527** (2002), 199, `hep-ph/0111432`.
V. D. Barger, D. Marfatia, and B. P. Wood, *Resolving the solar neutrino problem with [KamLAND]{}*, Phys. Lett. **B498** (2001), 53, `hep-ph/0011251`.
P. Huber, M. Lindner, and W. Winter, *Simulation of long-baseline neutrino oscillation experiments with [GLoBES]{}*, Comp. Phys. Comm. (to be published), `hep-ph/0407333`.
P. Huber, M. Lindner, and W. Winter, *Superbeams versus neutrino factories*, Nucl. Phys. **B645** (2002), 3, `hep-ph/0204352`.
R. J. Geller and T. Hara, *Geophysical aspects of very long baseline neutrino experiments*, Phys. Rev. Lett. **49** (2001), 98, `hep-ph/0111342`.
T. Ohlsson and W. Winter, *The role of matter density uncertainties in the analysis of future neutrino factory experiments*, Phys. Rev. **D68** (2003), 073007, `hep-ph/0307178`.
S. V. Panasyuk, *[REM (Reference Earth Model)]{} web page*, , 2000.
[^1]: Although it will be possible to describe some of our effects at the Hamiltonian level, the Hamiltonian will no longer be Hermitian.
[^2]: For instance, some effects at the Hamiltonian level, such as neutrino absorption, would require a full re-diagonalization of the effective Hamiltonian with the absorption terms included; see the section “Neutrino absorption” below.
[^3]: In general, we do not change the symbol for $\alpha$ if it is exactly the same as the one in [[Eq.]{} (\[equ:dfactor\])]{}. However, if there are additional factors absorbed in $\alpha$, then we re-define the name (such as for wave packet decoherence).
[^4]: Because of the higher $\tau$ production threshold, the $\nu_e$ and $\nu_\mu$ cross sections are in fact considerably larger than the $\nu_\tau$ cross section [@Dutta:1999jg; @Paschos:2001np; @Kretzer:2002fr]. However, at these low energies the standard absorption effects are in any case small.
[^5]: We do not show these results, since the exact interpretation of the appearance signal is model dependent. In addition, matter effects are strong in this case and they depend on the treatment of those in the context of the damping model.
---
abstract: 'A systematic error in the extraction of $\sin^2 \theta_W$ from nuclear deep inelastic scattering of neutrinos and antineutrinos arises from higher-twist effects associated with nuclear shadowing. We explain that these effects imply a correction to the recently reported significant deviation from the Standard Model that is potentially as large as the deviation itself, and of a sign that cannot be determined without an extremely careful study of the data set used to model the input parton distribution functions.'
author:
- |
Gerald A. Miller\
University of Washington Seattle, WA 98195-1560
- |
A. W. Thomas\
Department of Physics and Mathematical Physics, and Special Research Centre for the Subatomic Structure of Matter\
Univ. of Adelaide, Adelaide 5005, Australia
title: 'Shadowing Corrections and the Precise Determination of Electroweak Parameters in Neutrino-Nucleon Scattering'
---
In a recent and stimulating paper[@Zeller:2001hh], the NuTeV collaboration reported a determination of $\sin^2 \theta_W$, based on a comparison of charged and neutral current neutrino interactions with a nuclear target (Fe), which differs from the Standard Model prediction by three standard deviations. In view of the importance of such a result it is vital that the sources of systematic error be clearly identified and examined. Here we explain that there is a nuclear correction, arising from the higher-twist effects of nuclear shadowing[@Boros:1998es; @Boros:1998mt], for which no allowance has been made in the NuTeV analysis. This correction may well be of the same size as the reported deviation.
The measurement under consideration involves separate measurements of the ratios of neutral current (NC) to charged current (CC) cross sections on Fe for $\nu$ and $\bar \nu$. The best values of ${\mbox{$\sin^2\theta_W$}}$ and $\rho_0$ are extracted from the precisely determined ratios, but the nuclear effects must first be removed. Because a substantial fraction of the NuTeV data in the region $x$ below 0.1 is at relatively low $Q^2$ (even though the average $Q^2$ is 16 GeV$^2$), one expects a significant shadowing contribution from vector meson dominance (VMD) [@Kwiecinski:ys; @Melnitchouk:1995am], which is of higher twist. As explained by Boros [*et al.*]{} [@Boros:1998es; @Boros:1998mt], the effect of the VMD contribution to nuclear shadowing in neutrino interactions is substantial, and leads to a reduction of the CC $\nu$ cross section by about 50% compared with the reduction found for photons. (Briefly, the VMD contribution to shadowing is dominated by the $\rho$ meson and $f^2_{\rho^+} = 2 f^2_{\rho^0}$, whereas the CC to photon cross sections are in the ratio 18/5.) Together with a full NLO analysis of the data, this was important in reconciling the NuTeV and NMC data without any need for substantial charge symmetry violation of the parton distributions [@Boros:1999fy]. A recent re-examination of the role of vector meson dominance in nuclear shadowing at low $Q^2$ finds that models (such as that used here) which incorporate both vector-meson and partonic mechanisms are consistent with both the magnitude and the $Q^2$ slope of the shadowing data [@Melnitchouk:2002ud].
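The "about 50%" figure quoted above can be checked with one line of arithmetic: the VMD (shadowing) part of the CC cross section carries the isospin factor $f^2_{\rho^+} = 2 f^2_{\rho^0}$, while the *total* CC cross section is $18/5$ times the photon one, so the fractional shadowing for CC neutrinos relative to photons is $2/(18/5) = 5/9 \approx 0.56$. A minimal sketch (nothing here is specific to the paper's data set):

```python
from fractions import Fraction

# VMD (shadowing) strength of the charged current relative to the photon:
# f_{rho+}^2 = 2 f_{rho0}^2, the isospin factor quoted in the text.
vmd_ratio = Fraction(2, 1)

# Total CC-to-photon cross-section ratio (the familiar 18/5).
total_ratio = Fraction(18, 5)

# Fractional shadowing for CC neutrinos relative to photons.
cc_shadowing_fraction = vmd_ratio / total_ratio
print(cc_shadowing_fraction, float(cc_shadowing_fraction))  # -> 5/9 (about 0.56)
```

Exact rational arithmetic makes the $5/9$ explicit, matching the "reduction ... by about 50%" statement.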
For present purposes we also need to consider this higher-twist effect of vector-meson shadowing for ${\overline{\nu}}$ interactions. These involve predominantly anti-quarks, for which the shadowing effect is larger by a factor of three or so. However, the VMD contribution to shadowing for neutral current interactions is $1/2$ of that for charged current interactions, because $Z$ conversion to a $\rho^0$ occurs with a strength $1/2$ of that for $W^+\to\rho^+$.
Let us examine how these differences in shadowing effects influence the extraction of ${\mbox{$\sin^2\theta_W$}}$. Suppose the nuclear cross section for NC interactions of neutrinos is larger than that for CC interactions by a factor of $1+{1\over2}\epsilon$, and that the one for anti-neutrinos is larger by a factor of $1+{1\over2}\overline{\epsilon}$, with $\overline{\epsilon}$ expected to be substantially larger than $\epsilon$. Then the nuclear ratios of neutral current (NC) to charged current (CC) cross sections are $$\begin{aligned}
&& R^{\nu}_A \equiv \frac{\sigma_A(\nu A\rightarrow\nu X)}{\sigma_A(\nu A\rightarrow\ell^{-}X)}
= \frac{\sigma(\nu N\rightarrow\nu X)}{\sigma(\nu N\rightarrow\ell^{-}X)}\left(1+{1\over2}\epsilon\right)
= \left(1+{1\over2}\epsilon\right)\left(g_L^2+r\,g_R^2\right),\\
&& R^{\overline{\nu}}_A \equiv \frac{\sigma_A(\overline{\nu}A\rightarrow\overline{\nu}X)}{\sigma_A(\overline{\nu}A\rightarrow\ell^{+}X)}
= \left(1+{1\over2}\overline{\epsilon}\right)\left(g_L^2+r^{-1}g_R^2\right),
\label{eqn:ls}\end{aligned}$$ where $r=\sigma(\overline{\nu}N\rightarrow\ell^{+}X)/\sigma(\nu N\rightarrow\ell^{-}X)$, $r_A=r\left(1+{1\over2}\epsilon\right)/\left(1+{1\over2}\overline{\epsilon}\right)$, $g_L^2=1/2-{\mbox{$\sin^2\theta_W$}}+{5\over9}\sin^4\theta_W$, and $g_R^2={5\over9}\sin^4\theta_W$. Equations (1) and (2) tell us that the nuclear-shadowing corrections for $R^{\nu}_A$ and $R^{\overline{\nu}}_A$ are not the same, and that the extraction of ${\mbox{$\sin^2\theta_W$}}$ requires separate knowledge of $\epsilon$ and $\overline{\epsilon}$.
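As a numerical illustration of Eqs. (1) and (2), the couplings and the shadowing sensitivity of the ratios can be evaluated directly. The inputs $\sin^2\theta_W \approx 0.2277$ (the NuTeV central value) and $r \approx 0.5$ (a typical antineutrino-to-neutrino CC cross-section ratio) are illustrative assumptions, not numbers taken from the text above:

```python
# Sketch: evaluate Eqs. (1)-(2) for assumed inputs.
# sin^2(theta_W) = 0.2277 (NuTeV central value) and r = 0.5 are
# illustrative assumptions, not values quoted in this paper.
sin2 = 0.2277
gL2 = 0.5 - sin2 + (5.0 / 9.0) * sin2**2   # g_L^2 = 1/2 - s^2 + (5/9) s^4
gR2 = (5.0 / 9.0) * sin2**2                # g_R^2 = (5/9) s^4

r = 0.5  # sigma(nubar CC) / sigma(nu CC), typical deep-inelastic value

def R_nu(eps):
    """Eq. (1): nuclear NC/CC ratio for neutrinos with shadowing factor eps."""
    return (1.0 + 0.5 * eps) * (gL2 + r * gR2)

def R_nubar(eps_bar):
    """Eq. (2): nuclear NC/CC ratio for antineutrinos."""
    return (1.0 + 0.5 * eps_bar) * (gL2 + gR2 / r)

# A per-mille-level eps shifts R_nu at the per-mille level -- the same
# order as NuTeV's quoted uncertainties, which is the point at issue.
print(gL2, gR2, R_nu(0.006) - R_nu(0.0))
```

The shift produced by a shadowing factor $\epsilon$ scales linearly with $\epsilon/2$, which is why even sub-percent shadowing matters at NuTeV's precision.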
A detailed analysis of the NuTeV data requires that one model the ratios $R^{\nu}_A$ and $R^{\overline{\nu}}_A$ to a required accuracy of a fraction of a percent. This, in turn, requires an even more accurate knowledge of both the quark and antiquark parton distribution functions (pdfs). In general, the pdfs are derived from a global analysis of data from electron/muon, CC neutrino and NC neutrino deep inelastic scattering on protons, deuterons and nuclei. The range of $Q^2$, particularly at low $x$ ($x \leq 0.1$), can be quite low. Higher-twist shadowing corrections are almost universally ignored in global determinations of the pdfs; this is certainly the case for the pdfs used by NuTeV. Since the VMD shadowing corrections are different for electron scattering, CC neutrino scattering and NC neutrino scattering, the pdfs resulting from such a global analysis are at best an approximation to the true ones, with unknown systematic errors. Worse, one cannot simply add a shadowing correction to a simulation based on such global pdfs, as even the sign of the correction will depend on the particular data sets included in the analysis. Of course, the systematic errors encountered will not be serious for most purposes. However, in this case, where the signal of a deviation from the Standard Model is at the percent level, one must control this potential source of error extremely carefully.
We cannot undo the global analysis of pdfs by the NuTeV collaboration. However, we can estimate the order of magnitude of the shadowing corrections using Eqs. (1) and (2). The quantity $\epsilon$ is given as a function of $x$ for $Q^2=5\;{\rm GeV}^2$ by the dashed curve of Fig. 3b of Ref. [@Boros:1998es]; one reads off the deviation between the dashed curve and unity. Thus, e.g., $\epsilon(x,Q^2=5\;{\rm GeV}^2)\approx 0.041$ at $x=10^{-2}$. For other values of $Q^2$, one may use $\epsilon(x,Q^2)\approx \epsilon(x,Q^2=5\;{\rm GeV}^2)\left(\frac{m_\rho^2+5\;{\rm GeV}^2}{m_\rho^2+Q^2}\right)^2$. Furthermore, $\overline{\epsilon}/\epsilon\approx F_2^D(\nu)/F_2^D(\overline{\nu})\approx 2$. Using typical NuTeV values of $Q^2\approx 10\;{\rm GeV}^2$ and $x\approx 0.05$ leads to $\epsilon=0.006$ and $\overline{\epsilon}=0.012$ [@kevin]. Thus the nuclear-corrected ratios $R^{\nu}_A$ and $R^{\overline{\nu}}_A$ would be smaller than those reported in Ref. [@Zeller:2001hh] by 0.003 and 0.006, respectively. That these numbers represent important corrections can be seen immediately by examining the sources of error reported in Ref. [@Zeller:2001hh]: the total error for $R^{\nu}_A$ is 0.0013 and that for $R^{\overline{\nu}}_A$ is 0.00272. The effects of shadowing are larger than these quoted errors by a factor of two or three! It is clear that any analysis of nuclear data aimed at determining ${\mbox{$\sin^2\theta_W$}}$ must account for nuclear shadowing, but this has not been done in Ref. [@Zeller:2001hh].
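The $Q^2$ scaling of $\epsilon$ and the comparison against NuTeV's quoted errors can be reproduced in a few lines; $m_\rho \approx 0.77$ GeV is the standard $\rho$-meson mass, and the reference value $\epsilon(x=10^{-2}, Q^2=5\;{\rm GeV}^2) \approx 0.041$ is the one read off the figure cited in the text:

```python
m_rho2 = 0.770**2  # rho-meson mass squared, GeV^2

def eps_scale(q2, eps_at_5=0.041):
    """Higher-twist VMD scaling of eps from its value at Q^2 = 5 GeV^2."""
    return eps_at_5 * ((m_rho2 + 5.0) / (m_rho2 + q2)) ** 2

# Shadowing factors estimated in the text for typical NuTeV kinematics,
# and the resulting downward shifts of the NC/CC ratios (eps/2 each).
eps, eps_bar = 0.006, 0.012
dR_nu, dR_nubar = 0.5 * eps, 0.5 * eps_bar

# NuTeV's quoted total errors on R^nu_A and R^nubar_A.
err_nu, err_nubar = 0.0013, 0.00272
print(dR_nu / err_nu, dR_nubar / err_nubar)  # each roughly 2 (a factor of 2-3)
```

Both ratios come out between two and three, which is the "factor of two or three" stated above.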
It is necessary to carry out a more refined analysis of the data which properly incorporates the higher-twist components of nuclear shadowing, starting with the pdfs themselves. Such an analysis should also take into account the experimental acceptances as functions of $x$, $y$ and $Q^2$. Alternatively, one could drastically reduce the VMD contribution by restricting the data set to events with $Q^2>5\;{\rm GeV}^2$ – although we understand that this may present difficulties for NC events.
We find that the size of the shadowing effects is substantial and should be incorporated in the experimental analysis. It is true that the simplest estimate of this effect (ignoring the impact on the pdfs themselves, discussed above) is opposite in sign to that required to explain the deviation from the Standard Model. Thus the deviation from the Standard Model could be even [*larger*]{} than reported in Ref. [@Zeller:2001hh].
Finally, we note that several other effects that tend to reduce the discrepancy have been reported. The influence of charge symmetry breaking, arising from the mass difference between up and down quarks [@csb], accounts for about a third to a half of the deviation between NuTeV’s value of ${\mbox{$\sin^2\theta_W$}}$ and that of the Standard Model, in a model-independent manner [@ncsb]. Furthermore, it has been known for more than 20 years that parton distributions of nucleons bound in nuclei differ from those of free nucleons. Such effects [@emc] still present a considerable challenge to our understanding of nonperturbative QCD, and it is not inconceivable that they could eventually account for the entire deviation of ${\mbox{$\sin^2\theta_W$}}$.
It seems clear that the extraction of the value of ${\mbox{$\sin^2\theta_W$}}$ from neutrino-nucleus interactions involves handling several different types of corrections of different signs, including some that are difficult to evaluate with precision. The situation here may well be similar to many in strong interaction physics, in which a “cocktail” of effects is required [@taubes]. Considering that possible explanations in terms of new physics are not compelling [@Davidson:2002fb], considerable effort must be applied before concluding that the NuTeV result really demonstrates a deficiency of the Standard Model.
We thank the USDOE for partial support and M. Ramsey-Musolf for a useful remark. This work was also supported by the Australian Research Council.
G. P. Zeller [*et al.*]{}, Phys. Rev. Lett. [**88**]{}, 091802 (2002).

C. Boros, J. T. Londergan and A. W. Thomas, Phys. Rev. D [**59**]{}, 074021 (1999); Phys. Rev. D [**58**]{}, 114030 (1998).

J. Kwiecinski and B. Badelek, Phys. Lett. B [**208**]{}, 508 (1988).

W. Melnitchouk and A. W. Thomas, Phys. Rev. C [**52**]{}, 3373 (1995).

C. Boros, F. M. Steffens, J. T. Londergan and A. W. Thomas, Phys. Lett. B [**468**]{}, 161 (1999).

W. Melnitchouk and A. W. Thomas, Phys. Rev. C [**67**]{}, 038201 (2003) \[arXiv:hep-ex/0208016\].

For accuracy, one needs to know the fraction of the NuTeV events obtained at low $x$ and $Q^2$. Using $Q^2=10\;{\rm GeV}^2$ reflects a compromise between the average value $\sim20\;{\rm GeV}^2$, where shadowing is completely absent, and the lower values characteristic of the low-$x$ region, where shadowing is important.

G. A. Miller, B. M. Nefkens and I. Slaus, Phys. Rept. [**194**]{}, 1 (1990).

E. Sather, Phys. Lett. B [**274**]{}, 433 (1992); E. N. Rodionov, A. W. Thomas and J. T. Londergan, Mod. Phys. Lett. A [**9**]{}, 1799 (1994); J. T. Londergan and A. W. Thomas, Phys. Rev. D [**67**]{}, 111901 (2003) \[arXiv:hep-ph/0303155\]; J. T. Londergan and A. W. Thomas, Phys. Lett. B [**558**]{}, 132 (2003) \[arXiv:hep-ph/0301147\].

S. Kovalenko, I. Schmidt and J. J. Yang, Phys. Lett. B [**546**]{}, 68 (2002) \[arXiv:hep-ph/0207158\]; S. Kumano, Phys. Rev. D [**66**]{}, 111301 (2002) \[arXiv:hep-ph/0209200\].

S. A. Kulagin, Phys. Rev. D [**67**]{}, 091301 (2003) \[arXiv:hep-ph/0301045\].

G. Taubes, [*Nobel Dreams: Power, Deceit and the Ultimate Experiment*]{}, Random House, New York (1993).

S. Davidson, J. Phys. G [**29**]{}, 2001 (2003) \[arXiv:hep-ph/0209316\].